Research article

C^{1,1}-smoothness of constrained solutions in the calculus of variations with application to mean field games

  • Received: 23 June 2018 Accepted: 09 September 2018 Published: 31 October 2018
  • Received: 23 June 2018 Accepted: 09 September 2018 Published: 31 October 2018
  • We derive necessary optimality conditions for minimizers of regular functionals in the calculus of variations under smooth state constraints. This classical problem has been widely investigated in the literature. The novelty of our result lies in the fact that the presence of state constraints enters the Euler-Lagrange equations as a local feedback, which allows us to derive the C^{1,1}-smoothness of solutions. As an application, we discuss a constrained Mean Field Games problem, for which our optimality conditions allow us to construct Lipschitz relaxed solutions, thus improving an existence result due to the first two authors.

    Citation: Piermarco Cannarsa, Rossana Capuani, Pierre Cardaliaguet. C^{1,1}-smoothness of constrained solutions in the calculus of variations with application to mean field games[J]. Mathematics in Engineering, 2019, 1(1): 174-203. doi: 10.3934/Mine.2018.1.174



    The centrality of necessary conditions in optimal control is well known and has given rise to an immense literature in the fields of optimization and nonsmooth analysis; see, e.g., [3,16,17,29,33,35].

    In control theory, the celebrated Pontryagin Maximum Principle plays the role of the classical Euler-Lagrange equations in the calculus of variations. In the case of unrestricted state space, such conditions provide Lagrange multipliers---the so-called co-states---in the form of solutions to a suitable adjoint system satisfying a certain transversality condition. Among various applications of necessary optimality conditions is the deduction of further regularity properties for minimizers which, a priori, would just be absolutely continuous.

    When state constraints are present, a large body of results provide adaptations of the Pontryagin Principle by introducing appropriate corrections in the adjoint system. The price to pay for such extensions usually consists of reduced regularity for optimal trajectories which, due to constraint reactions, turn out to be just Lipschitz continuous while the associated co-states are of bounded variation, see [20].

    The maximum principle under state constraints was first established by Dubovitskii and Milyutin [17] (see also the monograph [35] for different forms of such a result). It may happen that the maximum principle is degenerate and does not yield much information (abnormal maximum principle). As explained in [8,10,18,19] in various contexts, the so-called "inward pointing condition" generally ensures the normality of the maximum principle under state constraints. In our setting (a calculus of variations problem, with constraints on positions but not on velocities), this will never be an issue. The maximum principle under state constraints generally involves an adjoint state which is the sum of a W1,1 map and a map of bounded variation. This latter map may be very irregular and have infinitely many jumps [32], which allows for discontinuities in optimal controls. However, under suitable assumptions (requiring regularity of the data and dynamics which are affine with respect to the control), it has been shown that optimal controls and the corresponding adjoint states are continuous, and even Lipschitz continuous: see the seminal work by Hager [22] (in the convex setting) and the subsequent contributions by Malanowski [31] and Galbraith and Vinter [21] (in much more general frameworks). Generalizations to less smooth frameworks can also be found in [9,18].

    Let Ω⊂Rn be a bounded open domain with C² boundary. Let Γ be the metric subspace of AC(0,T;Rn) defined by

    Γ={γ∈AC(0,T;Rn): γ(t)∈¯Ω,  ∀t∈[0,T]},

    with the uniform metric. For any x∈¯Ω, we set

    Γ[x]={γ∈Γ: γ(0)=x}.

    We consider the problem of minimizing the classical functional of the calculus of variations

    J[γ]=∫_0^T f(t,γ(t),˙γ(t))dt+g(γ(T)).

    Let U⊂Rn be an open set such that ¯Ω⊂U. Given x∈¯Ω, we consider the constrained minimization problem

    inf_{γ∈Γ[x]} J[γ],    where     J[γ]=∫_0^T f(t,γ(t),˙γ(t))dt+g(γ(T)), (1.1)

    where f:[0,T]×U×Rn→R and g:U→R. In this paper, we obtain a formulation of the necessary optimality conditions for the above problem which is particularly useful to study the regularity of minimizers. More precisely, given a minimizer γ∈Γ[x] of (1.1), we prove that there exists a Lipschitz continuous arc p:[0,T]→Rn such that

    ˙γ(t)=−DpH(t,γ(t),p(t))    for all t∈[0,T],
    ˙p(t)=DxH(t,γ(t),p(t))−Λ(t,γ,p)1_{∂Ω}(γ)DbΩ(γ(t))    for a.e. t∈[0,T], (1.2)

    where Λ is a bounded continuous function independent of γ and p (Theorem 3.1). From the above necessary conditions we derive a sort of maximal regularity, showing that any solution γ is of class C1,1. As is customary in this kind of problem, the proof relies on the analysis of a suitable penalized functional, which has the following form:

    inf_{γ∈AC(0,T;Rn), γ(0)=x} {∫_0^T [f(t,γ(t),˙γ(t))+(1/ϵ)dΩ(γ(t))]dt+(1/δ)dΩ(γ(T))+g(γ(T))}.

    Then, we show that all solutions of the penalized problem remain in ¯Ω (Lemma 3.7).

    A direct consequence of our necessary conditions is the Lipschitz regularity of the value function associated to (1.1) (Proposition 4.1).

    Our interest is also motivated by applications to mean field games, as we explain below. Mean field games (MFG) theory has been developed simultaneously by Lasry and Lions ([25,26,27]) and by Huang, Malhamé and Caines ([23,24]) in order to study differential games with an infinite number of rational players in competition. The simplest MFG model leads to systems of partial differential equations involving two unknown functions: the value function u of an optimal control problem of a typical player and the density m of the population of players. In the presence of state constraints, the usual construction of solutions to the MFG system has to be completely revised, because the minimizers of the problem lack many of the good properties of the unconstrained case. Such constructions are discussed in detail in [11], where a relaxed notion of solution to the constrained MFG problem was introduced following the so-called Lagrangian formulation (see [4,5,6,7,13,14]). In this paper, applying our necessary conditions, we deduce the existence of more regular solutions than those constructed in [11], assuming the data to be Lipschitz continuous.

    This paper is organised as follows. In Section 2, we introduce the notation and recall preliminary results. In Section 3, we derive necessary conditions for the constrained problem. Moreover, we prove the C1,1-smoothness of minimizers. In Section 4, we apply our necessary conditions to obtain the Lipschitz regularity of the value function for the constrained problem. Furthermore, we deduce the existence of more regular constrained MFG equilibria. Finally, in the Appendix, we prove a technical result on limiting subdifferentials.

    Throughout this paper we denote by |·| and ⟨·,·⟩, respectively, the Euclidean norm and scalar product in Rn. Let A∈Rn×n be a matrix. We denote by ||A|| the norm of A defined as follows:

    ||A||=max_{x∈Rn, |x|=1} |Ax|.

    For any subset S⊂Rn, ¯S stands for its closure, ∂S for its boundary, and Sc for Rn∖S. We denote by 1_S:Rn→{0,1} the characteristic function of S, i.e.,

    1_S(x)=1 if x∈S,    1_S(x)=0 if x∈Sc.

    We write AC(0,T;Rn) for the space of all absolutely continuous Rn-valued functions on [0,T], equipped with the uniform norm ||γ||∞=sup_{[0,T]}|γ(t)|. We observe that AC(0,T;Rn) is not a Banach space.

    Let U be an open subset of Rn. C(U) is the space of all continuous functions on U and Cb(U) is the space of all bounded continuous functions on U. Ck(U) is the space of all functions ϕ:U→R that are k-times continuously differentiable. Let ϕ∈C¹(U). The gradient vector of ϕ is denoted by Dϕ=(D_{x1}ϕ,…,D_{xn}ϕ), where D_{xi}ϕ=∂ϕ/∂xi. Let ϕ∈Ck(U) and let α=(α1,…,αn)∈Nn be a multiindex. We define Dαϕ=D^{α1}_{x1}⋯D^{αn}_{xn}ϕ. Ckb(U) is the space of all functions ϕ∈Ck(U) such that

    ||ϕ||_{k,∞}:=sup_{x∈U} Σ_{|α|≤k} |Dαϕ(x)|<∞.

    Let Ω be a bounded open subset of Rn with C² boundary. C1,1(¯Ω) is the space of all functions that are C¹ in a neighborhood U of ¯Ω and have locally Lipschitz continuous first order derivatives in U.

    The distance function from ¯Ω is the function dΩ:Rn→[0,+∞[ defined by

    dΩ(x):=inf_{y∈¯Ω}|x−y|     (x∈Rn).

    We define the oriented boundary distance from ∂Ω by

    bΩ(x)=dΩ(x)−d_{Ωc}(x)    (x∈Rn).
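    For a concrete example (our illustration, not from the text), take Ω the open unit ball in R²: then dΩ(x)=(|x|−1)⁺, d_{Ωc}(x)=(1−|x|)⁺, and the oriented distance reduces to bΩ(x)=|x|−1, negative inside Ω, zero on ∂Ω and positive outside.

```python
import numpy as np

def d_Omega(x):
    # distance from x to the closed unit ball
    return max(np.linalg.norm(x) - 1.0, 0.0)

def d_Omega_c(x):
    # distance from x to the complement of the open unit ball
    return max(1.0 - np.linalg.norm(x), 0.0)

def b_Omega(x):
    # oriented boundary distance: for the unit ball, equals |x| - 1
    return d_Omega(x) - d_Omega_c(x)
```

    Note that DbΩ(x)=x/|x| away from the origin, so |DbΩ|=1 near ∂Ω, as used repeatedly below.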

    We recall that, since the boundary of Ω is of class C², there exists ρ0>0 such that

    bΩ(·)∈C²b  on  Σρ0={y∈B(x,ρ0): x∈∂Ω}. (2.1)

    Throughout the paper, we suppose that ρ0 is fixed so that (2.1) holds.

    Take a continuous function f:Rn→R and a point x∈Rn. A vector p∈Rn is said to be a proximal subgradient of f at x if there exist ϵ>0 and C≥0 such that

    ⟨p, y−x⟩≤f(y)−f(x)+C|y−x|²  for all y that satisfy |y−x|≤ϵ.

    The set of all proximal subgradients of f at x is called the proximal subdifferential of f at x and is denoted by ∂_p f(x). A vector p∈Rn is said to be a limiting subgradient of f at x if there exist sequences xi∈Rn, pi∈∂_p f(xi) such that xi→x and pi→p (i→∞).

    The set of all limiting subgradients of f at x is called the limiting subdifferential and is denoted by ∂f(x). In particular, for the distance function we have the following result.

    Lemma 2.1. Let Ω be a bounded open subset of Rn with C² boundary. Then, for every x∈Rn it holds that

    ∂_p dΩ(x)=∂dΩ(x)= {DbΩ(x)} if 0<bΩ(x)<ρ0;  DbΩ(x)[0,1] if x∈∂Ω;  {0} if x∈Ω,

    where ρ0 is as in (2.1) and DbΩ(x)[0,1] denotes the set {DbΩ(x)α : α∈[0,1]}.

    The proof is given in the Appendix.

    Let X be a separable metric space. Cb(X) is the space of all bounded continuous functions on X. We denote by B(X) the family of the Borel subsets of X and by P(X) the family of all Borel probability measures on X. The support of η∈P(X), supp(η), is the closed set defined by

    supp(η):={x∈X: η(V)>0 for each neighborhood V of x}.

    We say that a sequence (ηi)⊂P(X) is narrowly convergent to η∈P(X) if

    lim_{i→∞} ∫_X f(x)dηi(x)=∫_X f(x)dη(x)    ∀f∈Cb(X).

    We denote by d1 the Kantorovich-Rubinstein distance on X, which---when X is compact---can be characterized as follows:

    d1(m,m′)=sup{∫_X f(x)dm(x)−∫_X f(x)dm′(x) | f:X→R is 1-Lipschitz}, (2.2)

    for all m,m′∈P(X).
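    On X=R with its usual metric, the right-hand side of (2.2) coincides with the 1-Wasserstein distance, which SciPy computes for empirical measures. The check below is an illustration under that identification (our example, not part of the text): it moves two unit point masses by 0.5 each.

```python
from scipy.stats import wasserstein_distance

# Empirical measures m = (delta_0 + delta_1)/2 and m' = (delta_0.5 + delta_1.5)/2:
# every atom is shifted by 0.5, so the Kantorovich-Rubinstein distance is 0.5.
d = wasserstein_distance([0.0, 1.0], [0.5, 1.5])
```

    The optimal 1-Lipschitz test function here is f(x)=−x, which attains the supremum in (2.2).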

    Let Ω be a bounded open subset of Rn with C² boundary. We write Lip(0,T;P(¯Ω)) for the space of all maps m:[0,T]→P(¯Ω) that are Lipschitz continuous with respect to d1, i.e.,

    d1(m(t),m(s))≤C|t−s|,    ∀t,s∈[0,T], (2.3)

    for some constant C≥0. We denote by Lip(m) the smallest constant that verifies (2.3).

    Let Ω⊂Rn be a bounded open set with C² boundary. Let Γ be the metric subspace of AC(0,T;Rn) defined by

    Γ={γ∈AC(0,T;Rn): γ(t)∈¯Ω,  ∀t∈[0,T]}.

    For any x∈¯Ω, we set

    Γ[x]={γ∈Γ: γ(0)=x}.

    Let U⊂Rn be an open set such that ¯Ω⊂U. Given x∈¯Ω, we consider the constrained minimization problem

    inf_{γ∈Γ[x]} J[γ],    where     J[γ]=∫_0^T f(t,γ(t),˙γ(t))dt+g(γ(T)). (3.1)

    We denote by X[x] the set of solutions of (3.1), that is,

    X[x]={γ∈Γ[x]: J[γ]=inf_{Γ[x]}J}.

    We assume that f:[0,T]×U×Rn→R and g:U→R satisfy the following conditions.

    (g1) g∈C1b(U).

    (f0) f∈C([0,T]×U×Rn) and for all t∈[0,T] the function (x,v)↦f(t,x,v) is differentiable. Moreover, Dxf, Dvf are continuous on [0,T]×U×Rn and there exists a constant M≥0 such that

    |f(t,x,0)|+|Dxf(t,x,0)|+|Dvf(t,x,0)|≤M     ∀(t,x)∈[0,T]×U. (3.2)

    (f1) For all t∈[0,T] the map (x,v)↦Dvf(t,x,v) is continuously differentiable and there exists a constant μ≥1 such that

    I/μ≤D²vvf(t,x,v)≤Iμ, (3.3)
    ||D²vxf(t,x,v)||≤μ(1+|v|), (3.4)

    for all (t,x,v)∈[0,T]×U×Rn, where I denotes the identity matrix.

    (f2) For all (x,v)∈U×Rn the function t↦f(t,x,v) and the map t↦Dvf(t,x,v) are Lipschitz continuous. Moreover, there exists a constant κ≥0 such that

    |f(t,x,v)−f(s,x,v)|≤κ(1+|v|²)|t−s|, (3.5)
    |Dvf(t,x,v)−Dvf(s,x,v)|≤κ(1+|v|)|t−s|, (3.6)

    for all t,s∈[0,T], x∈U, v∈Rn.

    Remark 3.1. By classical results in the calculus of variations (see, e.g., [15, Theorem 11.1i]), there exists at least one minimizer of (3.1) in Γ for any fixed point x∈¯Ω.

    In the next lemma we show that (f0)-(f2) imply useful growth conditions for f and its derivatives.

    Lemma 3.1. Suppose that (f0)-(f2) hold. Then, there exists a positive constant C(μ,M) depending only on μ and M such that

    |Dvf(t,x,v)|≤C(μ,M)(1+|v|), (3.7)
    |Dxf(t,x,v)|≤C(μ,M)(1+|v|²), (3.8)
    (1/4μ)|v|²−C(μ,M)≤f(t,x,v)≤4μ|v|²+C(μ,M), (3.9)

    for all (t,x,v)∈[0,T]×U×Rn.

    Proof. By (3.2) and (3.3) one has that

    |Dvf(t,x,v)|≤|Dvf(t,x,v)−Dvf(t,x,0)|+|Dvf(t,x,0)|≤∫_0^1 ||D²vvf(t,x,τv)|| |v|dτ+|Dvf(t,x,0)|≤μ|v|+M≤C(μ,M)(1+|v|)

    and so (3.7) holds. Furthermore, by (3.2) and (3.4) we have that

    |Dxf(t,x,v)|≤|Dxf(t,x,v)−Dxf(t,x,0)|+|Dxf(t,x,0)|≤∫_0^1 ||D²xvf(t,x,τv)|| |v|dτ+M≤μ(1+|v|)|v|+M≤C(μ,M)(1+|v|²).

    Therefore, (3.8) holds. Moreover, for fixed v∈Rn there exists a point ξ of the segment with endpoints 0, v such that

    f(t,x,v)=f(t,x,0)+⟨Dvf(t,x,0),v⟩+(1/2)⟨D²vvf(t,x,ξ)v,v⟩.

    By (3.2), (3.3), and (3.7) we have that

    −C(μ,M)+(1/4μ)|v|²≤−M−C(μ,M)|v|+(1/2μ)|v|²≤f(t,x,v)≤M+C(μ,M)|v|+(μ/2)|v|²≤C(μ,M)+4μ|v|²,

    and so (3.9) holds. This completes the proof.

    In the next result we show a special property of the minimizers of (3.1).

    Lemma 3.2. For any x∈¯Ω and for any γ∈X[x] we have that

    ∫_0^T (1/4μ)|˙γ(t)|²dt≤K,

    where

    K:=T(C(μ,M)+M)+2max_U|g(x)|. (3.10)

    Proof. Let x∈¯Ω and let γ∈X[x]. By comparing the cost of γ with the cost of the constant trajectory γ(t)≡x, one has that

    ∫_0^T f(t,γ(t),˙γ(t))dt+g(γ(T))≤∫_0^T f(t,x,0)dt+g(x)≤T max_{[0,T]×U}|f(t,x,0)|+max_U|g(x)|. (3.11)

    Using (3.2) and (3.9) in (3.11), one has that

    ∫_0^T (1/4μ)|˙γ(t)|²dt≤K,

    where

    K:=T(C(μ,M)+M)+2max_U|g(x)|.

    We denote by H:[0,T]×U×Rn→R the Hamiltonian

    H(t,x,p)=sup_{v∈Rn}{−⟨p,v⟩−f(t,x,v)},    (t,x,p)∈[0,T]×U×Rn.
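    For illustration, the supremum defining H can be evaluated numerically and compared with a closed form. The sketch below uses the sign convention H(t,x,p)=sup_v{−⟨p,v⟩−f(t,x,v)} adopted here, together with data of our own choosing (not the paper's): for f(t,x,v)=|v|²/2 one gets H(t,x,p)=|p|²/2, attained at v=−p.

```python
import numpy as np
from scipy.optimize import minimize

def hamiltonian(p, f):
    # H(p) = sup_v { -<p,v> - f(v) }, computed by minimizing <p,v> + f(v)
    res = minimize(lambda v: np.dot(p, v) + f(v), np.zeros_like(p))
    return -res.fun

f = lambda v: 0.5 * np.dot(v, v)   # running cost |v|^2/2
p = np.array([3.0, -4.0])
H = hamiltonian(p, f)               # should be close to |p|^2/2 = 12.5
```

    The strong convexity of f in v, assumption (3.3), is what makes this supremum finite and uniquely attained, and is inherited by H as conditions (H0)-(H2) below.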

    Our assumptions on f imply that H satisfies the following conditions.

    (H0) H∈C([0,T]×U×Rn) and for all t∈[0,T] the function (x,p)↦H(t,x,p) is differentiable. Moreover, DxH, DpH are continuous on [0,T]×U×Rn and there exists a constant M′≥0 such that

    |H(t,x,0)|+|DxH(t,x,0)|+|DpH(t,x,0)|≤M′     ∀(t,x)∈[0,T]×U. (3.12)

    (H1) For all t∈[0,T] the map (x,p)↦DpH(t,x,p) is continuously differentiable and

    I/μ≤D²ppH(t,x,p)≤Iμ, (3.13)
    ||D²pxH(t,x,p)||≤C(μ,M)(1+|p|), (3.14)

    for all (t,x,p)∈[0,T]×U×Rn, where μ is the constant given in (f1) and C(μ,M) depends only on μ and M.

    (H2) For all (x,p)∈U×Rn the function t↦H(t,x,p) and the map t↦DpH(t,x,p) are Lipschitz continuous. Moreover,

    |H(t,x,p)−H(s,x,p)|≤κC(μ,M)(1+|p|²)|t−s|, (3.15)
    |DpH(t,x,p)−DpH(s,x,p)|≤κC(μ,M)(1+|p|)|t−s|, (3.16)

    for all t,s∈[0,T], x∈U, p∈Rn, where κ is the constant given in (f2) and C(μ,M) depends only on μ and M.

    Remark 3.2. Arguing as in Lemma 3.1 we deduce that

    |DpH(t,x,p)|≤C(μ,M)(1+|p|), (3.17)
    |DxH(t,x,p)|≤C(μ,M)(1+|p|²), (3.18)
    (1/4μ)|p|²−C(μ,M)≤H(t,x,p)≤4μ|p|²+C(μ,M), (3.19)

    for all (t,x,p)∈[0,T]×U×Rn, where C(μ,M) depends only on μ and M.

    Under the above assumptions on Ω, f and g, our necessary conditions can be stated as follows.

    Theorem 3.1. For any x∈¯Ω and any γ∈X[x] the following holds true.

    (i) γ is of class C1,1([0,T];¯Ω).

    (ii) There exist:

    (a) a Lipschitz continuous arc p:[0,T]→Rn,

    (b) a constant ν∈R such that

    0≤ν≤max{1, 2μ sup_{x∈U}|DpH(T,x,Dg(x))|},

    which satisfy the adjoint system

    ˙γ(t)=−DpH(t,γ(t),p(t))    for all t∈[0,T],
    ˙p(t)=DxH(t,γ(t),p(t))−Λ(t,γ(t),p(t))1_{∂Ω}(γ(t))DbΩ(γ(t))    for a.e. t∈[0,T], (3.20)

    and the transversality condition

    p(T)=Dg(γ(T))+νDbΩ(γ(T))1_{∂Ω}(γ(T)),

    where Λ:[0,T]×Σρ0×Rn→R is a bounded continuous function independent of γ and p.

    Moreover,

    (iii) the following estimate holds:

    ||˙γ||∞≤L,   ∀γ∈X[x], (3.21)

    where L=L(μ,M,M′,κ,T,||Dg||∞,||g||∞).

    The (feedback) function Λ in (3.20) can be computed explicitly, see Remark 3.4 below.

    In this section, we prove Theorem 3.1 in the special case of U=Rn. The proof for a general open set U will be given in the next section.

    The proof is based on [12, Theorem 2.1] where the Maximum Principle under state constraints is obtained for a Mayer problem. The reasoning requires several intermediate steps.

    Fix x∈¯Ω. The key point is to approximate the constrained problem by penalized problems as follows:

    inf_{γ∈AC(0,T;Rn), γ(0)=x} {∫_0^T [f(t,γ(t),˙γ(t))+(1/ϵ)dΩ(γ(t))]dt+(1/δ)dΩ(γ(T))+g(γ(T))}. (3.22)

    Then, we will show that, for ϵ>0 and δ∈(0,1] small enough, the solutions of the penalized problem remain in ¯Ω.

    Observe that the Hamiltonian associated with the penalized problem is given by

    Hϵ(t,x,p)=sup_{v∈Rn}{−⟨p,v⟩−f(t,x,v)−(1/ϵ)dΩ(x)}=H(t,x,p)−(1/ϵ)dΩ(x), (3.23)

    for all (t,x,p)∈[0,T]×Rn×Rn.

    By classical results in the calculus of variations (see, e.g., [15, Section 11.2]), there exists at least one minimizer of (3.22) in AC(0,T;Rn) for any fixed initial point x∈¯Ω. We denote by Xϵ,δ[x] the set of solutions of (3.22).

    Remark 3.3. Arguing as in Lemma 3.2 we have that, for any x∈¯Ω, all γ∈Xϵ,δ[x] satisfy

    ∫_0^T [(1/4μ)|˙γ(t)|²+(1/ϵ)dΩ(γ(t))]dt≤K, (3.24)

    where K is the constant given in (3.10).

    The first step of the proof consists in showing that the solutions of the penalized problem remain in a neighborhood of ¯Ω.

    Lemma 3.3. Let ρ0 be such that (2.1) holds. For any ρ∈(0,ρ0], there exists ϵ(ρ)>0 such that for all ϵ∈(0,ϵ(ρ)] and all δ∈(0,1] we have that

    ∀x∈¯Ω, ∀γ∈Xϵ,δ[x]:    sup_{t∈[0,T]} dΩ(γ(t))≤ρ. (3.25)

    Proof. We argue by contradiction. Assume that, for some ρ>0, there exist sequences {ϵk}, {δk}, {tk}, {xk} and {γk} such that

    ϵk→0, δk>0, tk∈[0,T], xk∈¯Ω, γk∈Xϵk,δk[xk] and dΩ(γk(tk))>ρ,   for all k≥1.

    By Remark 3.3, one has that for all k≥1

    ∫_0^T [(1/4μ)|˙γk(t)|²+(1/ϵk)dΩ(γk(t))]dt≤K,

    where K is the constant given in (3.10). The above inequality implies that γk is 1/2-Hölder continuous with Hölder constant (4μK)^{1/2}. Then, by the Lipschitz continuity of dΩ and the regularity of γk, we have that

    dΩ(γk(tk))−dΩ(γk(s))≤(4μK)^{1/2}|tk−s|^{1/2},  ∀s∈[0,T].

    Since dΩ(γk(tk))>ρ, one has that

    dΩ(γk(s))>ρ−(4μK)^{1/2}|tk−s|^{1/2}.

    Hence, dΩ(γk(s))≥ρ/2 for all s∈J:=[tk−ρ²/(16μK), tk+ρ²/(16μK)]∩[0,T] and all k≥1. So,

    K≥(1/ϵk)∫_0^T dΩ(γk(t))dt≥(1/ϵk)∫_J dΩ(γk(t))dt≥(1/ϵk)·ρ³/(32μK).

    But the above inequality contradicts the fact that ϵk→0. So, (3.25) holds true.

    In the next lemma, we show the necessary conditions for the minimizers of the penalized problem.

    Lemma 3.4. Let ρ∈(0,ρ0] and let ϵ∈(0,ϵ(ρ)], where ϵ(ρ) is given by Lemma 3.3. Fix δ∈(0,1], let x0∈¯Ω, and let γ∈Xϵ,δ[x0]. Then,

    (i) γ is of class C1,1([0,T];Rn);

    (ii) there exist an arc p∈Lip(0,T;Rn), a measurable map λ:[0,T]→[0,1], and a constant β∈[0,1] such that

    ˙γ(t)=−DpH(t,γ(t),p(t))    for all t∈[0,T],
    ˙p(t)=DxH(t,γ(t),p(t))−(λ(t)/ϵ)DbΩ(γ(t))    for a.e. t∈[0,T],
    p(T)=Dg(γ(T))+(β/δ)DbΩ(γ(T)), (3.26)

    where

    λ(t)∈ {0} if γ(t)∈Ω;  {1} if 0<dΩ(γ(t))<ρ;  [0,1] if γ(t)∈∂Ω, (3.27)

    and

    β∈ {0} if γ(T)∈Ω;  {1} if 0<dΩ(γ(T))<ρ;  [0,1] if γ(T)∈∂Ω. (3.28)

    Moreover,

    (iii) the function

    r(t):=H(t,γ(t),p(t))−(1/ϵ)dΩ(γ(t)),   t∈[0,T],

    belongs to AC(0,T;R) and satisfies

    ∫_0^T |˙r(t)|dt≤κ(T+4μK),

    where K is the constant given in (3.10) and κ, μ are the constants in (3.5) and (3.9), respectively;

    (iv) the following estimate holds:

    |p(t)|²≤4μ[(1/ϵ)dΩ(γ(t))+C1/δ²],     ∀t∈[0,T], (3.29)

    where C1=8μ+8μ||Dg||²∞+2C(μ,M)+κ(T+4μK).

    Proof. In order to use the Maximum Principle in the version of [35, Theorem 8.7.1], we rewrite (3.22) as a Mayer problem in a higher dimensional state space. Define X(t)∈Rn×R as

    X(t)=(γ(t),z(t)),

    where z(t)=∫_0^t [f(s,γ(s),˙γ(s))+(1/ϵ)dΩ(γ(s))]ds. Then the state equation becomes

    ˙X(t)=(˙γ(t),˙z(t))=Fϵ(t,X(t),u(t)),    X(0)=(x0,0),

    where

    Fϵ(t,X,u)=(u, Lϵ(t,x,u))

    and Lϵ(t,x,u)=f(t,x,u)+(1/ϵ)dΩ(x) for X=(x,z) and (t,x,z,u)∈[0,T]×Rn×R×Rn. Thus, (3.22) can be written as

    min{Φ(Xu(T)): u∈L¹}, (3.30)

    where Φ(X)=g(x)+(1/δ)dΩ(x)+z for any X=(x,z)∈Rn×R. The associated unmaximized Hamiltonian is given by

    Hϵ(t,X,P,u)=⟨P,Fϵ(t,X,u)⟩,    (t,X,P,u)∈[0,T]×Rn+1×Rn+1×Rn.

    We observe that, as γ(·) is a minimizer for (3.22), X is a minimizer for (3.30). Hence, the hypotheses of [35, Theorem 8.7.1] are satisfied. It follows that there exist P(·)=(p(·),b(·))∈AC(0,T;Rn+1), r(·)∈AC(0,T;R), and λ0≥0 such that

    (ⅰ) (P,λ0)≠(0,0),

    (ⅱ) (˙r(t),−˙P(t))∈co ∂_{t,X}Hϵ(t,X(t),P(t),˙γ(t)), a.e. t∈[0,T],

    (ⅲ) P(T)∈λ0∂Φ(Xu(T)),

    (ⅳ) Hϵ(t,X(t),P(t),˙γ(t))=max_{u∈Rn} Hϵ(t,X(t),P(t),u), a.e. t∈[0,T],

    (ⅴ) Hϵ(t,X(t),P(t),˙γ(t))=r(t), a.e. t∈[0,T],

    where ∂_{t,X}Hϵ and ∂Φ denote the limiting subdifferentials of Hϵ and Φ with respect to (t,X) and X, respectively, while co stands for the closed convex hull. Using the definition of Hϵ we have that

    (p,b,λ0)≠(0,0,0), (3.31)
    (˙r(t),−˙p(t))∈b(t) co ∂_{t,x}Lϵ(t,γ(t),˙γ(t)), (3.32)
    ˙b(t)=0, (3.33)
    p(T)∈λ0 ∂(g+(1/δ)dΩ)(γ(T)), (3.34)
    b(T)=λ0, (3.35)
    r(t)=Hϵ(t,γ(t),p(t)), (3.36)

    where ∂_{t,x}Lϵ and ∂(g+(1/δ)dΩ) stand for the limiting subdifferentials of Lϵ(·,·,u) and of g(·)+(1/δ)dΩ(·). We claim that λ0>0. Indeed, suppose that λ0=0. Then b≡0 by (3.33) and (3.35). Moreover, p(T)=0 by (3.34). It follows from (3.32) that p≡0, which is in contradiction with (3.31). So, λ0>0 and we may rescale p and b so that b(t)=λ0=1 for any t∈[0,T].

    Note that the Weierstrass Condition (ⅳ) becomes

    −⟨p(t),˙γ(t)⟩−f(t,γ(t),˙γ(t))=sup_{u∈Rn}{−⟨p(t),u⟩−f(t,γ(t),u)}. (3.37)

    Therefore,

    ˙γ(t)=−DpH(t,γ(t),p(t)),    a.e. t∈[0,T]. (3.38)

    By Lemma 2.1, by the definition of ρ, and by (3.5) we have that

    ∂_{t,x}Lϵ(t,x,u)⊂ [−κ(1+|u|²),κ(1+|u|²)]×{Dxf(t,x,u)} if x∈Ω;  [−κ(1+|u|²),κ(1+|u|²)]×{Dxf(t,x,u)+(1/ϵ)DbΩ(x)} if 0<bΩ(x)<ρ;  [−κ(1+|u|²),κ(1+|u|²)]×(Dxf(t,x,u)+(1/ϵ)[0,1]DbΩ(x)) if x∈∂Ω.

    Thus, (3.32) implies that there exists λ(t)∈[0,1] as in (3.27) such that

    |˙r(t)|≤κ(1+|˙γ(t)|²),  for a.e. t∈[0,T], (3.39)
    ˙p(t)=−Dxf(t,γ(t),˙γ(t))−(λ(t)/ϵ)DbΩ(γ(t)),  for a.e. t∈[0,T]. (3.40)

    Hence, by (3.39) and by Remark 3.3, we conclude that

    ∫_0^T |˙r(t)|dt≤κ∫_0^T (1+|˙γ(t)|²)dt≤κ(T+4μK). (3.41)

    Moreover, by Lemma 2.1 and by our assumptions on g, one has that

    ∂(g+(1/δ)dΩ)(x)⊂ {Dg(x)} if x∈Ω;  {Dg(x)+(1/δ)DbΩ(x)} if 0<bΩ(x)<ρ;  Dg(x)+(1/δ)[0,1]DbΩ(x) if x∈∂Ω.

    So, by (3.34), there exists β∈[0,1] as in (3.28) such that

    p(T)=Dg(γ(T))+(β/δ)DbΩ(γ(T)). (3.42)

    Finally, by well-known properties of the Legendre transform one has that

    DxH(t,x,p)=−Dxf(t,x,−DpH(t,x,p)).

    So, recalling (3.38), (3.40) can be rewritten as

    ˙p(t)=DxH(t,γ(t),p(t))−(λ(t)/ϵ)DbΩ(γ(t)),    a.e. t∈[0,T].

    We have to prove estimate (3.29). Recalling (3.23) and (3.19), we have that

    Hϵ(t,γ(t),p(t))=H(t,γ(t),p(t))−(1/ϵ)dΩ(γ(t))≥(1/4μ)|p(t)|²−C(μ,M)−(1/ϵ)dΩ(γ(t)).

    So, using (3.41) one has that

    |Hϵ(T,γ(T),p(T))−Hϵ(t,γ(t),p(t))|=|r(T)−r(t)|≤∫_t^T |˙r(s)|ds≤κ(T+4μK).

    Moreover, (3.42) implies that |p(T)|≤1/δ+||Dg||∞. Therefore, using again (3.19), we obtain

    (1/4μ)|p(t)|²−C(μ,M)−(1/ϵ)dΩ(γ(t))≤Hϵ(t,γ(t),p(t))≤Hϵ(T,γ(T),p(T))+κ(T+4μK)≤4μ|p(T)|²+C(μ,M)+κ(T+4μK)≤8μ[1/δ²+||Dg||²∞]+C(μ,M)+κ(T+4μK).

    Hence,

    |p(t)|²≤4μ[(1/ϵ)dΩ(γ(t))+C1/δ²],

    where C1=8μ+8μ||Dg||²∞+2C(μ,M)+κ(T+4μK). This completes the proof of (3.29).

    Finally, by the regularity of H, we have that p∈Lip(0,T;Rn). So, γ∈C1,1([0,T];Rn). Observing that the right-hand side of the equality ˙γ(t)=−DpH(t,γ(t),p(t)) is continuous, we conclude that this equality holds for all t∈[0,T].

    Lemma 3.5. Let ρ∈(0,ρ0] and let ϵ∈(0,ϵ(ρ)], where ϵ(ρ) is given by Lemma 3.3. Fix δ∈(0,1], let x∈¯Ω, and let γ∈Xϵ,δ[x]. If γ(¯t)∉∂Ω for some ¯t∈[0,T], then there exists τ>0 such that γ∈C²((¯t−τ,¯t+τ)∩[0,T];Rn).

    Proof. Let γ∈Xϵ,δ[x] and let ¯t∈[0,T] be such that γ(¯t)∈Ω∪(Rn∖¯Ω). If γ(¯t)∈Rn∖¯Ω, then there exists τ>0 such that γ(t)∈Rn∖¯Ω for all t∈I:=(¯t−τ,¯t+τ)∩[0,T]. By Lemma 3.4, we have that there exists p∈Lip(0,T;Rn) such that

    ˙γ(t)=−DpH(t,γ(t),p(t)),    ˙p(t)=DxH(t,γ(t),p(t))−(1/ϵ)DbΩ(γ(t)),

    for t∈I. Since p(t) is Lipschitz continuous for t∈I and ˙γ(t)=−DpH(t,γ(t),p(t)), γ belongs to C¹(I;Rn). Moreover, by the regularity of H, bΩ, p, and γ, one has that ˙p(t) is continuous for t∈I. Then p∈C¹(I;Rn). Hence, ˙γ∈C¹(I;Rn). So, γ∈C²(I;Rn). Finally, if γ(¯t)∈Ω, the conclusion follows by a similar argument.

    In the next two lemmas, we show that, for ϵ>0 and δ∈(0,1] small enough, any solution γ of problem (3.22) remains in ¯Ω for all t∈[0,T]. For this we first establish that, if δ∈(0,1] is small enough and γ(T)∉¯Ω, then the function t↦bΩ(γ(t)) has nonpositive slope at t=T. Then we prove that the entire trajectory γ remains in ¯Ω provided ϵ is small enough. Hereafter, we set

    ϵ0=ϵ(ρ0),   where ρ0 is such that (2.1) holds and ϵ(·) is given by Lemma 3.3.

    Lemma 3.6. Let

    δ=(2μN)^{−1}, (3.43)

    where

    N=sup_{x∈Rn}|DpH(T,x,Dg(x))|.

    Fix any δ1∈(0,δ] and let x∈¯Ω. Let ϵ∈(0,ϵ0]. If γ∈Xϵ,δ1[x] is such that γ(T)∉¯Ω, then

    ⟨˙γ(T),DbΩ(γ(T))⟩≤0.

    Proof. As γ(T)∉¯Ω, by Lemma 3.4 we have that p(T)=Dg(γ(T))+(1/δ1)DbΩ(γ(T)). Hence,

    ⟨DpH(T,γ(T),p(T)),DbΩ(γ(T))⟩=⟨DpH(T,γ(T),Dg(γ(T))),DbΩ(γ(T))⟩+⟨DpH(T,γ(T),Dg(γ(T))+(1/δ1)DbΩ(γ(T)))−DpH(T,γ(T),Dg(γ(T))),DbΩ(γ(T))⟩.

    Recalling that D²ppH(t,x,p)≥I/μ, one has that

    ⟨DpH(T,γ(T),Dg(γ(T))+(1/δ1)DbΩ(γ(T)))−DpH(T,γ(T),Dg(γ(T))),(1/δ1)DbΩ(γ(T))⟩≥(1/2μ)(1/δ1²)|DbΩ(γ(T))|²=1/(2δ1²μ).

    So,

    ⟨DpH(T,γ(T),p(T)),DbΩ(γ(T))⟩≥1/(2δ1μ)−|DpH(T,γ(T),Dg(γ(T)))|.

    Therefore, we obtain

    ⟨˙γ(T),DbΩ(γ(T))⟩=−⟨DpH(T,γ(T),p(T)),DbΩ(γ(T))⟩≤−1/(2δ1μ)+|DpH(T,γ(T),Dg(γ(T)))|.

    Thus, choosing δ as in (3.43) gives the result.

    Lemma 3.7. Fix δ as in (3.43). Then there exists ϵ1∈(0,ϵ0] such that for any ϵ∈(0,ϵ1]

    ∀x∈¯Ω, γ∈Xϵ,δ[x]    ⟹    γ(t)∈¯Ω   ∀t∈[0,T].

    Proof. We argue by contradiction. Assume that there exist sequences {ϵk}, {tk}, {xk}, {γk} such that

    ϵk→0, tk∈[0,T], xk∈¯Ω, γk∈Xϵk,δ[xk] and γk(tk)∉¯Ω,   for all k≥1. (3.44)

    Then, for each k≥1 one could find an interval with end-points 0≤ak<bk≤T such that

    dΩ(γk(ak))=0,    dΩ(γk(t))>0 ∀t∈(ak,bk),    dΩ(γk(bk))=0  or else  bk=T.

    Let ¯tk∈(ak,bk] be such that

    dΩ(γk(¯tk))=max_{t∈[ak,bk]} dΩ(γk(t)).

    We note that, by Lemma 3.5, γk is of class C² in a neighborhood of ¯tk.

    Step 1

    We claim that

    (d²/dt²)dΩ(γk(t))|_{t=¯tk}≤0. (3.45)

    Indeed, (3.45) is trivial if ¯tk∈(ak,bk). Suppose ¯tk=bk. Since ¯tk is a maximum point of the map t↦dΩ(γk(t)) and γk(¯tk)∉¯Ω, we have that dΩ(γk(¯tk))>0. So, bk=T=¯tk and we get

    (d/dt)dΩ(γk(t))|_{t=¯tk}≥0.

    Moreover, Lemma 3.6 yields

    (d/dt)dΩ(γk(t))|_{t=¯tk}≤0.

    So,

    (d/dt)dΩ(γk(t))|_{t=¯tk}=0,

    and we conclude that (3.45) holds true at ¯tk=T as well.

    Step 2

    Now, we prove that

    1/(μϵk)≤C(μ,M,κ)[1+4μC1/δ²+(4μ/ϵk)dΩ(γk(¯tk))],    ∀k≥1, (3.46)

    where C1=8μ+8μ||Dg||²∞+2C(μ,M)+κ(T+4μK) and the constant C(μ,M,κ) depends only on μ, M and κ. Indeed, since γk is of class C² in a neighborhood of ¯tk, one has that

    ¨γk(¯tk)=−D²ptH(¯tk,γk(¯tk),pk(¯tk))−D²pxH(¯tk,γk(¯tk),pk(¯tk))˙γk(¯tk)−D²ppH(¯tk,γk(¯tk),pk(¯tk))˙pk(¯tk). (3.47)

    Developing the second order derivative of dΩ∘γk, by (3.47) and the expression of the derivatives of γk and pk in Lemma 3.4, one has that

    0≥⟨D²dΩ(γk(¯tk))˙γk(¯tk),˙γk(¯tk)⟩+⟨DdΩ(γk(¯tk)),¨γk(¯tk)⟩
    =⟨D²dΩ(γk(¯tk))DpH(¯tk,γk(¯tk),pk(¯tk)),DpH(¯tk,γk(¯tk),pk(¯tk))⟩−⟨DdΩ(γk(¯tk)),D²ptH(¯tk,γk(¯tk),pk(¯tk))⟩+⟨DdΩ(γk(¯tk)),D²pxH(¯tk,γk(¯tk),pk(¯tk))DpH(¯tk,γk(¯tk),pk(¯tk))⟩−⟨DdΩ(γk(¯tk)),D²ppH(¯tk,γk(¯tk),pk(¯tk))DxH(¯tk,γk(¯tk),pk(¯tk))⟩+(1/ϵk)⟨DdΩ(γk(¯tk)),D²ppH(¯tk,γk(¯tk),pk(¯tk))DdΩ(γk(¯tk))⟩.

    We now use the growth properties of H in (3.14) and (3.16)-(3.18), the lower bound for D²ppH in (3.13), and the regularity of the boundary of Ω to obtain

    1/(μϵk)≤C(μ,M)(1+|pk(¯tk)|)²+κC(μ,M)(1+|pk(¯tk)|)≤C(μ,M,κ)(1+|pk(¯tk)|²),

    where the constant C(μ,M,κ) depends only on μ, M and κ. By our estimate for p in (3.29) we get

    1/(μϵk)≤C(μ,M,κ)[1+4μC1/δ²+(4μ/ϵk)dΩ(γk(¯tk))],   ∀k≥1,

    where C1=8μ+8μ||Dg||²∞+2C(μ,M)+κ(T+4μK).

    Conclusion

    Let ρ=min{ρ0, 1/(32C(μ,M,κ)μ²)}. Owing to Lemma 3.3, for all ϵ∈(0,ϵ(ρ)] we have that

    sup_{t∈[0,T]} dΩ(γ(t))≤ρ,    ∀γ∈Xϵ,δ[x].

    Hence, using (3.46), we deduce that

    1/(2μϵk)≤4C(μ,M,κ)[1+4μC1/δ²].

    Since the above inequality fails for k large enough, we conclude that (3.44) cannot hold true. So, γ(t) belongs to ¯Ω for all t∈[0,T].

    An obvious consequence of Lemma 3.7 is the following:

    Corollary 3.1. Fix δ as in (3.43) and take ϵ=ϵ1, where ϵ1 is defined as in Lemma 3.7. Then an arc γ(·) is a solution of problem (3.22) if and only if it is a solution of (3.1).

    We are now ready to complete the proof of Theorem 3.1.

    Proof of Theorem 3.1. Let x∈¯Ω and γ∈X[x]. By Corollary 3.1, γ is a solution of problem (3.22) with δ as in (3.43) and ϵ=ϵ1 as in Lemma 3.7. Let p(·) be the associated adjoint map such that (γ(·),p(·)) satisfies (3.26). Moreover, let λ(·) and β be defined as in Lemma 3.4. Define ν=β/δ. Then we have 0≤ν≤1/δ and, by (3.26),

    p(T)=Dg(γ(T))+νDbΩ(γ(T)). (3.48)

    By Lemma 3.4, γ∈C1,1([0,T];¯Ω) and

    ˙γ(t)=−DpH(t,γ(t),p(t)),    ∀t∈[0,T]. (3.49)

    Moreover, p(·)∈Lip(0,T;Rn) and by (3.29) one has that

    |p(t)|≤(2/δ)√(μC1),   ∀t∈[0,T],

    where C1=8μ+8μ||Dg||²∞+2C(μ,M)+κ(T+4μK). Hence, p is bounded. By (3.49) and by (3.17), one has that

    ||˙γ||∞=sup_{t∈[0,T]}|DpH(t,γ(t),p(t))|≤C(μ,M)(sup_{t∈[0,T]}|p(t)|+1)≤C(μ,M)((2/δ)√(μC1)+1)=:L,

    where L=L(μ,M,M′,κ,T,||Dg||∞,||g||∞). Thus, (3.21) holds.

    Finally, we want to find an explicit expression for λ(t). For this, we set

    D={t[0,T]:γ(t)Ω}andDρ0={t[0,T]:|bΩ(γ(t))|<ρ0},

    where ρ0 is as in assumption (2.1). Note that ψ(t):=bΩγ is of class C1,1 on the open set Dρ0, with

    ˙ψ(t)=DbΩ(γ(t)),˙γ(t)=DbΩ(γ(t)),DpH(t,γ(t),p(t)).

    Since pLip(0,T;Rn), ˙ψ is absolutely continuous on Dρ0 with

    ¨ψ(t)=D2bΩ(γ(t))˙γ(t),DpH(t,γ(t),p(t))DbΩ(γ(t)),D2ptH(t,γ(t),p(t))DbΩ(γ(t)),D2pxH(t,γ(t),p(t))˙γ(t)DbΩ(γ(t)),D2ppH(t,γ(t),p(t))˙p(t)=D2bΩ(γ(t))DpH(t,γ(t),p(t)),DpH(t,γ(t),p(t))DbΩ(γ(t)),D2ptH(t,γ(t),p(t))+DbΩ(γ(t)),D2pxH(t,γ(t),p(t))DpH(t,γ(t),p(t))DbΩ(γ(t)),D2ppH(t,γ(t),p(t))DxH(t,γ(t),p(t))+λ(t)ϵ DbΩ(γ(t)),D2ppH(t,γ(t),p(t))DbΩ(γ(t)).

Let N_\gamma = \{t\in D\cap(0, T)\ |\ \dot\psi(t)\neq 0\}. Let t\in N_\gamma; then there exists \sigma > 0 such that \gamma(s)\in\Omega for any s\in((t-\sigma, t+\sigma)\setminus\{t\})\cap(0, T). Therefore, N_\gamma consists of isolated points and so it is a discrete set. Hence, \dot\psi(t) = 0 for a.e. t\in D\cap(0, T) and, since \dot\psi is absolutely continuous, \ddot\psi(t) = 0 a.e. in D. Moreover, since D^2_{pp}H(t, x, p) > 0 and |D{b_\Omega}(\gamma(t))| = 1, we have that

\begin{equation*} \langle D{b_\Omega}(\gamma(t)), D^2_{pp}H(t, \gamma(t), p(t))D{b_\Omega}(\gamma(t))\rangle > 0, \ \ \ \ \mbox{a.e.}\ t\in D_{\rho_0}. \end{equation*}

So, for a.e. t\in D, \lambda(t) is given by

\begin{align*} \frac{\lambda(t)}{\epsilon} = \frac{1}{\langle D{b_\Omega}(\gamma(t)), D^2_{pp}H(t, \gamma(t), p(t))D{b_\Omega}(\gamma(t))\rangle}\ \Big[&\langle D{b_\Omega}(\gamma(t)), D^2_{pt}H(t, \gamma(t), p(t))\rangle\\ &-\langle D^2{b_\Omega}(\gamma(t))D_pH(t, \gamma(t), p(t)), D_pH(t, \gamma(t), p(t))\rangle\\ &-\langle D{b_\Omega}(\gamma(t)), D^2_{px}H(t, \gamma(t), p(t))D_pH(t, \gamma(t), p(t))\rangle\\ &+\langle D{b_\Omega}(\gamma(t)), D^2_{pp}H(t, \gamma(t), p(t))D_xH(t, \gamma(t), p(t))\rangle\Big]. \end{align*}

Since \lambda(t) = 0 for all t\in[0, T]\setminus D by (3.27), taking \Lambda(t, \gamma(t), p(t)) = \frac{\lambda(t)}{\epsilon}, we obtain the conclusion.

Remark 3.4. The above proof gives a representation of \Lambda, i.e., for all (t, x, p)\in[0, T]\times\Sigma_{\rho_0}\times\mathbb{R}^n one has that

\begin{align*} \Lambda(t, x, p) = \frac{1}{\theta(t, x, p)}\ \Big[&-\langle D^2{b_\Omega}(x)D_pH(t, x, p), D_pH(t, x, p)\rangle+\langle D{b_\Omega}(x), D^2_{pt}H(t, x, p)\rangle\\ &-\langle D{b_\Omega}(x), D^2_{px}H(t, x, p)D_pH(t, x, p)\rangle+\langle D{b_\Omega}(x), D^2_{pp}H(t, x, p)D_xH(t, x, p)\rangle\Big], \end{align*}

where \theta(t, x, p): = \langle D{b_\Omega}(x), D^2_{pp}H(t, x, p)D{b_\Omega}(x)\rangle. Observe that (3.13) ensures that \theta(t, x, p) > 0 for all t\in[0, T], for all x\in\Sigma_{\rho_0}, and for all p\in\mathbb{R}^n.

We now want to remove the extra assumption U = \mathbb{R}^n. For this purpose, it suffices to show that the data f and g---a priori defined just on U---can be extended to \mathbb{R}^n preserving the conditions in (f0)-(f2) and (g1). So, we proceed to construct such an extension by taking a cut-off function \xi\in C^\infty(\mathbb{R}) such that

\begin{equation} \begin{cases} \xi(x) = 0 & \mbox{if}\ x\in(-\infty, \frac{1}{3}], \\ 0 < \xi(x) < 1 & \mbox{if}\ x\in(\frac{1}{3}, \frac{2}{3}), \\ \xi(x) = 1 & \mbox{if}\ x\in[\frac{2}{3}, +\infty). \end{cases} \end{equation} (3.50)

Lemma 3.8. Let \Omega\subset\mathbb{R}^n be a bounded open set with C^2 boundary. Let U be an open subset of \mathbb{R}^n such that \overline{\Omega}\subset U and set

\begin{equation*} \sigma_0 = {\rm dist}(\overline{\Omega}, \mathbb{R}^n\setminus U) > 0. \end{equation*}

Suppose that f:[0, T]\times U\times\mathbb{R}^n\rightarrow\mathbb{R} and g:U\rightarrow\mathbb{R} satisfy (f0)-(f2) and (g1), respectively. Set \sigma = \sigma_0\wedge\rho_0. Then, the function f admits the extension

\begin{equation*} \tilde f(t, x, v) = \xi\Big(\frac{{b_\Omega}(x)}{\sigma}\Big)\frac{|v|^2}{2}+\Big(1-\xi\Big(\frac{{b_\Omega}(x)}{\sigma}\Big)\Big)f(t, x, v), \ \ \ \ \forall (t, x, v)\in[0, T]\times\mathbb{R}^n\times\mathbb{R}^n, \end{equation*}

that satisfies conditions (f0)-(f2) with U = \mathbb{R}^n. Moreover, g admits the extension

\begin{equation*} \tilde g(x) = \Big(1-\xi\Big(\frac{{b_\Omega}(x)}{\sigma}\Big)\Big)g(x), \ \ \ \ \forall x\in\mathbb{R}^n, \end{equation*}

that satisfies condition (g1) with U = \mathbb{R}^n.

Note that, since \Omega is bounded and U is open, the distance between \overline{\Omega} and \mathbb{R}^n\setminus U is positive.

Proof. By construction we note that \tilde f\in C([0, T]\times\mathbb{R}^n\times\mathbb{R}^n). Moreover, for all t\in[0, T] the function (x, v)\mapsto\tilde f(t, x, v) is differentiable and the map (x, v)\mapsto D_v\tilde f(t, x, v) is continuously differentiable by construction. Furthermore, D_x\tilde f, D_v\tilde f are continuous on [0, T]\times\mathbb{R}^n\times\mathbb{R}^n and \tilde f satisfies (3.2). In order to prove (3.3) for \tilde f, we observe that

\begin{equation*} D_v\tilde f(t, x, v) = \xi\Big(\frac{{b_\Omega}(x)}{\sigma}\Big)v+\Big(1-\xi\Big(\frac{{b_\Omega}(x)}{\sigma}\Big)\Big)D_vf(t, x, v), \end{equation*}

    and

\begin{equation*} D^2_{vv}\tilde f(t, x, v) = \xi\Big(\frac{{b_\Omega}(x)}{\sigma}\Big)I+\Big(1-\xi\Big(\frac{{b_\Omega}(x)}{\sigma}\Big)\Big)D^2_{vv}f(t, x, v). \end{equation*}

Hence, by the definition of \xi and (3.3), we obtain that

\begin{equation*} \Big(1\wedge\frac{1}{\mu}\Big)I\leq D^2_{vv}\tilde f(t, x, v)\leq(1\vee\mu)I, \ \ \ \ \forall (t, x, v)\in[0, T]\times\mathbb{R}^n\times\mathbb{R}^n. \end{equation*}

Since \mu\geq 1, we have that \tilde f verifies the estimate in (3.3).

    Moreover, since

\begin{equation*} D_x(D_v\tilde f(t, x, v)) = \dot\xi\Big(\frac{{b_\Omega}(x)}{\sigma}\Big)\, v\otimes\frac{D{b_\Omega}(x)}{\sigma}+\Big(1-\xi\Big(\frac{{b_\Omega}(x)}{\sigma}\Big)\Big)D^2_{vx}f(t, x, v)-\dot\xi\Big(\frac{{b_\Omega}(x)}{\sigma}\Big)\, D_vf(t, x, v)\otimes\frac{D{b_\Omega}(x)}{\sigma}, \end{equation*}

    and by (3.4) we obtain that

\begin{equation*} ||D^2_{vx}\tilde f(t, x, v)||\leq C(\mu, M)(1+|v|), \ \ \ \ \forall (t, x, v)\in[0, T]\times\mathbb{R}^n\times\mathbb{R}^n. \end{equation*}

For all (x, v)\in\mathbb{R}^n\times\mathbb{R}^n the function t\mapsto\tilde f(t, x, v) and the map t\mapsto D_v\tilde f(t, x, v) are Lipschitz continuous by construction. Moreover, by (3.5) and the definition of \xi one has that

\begin{equation*} |\tilde f(t, x, v)-\tilde f(s, x, v)| = \Big|\Big(1-\xi\Big(\frac{{b_\Omega}(x)}{\sigma}\Big)\Big)[f(t, x, v)-f(s, x, v)]\Big|\leq\kappa(1+|v|^2)|t-s| \end{equation*}

for all t, s\in[0, T], x\in\mathbb{R}^n, v\in\mathbb{R}^n. Now, we have to prove that (3.6) holds for \tilde f. Indeed, using (3.6) we deduce that

\begin{equation*} |D_v\tilde f(t, x, v)-D_v\tilde f(s, x, v)| = \Big|\Big(1-\xi\Big(\frac{{b_\Omega}(x)}{\sigma}\Big)\Big)[D_vf(t, x, v)-D_vf(s, x, v)]\Big|\leq\kappa(1+|v|)|t-s|, \end{equation*}

for all t, s\in[0, T], x\in\mathbb{R}^n, v\in\mathbb{R}^n. Therefore, \tilde f verifies the assumptions (f0)-(f2).

Finally, by the regularity of {b_\Omega}, \xi, and g, we have that \tilde g is of class C^1_b(\mathbb{R}^n). This completes the proof.

Suppose that f:[0, T]\times U\times\mathbb{R}^n\rightarrow\mathbb{R} and g:U\rightarrow\mathbb{R} satisfy the assumptions (f0)-(f2) and (g1), respectively. Let (t, x)\in[0, T]\times\overline{\Omega}. Define u:[0, T]\times\overline{\Omega}\rightarrow\mathbb{R} as the value function of the minimization problem (3.1), i.e.,

\begin{equation} u(t, x) = \inf\limits_{\tiny\begin{array}{c} \gamma\in \Gamma\\ \gamma(t) = x \end{array}}\left\{\int_t^T f(s, \gamma(s), \dot \gamma(s))\, ds + g(\gamma(T))\right\}. \end{equation} (4.1)
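The proof of Proposition 4.1 below repeatedly uses the dynamic programming principle satisfied by u; we recall it here in the form invoked in (4.2) and (4.8), as a standard fact rather than a new result: for every t'\in[t, T],

\begin{equation*} u(t, x) = \inf\limits_{\tiny\begin{array}{c} \gamma\in \Gamma\\ \gamma(t) = x \end{array}}\left\{\int_t^{t'} f(s, \gamma(s), \dot \gamma(s))\, ds + u(t', \gamma(t'))\right\}. \end{equation*}

In particular, any admissible (possibly suboptimal) arc on [t, t'] provides an upper bound for u(t, x), which is how (4.2) is obtained.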

Proposition 4.1. Let \Omega be a bounded open subset of \mathbb{R}^n with C^2 boundary. Suppose that f and g satisfy (f0)-(f2) and (g1), respectively. Then, u is Lipschitz continuous in [0, T]\times\overline{\Omega}.

Proof. First, we shall prove that u(t, \cdot) is Lipschitz continuous on \Omega, uniformly for t\in[0, T]. Since u(T, \cdot) = g, it suffices to consider the case of t\in[0, T). Let x_0\in\Omega and choose 0 < r < 1 such that B_r(x_0)\subset B_{2r}(x_0)\subset B_{4r}(x_0)\subset\Omega. To prove that u(t, \cdot) is Lipschitz continuous in B_r(x_0), take x\neq y in B_r(x_0). Let \gamma be an optimal trajectory for u at (t, x) and let {\bar \gamma} be the trajectory defined by

\begin{equation*} \begin{cases} {\bar \gamma}(t) = y, & \\ \dot{\bar \gamma}(s) = \dot{\gamma}(s)+\frac{x-y}{\tau} & \mbox{if}\ s\in[t, t+\tau]\ \mbox{a.e.}, \\ \dot{\bar \gamma}(s) = \dot{\gamma}(s) & \mbox{otherwise}, \end{cases} \end{equation*}

where \tau = \frac{|x-y|}{2L^\star} < T-t. We claim that

(a) {\bar \gamma}(t+\tau) = \gamma(t+\tau);

(b) {\bar \gamma}(s) = \gamma(s) for any s\in[t+\tau, T];

(c) |{\bar \gamma}(s)-\gamma(s)|\leq|y-x| for any s\in[t, t+\tau];

(d) {\bar \gamma}(s)\in\overline{\Omega} for any s\in[t, T].

Indeed, by the definition of {\bar \gamma} we have that

\begin{equation*} {\bar \gamma}(t+\tau)-{\bar \gamma}(t) = {\bar \gamma}(t+\tau)-y = \int_t^{t+\tau}\Big(\dot{\gamma}(s)+\frac{x-y}{\tau}\Big)\, ds = \gamma(t+\tau)-y, \end{equation*}

and this gives (a). Moreover, by (a) and the definition of {\bar \gamma}, one has that {\bar \gamma}(s) = \gamma(s) for any s\in[t+\tau, T]. Hence, {\bar \gamma} verifies (b). By the definition of {\bar \gamma}, for any s\in[t, t+\tau] we obtain that

\begin{equation*} |{\bar \gamma}(s)-\gamma(s)| = \Big|y-x+\int_t^s(\dot{\bar \gamma}(\sigma)-\dot{\gamma}(\sigma))\, d\sigma\Big| = \Big|y-x+\int_t^s\frac{x-y}{\tau}\, d\sigma\Big|\leq|y-x| \end{equation*}

and so (c) holds. Since \gamma is an optimal trajectory for u and {\bar \gamma}(s) = \gamma(s) for all s\in[t+\tau, T], we only have to prove that {\bar \gamma}(s) belongs to \overline{\Omega} for all s\in[t, t+\tau]. Let s\in[t, t+\tau]; by Theorem 3.1 one has that

\begin{align*} |{\bar \gamma}(s)-x_0|&\leq|{\bar \gamma}(s)-y|+|y-x_0|\leq\Big|\int_t^s\dot{\bar \gamma}(\sigma)\, d\sigma\Big|+r\leq\int_t^s\Big|\dot{\gamma}(\sigma)+\frac{x-y}{\tau}\Big|\, d\sigma+r\\ &\leq\int_t^s\Big[|\dot{\gamma}(\sigma)|+\frac{|x-y|}{\tau}\Big]\, d\sigma+r\leq L^\star(s-t)+\frac{|x-y|}{\tau}(s-t)+r\leq L^\star\tau+|x-y|+r. \end{align*}

    Recalling that \tau = \frac{|x-y|}{2L^\star} one has that

    \begin{equation*} |{\bar \gamma}(s)-x_0|\leq \frac{|x-y|}{2}+|x-y|+r\leq 4r. \end{equation*}

    Therefore, {\bar \gamma}(s)\in B_{4r}(x_0)\subset \overline{\Omega} for all s\in[t, t+\tau].

    Now, owing to the dynamic programming principle, by (a) one has that

    \begin{equation} u(t, y)\leq \int_t^{t+\tau} f(s, {\bar \gamma}(s), \dot{{\bar \gamma}}(s))\, ds + u(t+\tau, \gamma(t+\tau)). \end{equation} (4.2)

    Since \gamma is an optimal trajectory for u at (t, x), we obtain that

    \begin{equation*} u(t, y)\leq u(t, x) +\int_t^{t+\tau} \Big[f(s, {\bar \gamma}(s), \dot{\bar \gamma}(s))-f(s, \gamma(s), \dot{\gamma}(s))\Big] \, ds. \end{equation*}

    By (3.7), (3.8), and the definition of {\bar \gamma}, for s\in [t, t+\tau] we have that

\begin{align*} &|f(s, {\bar \gamma}(s), \dot{{\bar \gamma}}(s))-f(s, \gamma(s), \dot{\gamma}(s))|\\ &\leq|f(s, {\bar \gamma}(s), \dot{{\bar \gamma}}(s))-f(s, {\bar \gamma}(s), \dot{\gamma}(s))|+|f(s, {\bar \gamma}(s), \dot{\gamma}(s))-f(s, \gamma(s), \dot{\gamma}(s))|\\ &\leq \int_0^1 |\langle D_vf(s, {\bar \gamma}(s), \lambda\dot{{\bar \gamma}}(s)+(1-\lambda)\dot{\gamma}(s)), \dot{{\bar \gamma}}(s)-\dot{\gamma}(s)\rangle|\, d\lambda\\ &\ \ \ + \int_0^1|\langle D_xf(s, \lambda{\bar \gamma}(s)+(1-\lambda)\gamma(s), \dot{\gamma}(s)), {\bar \gamma}(s)-\gamma(s)\rangle|\, d\lambda\\ &\leq C(\mu, M)|\dot{{\bar \gamma}}(s)-\dot{\gamma}(s)|\int_0^1 (1+|\lambda\dot{{\bar \gamma}}(s)+(1-\lambda)\dot{\gamma}(s)|)\, d\lambda \\ &\ \ \ + C(\mu, M)|{\bar \gamma}(s)-\gamma(s)|\int_0^1(1+ |\dot{\gamma}(s)|^2)\, d\lambda. \end{align*}

    By Theorem 3.1 one has that

    \int_0^1 (1+|\lambda\dot{{\bar \gamma}}(s)+(1-\lambda)\dot{\gamma}(s)|)\, d\lambda\leq 1+4L^\star, (4.3)
    \int_0^1(1+ |\dot{\gamma}(s)|^2)\, d\lambda\leq 1+(L^\star)^2. (4.4)

    Using (4.3), (4.4), and (c), by the definition of \overline{\gamma} one has that

    \begin{equation}\label{bl1} |f(s, {\bar \gamma}(s), \dot{{\bar \gamma}}(s))-f(s, \gamma(s), \dot{\gamma}(s))|\leq C(\mu, M)(1+4L^\star)\frac{|x-y|}{\tau}+C(\mu, M)(1+(L^\star)^2)|x-y|, \end{equation} (4.5)

    for a.e. s\in[t, t+\tau]. By (4.5), and the choice of \tau we deduce that

    \begin{align*} &u(t, y)\leq u(t, x) + C(\mu, M)(1+4L^\star)\int_t^{t+\tau}\frac{|x-y|}{\tau} \, ds+ C(\mu, M)(1+(L^\star)^2)\int_t^{t+\tau} |x-y|\, ds\\ &\leq u(t, x) + C(\mu, M)(1+4L^\star)\big|x-y\big|+\tau C(\mu, M)(1+(L^\star)^2)\big|x-y\big|\leq u(t, x)+C_{L^\star}|x-y| \end{align*}

    where C_{L^\star} = C(\mu, M)(1+4L^\star)+\frac{1}{2L^\star}C(\mu, M)(1+(L^\star)^2). Thus, u is locally Lipschitz continuous in space and one has that ||Du||_\infty\leq \vartheta, where \vartheta is a constant not depending on \Omega. Owing to the smoothness of \Omega, u is globally Lipschitz continuous in space, uniformly for t\in[0, T].

    In order to prove Lipschitz continuity in time, let x \in \overline\Omega and fix t_1, t_2 \in [0, T] with t_2\geq t_1. Let \gamma be an optimal trajectory for u at (t_1, x). Then,

    \begin{equation}\label{3e} |u(t_2, x)-u(t_1, x)|\leq |u(t_2, x)-u(t_2, \gamma(t_2))|+|u(t_2, \gamma(t_2))-u(t_1, x)|. \end{equation} (4.6)

The first term on the right-hand side of (4.6) can be estimated using the Lipschitz continuity in space of u and Theorem 3.1. Thus, we get

    \begin{equation}\label{4} |u(t_2, x)-u(t_2, \gamma(t_2))|\leq C_{L^\star}|x-\gamma(t_2)| \leq C_{L^\star}\int_{t_1}^{t_2}|\dot{\gamma}(s)|\, ds\leq L^\star C_{L^\star} (t_2-t_1). \end{equation} (4.7)

We only have to estimate the second term on the right-hand side of (4.6). By the dynamic programming principle, (3.9), and the assumptions on f, we deduce that

\begin{align}\label{5} |u(t_2, \gamma(t_2))-u(t_1, x)|& = \Big |\int_{t_1}^{t_2}f(s, \gamma(s), \dot{\gamma}(s))\, ds\Big|\leq \int_{t_1}^{t_2}|f(s, \gamma(s), \dot{\gamma}(s))|\, ds\\ &\leq \int_{t_1}^{t_2} \Big[C(\mu, M)+ 4\mu |\dot{\gamma}(s)|^2\Big]\, ds\leq \Big[C(\mu, M)+4\mu (L^\star)^2\Big] (t_2-t_1).\nonumber \end{align} (4.8)

    Using (4.7) and (4.8) to bound the right-hand side of (4.6), we obtain that u is Lipschitz continuous in time. This completes the proof.

    In this section we want to apply Theorem 3.1 to a mean field game (MFG) problem with state constraints. Such a problem was studied in [11], where the existence and uniqueness of constrained equilibria was obtained under fairly general assumptions on the data. Here, we will apply our necessary conditions to deduce the existence of more regular equilibria than those constructed in [11], assuming the data F and G to be Lipschitz continuous.

    Assumptions

Let \Omega be a bounded open subset of \mathbb{R}^n with C^2 boundary. Let \mathcal{P}(\overline{\Omega}) be the set of all Borel probability measures on \overline\Omega endowed with the Kantorovich-Rubinstein distance d_1 defined in (2.2). Let U be an open subset of \mathbb{R}^n such that \overline{\Omega}\subset U. Assume that F:U\times\mathcal{P}(\overline{\Omega})\rightarrow \mathbb{R} and G:U\times \mathcal{P}(\overline{\Omega})\rightarrow \mathbb{R} satisfy the following hypotheses.

    (D1) For all x\in U, the functions m\mapsto F(x, m) and m\mapsto G(x, m) are Lipschitz continuous, i.e., there exists a constant \kappa\geq 0 such that

    \begin{align} |F(x, m_1)-F(x, m_2)|+ |G(x, m_1)-G(x, m_2)| \leq \kappa d_1(m_1, m_2), \label{lf} \end{align} (4.9)

    for any m_1, m_2 \in\mathcal{P}(\overline{\Omega}).
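As a simple illustration of (4.9) (ours, not part of the original assumptions), take Dirac masses m_i = \delta_{x_i} with x_1, x_2\in\overline{\Omega}. Testing the definition (2.2) of d_1 with 1-Lipschitz functions gives d_1(\delta_{x_1}, \delta_{x_2}) = |x_1-x_2|, so (4.9) reduces to

\begin{equation*} |F(x, \delta_{x_1})-F(x, \delta_{x_2})|+|G(x, \delta_{x_1})-G(x, \delta_{x_2})|\leq\kappa\, |x_1-x_2|, \end{equation*}

i.e., Lipschitz dependence of the data on the position of a single point mass.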

(D2) For all m\in \mathcal{P}(\overline{\Omega}), the functions x\mapsto G(x, m) and x\mapsto F(x, m) belong to C^1_b(U). Moreover,

    \begin{equation*} |D_xF(x, m)|+|D_xG(x, m)|\leq \kappa, \ \ \ \ \forall \ x\in U, \ \forall \ m\in \mathcal{P}(\overline{\Omega}). \end{equation*}

    Let L:U\times\mathbb{R}^n\rightarrow \mathbb{R} be a function that satisfies the following assumptions.

    (L0) L\in C^1(U\times \mathbb{R}^n) and there exists a constant M\geq 0 such that

    \begin{equation}\label{bml} |L(x, 0)|+|D_xL(x, 0)|+|D_vL(x, 0)|\leq M, \ \ \ \ \forall \ x\in U. \end{equation} (4.10)

    (L1) D_vL is differentiable on U\times\mathbb{R}^n and there exists a constant \mu\geq 1 such that

    \frac{I}{\mu} \leq D^2_{vv}L(x, v)\leq I\mu, (4.11)
    ||D_{vx}^2L(x, v)||\leq \mu(1+|v|), (4.12)

    for all (x, v)\in U\times \mathbb{R}^n.

    Remark 4.1. (ⅰ) F, G and L are assumed to be defined on U\times \mathcal{P}(\overline{\Omega}) and on U\times \mathbb{R}^n, respectively, just for simplicity. All the results of this section hold true if we replace U by \overline{\Omega}. This fact can be easily checked by using well-known extension techniques (see, e.g. [1, Theorem 4.26]).

(ⅱ) Arguing as in Lemma 3.1, we deduce that there exists a positive constant C(\mu, M), depending only on \mu and M, such that

    |D_xL(x, v)|\leq C(\mu, M)(1+|v|^2), (4.13)
    |D_vL(x, v)|\leq C(\mu, M)(1+|v|), (4.14)
    \frac{|v|^2}{4\mu}-C(\mu, M) \leq L(x, v)\leq 4\mu|v|^2 +C(\mu, M), (4.15)

    for all (x, v)\in U\times\mathbb{R}^n.

    Let m\in {\rm Lip}(0, T;\mathcal{P}(\overline{\Omega})). If we set f(t, x, v) = L(x, v)+F(x, m(t)), then the associated Hamiltonian H takes the form

    H(t, x, p) = H_L(x, p)-F(x, m(t)), \ \ \ \forall\ (t, x, p)\in[0, T]\times U\times\mathbb{R}^n,

    where

    \begin{equation*} H_L(x, p) = \sup\limits_{v\in\mathbb{R}^n}\Big\{-\langle p, v\rangle-L(x, v)\Big\}, \ \ \ \ \ \forall\ (x, p)\in U\times\mathbb{R}^n. \end{equation*}
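As a quick sanity check (an illustration of ours, not needed for the proofs), consider the model Lagrangian L(x, v) = \frac{|v|^2}{2}, which satisfies (L0)-(L1) with \mu = 1. The supremum defining H_L is attained at v = -p, so that

\begin{equation*} H_L(x, p) = -\langle p, -p\rangle-\frac{|-p|^2}{2} = \frac{|p|^2}{2}, \end{equation*}

and D^2_{pp}H_L = I, in agreement with (4.17).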

    The assumptions on L imply that H_L satisfies the following conditions.

    1. H_L\in C^1(U\times \mathbb{R}^n) and there exists a constant M'\geq 0 such that

    \begin{equation} |H_L(x, 0)|+|D_xH_L(x, 0)|+|D_pH_L(x, 0)|\leq M', \ \ \ \ \forall x\in U. \end{equation} (4.16)

    2. D_pH_L is differentiable on U\times\mathbb{R}^n and satisfies

    \frac{I}{\mu}\leq D_{pp}H_L(x, p)\leq I\mu, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \forall \ (x, p)\in U\times\mathbb{R}^n, (4.17)
    ||D_{px}^2 H_L(x, p)||\leq C(\mu, M')(1+|p|), \ \ \ \forall \ (x, p)\in U\times \mathbb{R}^n, (4.18)

    where \mu is the constant in (L1) and C(\mu, M') depends only on \mu and M'.

    For any t\in [0, T], we denote by e_t:\Gamma\to \overline \Omega the evaluation map defined by

    \begin{equation*} e_t(\gamma) = \gamma(t), \ \ \ \ \forall \gamma\in\Gamma. \end{equation*}

    For any \eta\in\mathcal{P}(\Gamma), we define

    \begin{equation*} m^\eta(t) = e_t\sharp\eta \ \ \ \ \forall t\in [0, T]. \end{equation*}
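For example (an illustration of ours), if \eta = \delta_{\bar\gamma} is the point mass concentrated on a single curve \bar\gamma\in\Gamma, then

\begin{equation*} m^\eta(t) = e_t\sharp\delta_{\bar\gamma} = \delta_{\bar\gamma(t)}, \ \ \ \ \forall t\in[0, T], \end{equation*}

that is, m^\eta(t) follows the deterministic motion of a single agent along \bar\gamma.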

    Remark 4.2. We observe that for any \eta\in\mathcal{P}(\Gamma), the following holds true (see [11] for a proof).

    (ⅰ) m^\eta\in C([0, T];\mathcal{P}(\overline{\Omega})).

    (ⅱ) Let \eta_i, \eta\in\mathcal{P}(\Gamma), i\geq 1, be such that \eta_i is narrowly convergent to \eta. Then m^{\eta_i}(t) is narrowly convergent to m^\eta(t) for all t\in[0, T].

    For any fixed m_0\in\mathcal{P}(\overline{\Omega}), we denote by {\mathcal P}_{m_0}(\Gamma) the set of all Borel probability measures \eta on \Gamma such that e_0\sharp \eta = m_0. For all \eta \in \mathcal{P}_{m_0}(\Gamma), we set

    \begin{equation*} J_\eta [\gamma] = \int_0^T \Big[L(\gamma(t), \dot \gamma(t))+ F(\gamma(t), m^\eta(t))\Big]\ dt + G(\gamma(T), m^\eta(T)), \ \ \ \ \ \forall \gamma\in\Gamma. \end{equation*}

    For all x \in \overline{\Omega} and \eta\in\mathcal{P}_{m_0}(\Gamma), we define

    \begin{equation*} \Gamma^\eta[x] = \Big\{ \gamma\in\Gamma[x]:J_\eta[\gamma] = \min\limits_{\Gamma[x]} J_\eta\Big\}. \end{equation*}

    It is shown in [11] that, for every \eta\in \mathcal{P}_{m_0}(\Gamma), the set \Gamma^\eta[x] is nonempty and \Gamma^\eta[\cdot] has closed graph. We recall the definition of constrained MFG equilibria for m_0, given in [11].

Definition 4.1. Let m_0\in\mathcal{P}(\overline{\Omega}). We say that \eta\in\mathcal{P}_{m_0}(\Gamma) is a constrained MFG equilibrium for m_0 if

    \begin{equation*} supp(\eta)\subseteq \bigcup\limits_{x\in\overline{\Omega}} \Gamma^\eta[x]. \end{equation*}

    Let \Gamma' be a nonempty subset of \Gamma. We denote by \mathcal{P}_{m_0}(\Gamma') the set of all Borel probability measures \eta on \Gamma' such that e_0\sharp\eta = m_0. We now introduce special subfamilies of \mathcal{P}_{m_0}(\Gamma) that play a key role in what follows.

    Definition 4.2. Let \Gamma' be a nonempty subset of \Gamma. We define by \mathcal{P}_{m_0}^{{\rm Lip}}(\Gamma') the set of \eta\in\mathcal{P}_{m_0}(\Gamma') such that m^\eta(t) = e_t\sharp \eta is Lipschitz continuous, i.e.,

    \begin{equation*} \mathcal{P}_{m_0}^{{\rm Lip}}(\Gamma') = \{\eta\in\mathcal{P}_{m_0}(\Gamma'): m^\eta \in {\rm Lip}(0, T;\mathcal{P}(\overline{\Omega}))\}. \end{equation*}

    Remark 4.3. We note that \mathcal{P}^{{\rm Lip}}_{m_0}(\Gamma) is a nonempty convex set. Indeed, let j:\overline{\Omega}\rightarrow \Gamma be the continuous map defined by

    j(x)(t) = x \ \ \ \ \forall t \in[0, T].

    Then,

    \eta : = j\sharp m_0

    is a Borel probability measure on \Gamma and \eta \in\mathcal{P}^{{\rm Lip}}_{m_0}(\Gamma).
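Indeed, spelling out a step left implicit above, e_t\circ j = {\rm id}_{\overline{\Omega}} for every t\in[0, T], hence

\begin{equation*} m^\eta(t) = e_t\sharp(j\sharp m_0) = (e_t\circ j)\sharp m_0 = m_0, \ \ \ \ \forall t\in[0, T], \end{equation*}

so m^\eta is constant in time and trivially belongs to {\rm Lip}(0, T;\mathcal{P}(\overline{\Omega})).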

In order to show that {\cal P}_{{m_0}}^{{\rm{Lip}}}\left( \Gamma \right) is convex, let \{\eta_i\}_{i = 1, 2}\subset \mathcal{P}_{m_0}^{{\rm Lip}}(\Gamma) and let \lambda_1, \lambda_2\geq 0 be such that \lambda_1+\lambda_2 = 1. Since \eta_1 and \eta_2 are Borel probability measures, \eta: = \lambda_1\eta_1+\lambda_2\eta_2 is a Borel probability measure as well. Moreover, for any Borel set B\in \mathscr{B}(\overline{\Omega}) we have that

    \begin{equation*} e_0 \sharp \eta (B) = \eta (e_0^{-1}(B)) = \sum\limits_{i = 1}^{2} \lambda_i \eta_i(e_0^{-1}(B)) = \sum\limits_{i = 1}^{2} \lambda_i e_0 \sharp \eta_i(B) = \sum\limits_{i = 1}^{2} \lambda_i m_0(B) = m_0 (B). \end{equation*}

    So, \eta\in\mathcal{P}_{m_0}(\Gamma). Since m^{\eta_1}, m^{\eta_2}\in {\rm Lip}(0, T;\mathcal{P}(\overline{\Omega})), we have that m^\eta(t) = \lambda_1m^{\eta_1}(t)+\lambda_2m^{\eta_2}(t) belongs to {\rm Lip}(0, T;\mathcal{P}(\overline{\Omega})).

    In the next result, we apply Theorem 3.1 to prove a useful property of minimizers of J_\eta.

    Proposition 4.2. Let \Omega be a bounded open subset of \mathbb{R}^n with C^2 boundary and let m_0\in\mathcal{P}(\overline{\Omega}). Suppose that (L0), (L1), (D1), and (D2) hold true. Let \eta\in\mathcal{P}^{{\rm Lip}}_{m_0}(\Gamma) and fix x\in\overline{\Omega}. Then \Gamma^\eta[x]\subset C^{1, 1}([0, T];\mathbb{R}^n) and

    \begin{equation}\label{l0} ||\dot{\gamma}||_\infty\leq L_0, \ \ \ \forall \gamma\in \Gamma^\eta[x], \end{equation} (4.19)

    where L_0 = L_0(\mu, M', M, \kappa, T, ||G||_\infty, ||DG||_\infty).

Proof. Let \eta\in\mathcal{P}_{m_0}^{{\rm Lip}}(\Gamma), x\in\overline{\Omega} and \gamma\in \Gamma^\eta[x]. Since m^\eta\in {\rm Lip}(0, T;\mathcal{P}(\overline{\Omega})), taking f(t, x, v) = L(x, v)+F(x, m^\eta(t)), one can easily check that all the assumptions of Theorem 3.1 are satisfied by f and G. Therefore, we have that \Gamma^\eta[x]\subset C^{1, 1}([0, T];\mathbb{R}^n) and, in this case, (3.21) becomes

    \begin{equation*} ||\dot{\gamma}||_\infty\leq L_0, \ \ \ \forall \gamma\in \Gamma^\eta[x], \end{equation*}

    where L_0 = L_0(\mu, M', M, \kappa, T, ||G||_\infty, ||DG||_\infty).

    We denote by \Gamma_{L_0} the set of \gamma\in\Gamma such that (4.19) holds, i.e.,

    \begin{equation}\label{tgamma} \Gamma_{L_0} = \{\gamma \in \Gamma:||\dot\gamma||_\infty\leq L_0\}. \end{equation} (4.20)

    Lemma 4.1. Let m_0\in \mathcal{P}(\overline{\Omega}). Then, \mathcal{P}_{m_0}^{{\rm Lip}}(\Gamma_{L_0}) is a nonempty convex compact subset of \mathcal{P}_{m_0}(\Gamma). Moreover, for every \eta\in\mathcal{P}_{m_0}(\Gamma_{L_0}), m^\eta(t): = e_t\sharp \eta is Lipschitz continuous of constant L_0, where L_0 is as in Proposition 4.2.

    Proof. Arguing as in Remark 4.3, we obtain that \mathcal{P}_{m_0}^{{\rm Lip}}(\Gamma_{L_0}) is a nonempty convex set. Moreover, since \Gamma_{L_0} is compactly embedded in \Gamma, one has that \mathcal{P}_{m_0}^{{\rm Lip}}(\Gamma_{L_0}) is compact.

    Let \eta\in\mathcal{P}_{m_0}(\Gamma_{L_0}) and m^\eta(t) = e_t\sharp\eta. For any t_1, t_2\in[0, T], we recall that

    \begin{equation*} d_1(m^\eta(t_2), m^\eta(t_1)) = \sup\Big\{\int_{\overline{\Omega}} \phi(x)(m^\eta(t_2, \, dx)-m^\eta(t_1, \, dx))\ \Big|\ \phi:\overline{\Omega}\rightarrow\mathbb{R}\ \ \mbox{is 1-Lipschitz} \Big\}. \end{equation*}

    Since \phi is 1-Lipschitz continuous, one has that

    \begin{align*} &\int_{\overline\Omega} \phi(x)\, (m^\eta(t_2, dx)-m^\eta(t_1, dx)) = \int_{\Gamma}\Big[ \phi(e_{t_2}(\gamma))-\phi(e_{t_1}(\gamma))\Big] \, d\eta(\gamma)\\ & = \int_{\Gamma} \Big[\phi(\gamma(t_2))-\phi(\gamma(t_1))\Big] \, d\eta(\gamma) \leq \int_{\Gamma} |\gamma(t_2)-\gamma(t_1)|\, d\eta(\gamma). \end{align*}

    Since \eta \in \mathcal{P}_{m_0}(\Gamma_{L_0}), we deduce that

    \begin{align*} \int_{\Gamma} |\gamma(t_2)-\gamma(t_1)|\, d\eta(\gamma)\leq L_0\int_{\Gamma} |t_2-t_1|\, d\eta(\gamma) = L_0|t_2-t_1| \end{align*}

    and so m^\eta(t) is Lipschitz continuous of constant L_0.

    In the next result, we deduce the existence of more regular equilibria than those constructed in [11].

    Theorem 4.1. Let \Omega be a bounded open subset of \mathbb{R}^n with C^2 boundary and m_0\in\mathcal{P}(\overline{\Omega}). Suppose that (L0), (L1), (D1), and (D2) hold true. Then, there exists at least one constrained MFG equilibrium \eta \in{\cal P}_{{m_0}}^{{\rm{Lip}}}\left( \Gamma \right).

    Proof. First of all, we recall that for any \eta\in{\cal P}_{{m_0}}^{{\rm{Lip}}}\left( \Gamma \right), there exists a unique Borel measurable family * of probabilities \{\eta_x\}_{x\in\overline{\Omega}} on \Gamma which disintegrates \eta in the sense that

    *We say that \{\eta_x\}_{x\in \overline{\Omega}} is a Borel family (of probability measures) if x\in \overline{\Omega}\mapsto \eta_x(B)\in \mathbb{R} is Borel for any Borel set B\subset \Gamma.

    \begin{equation}\label{dise} \begin{cases} \eta(d\gamma) = \int_{\overline{\Omega}} \eta_x(d\gamma) m_0(\, dx), \\ supp(\eta_x)\subset \Gamma[x] \ \ m_0-\mbox{a.e.} \ x\in \overline{\Omega} \end{cases} \end{equation} (4.21)

    (see, e.g., [2, Theorem 5.3.1]). Proceeding as in [11], we introduce the set-valued map

    E:\mathcal{P}_{m_0}(\Gamma)\rightrightarrows \mathcal{P}_{m_0}(\Gamma),

    by defining, for any \eta\in \mathcal{P}_{m_0}(\Gamma),

    \begin{equation}\label{ein} E(\eta) = \Big\{ \widehat{\eta}\in\mathcal{P}_{m_0}(\Gamma): supp(\widehat{\eta}_x)\subseteq \Gamma^\eta[x] \ \ m_0-\mbox{a.e.} \ x \in \overline{\Omega}\Big\}. \end{equation} (4.22)

    We recall that, by [11, Lemma 3.6], the map E has closed graph.

    Now, we consider the restriction E_0 of E to {\cal P}_{{m_0}}^{{\rm{Lip}}}\left( \Gamma \right), i.e.,

    E_0:\mathcal{P}_{m_0}^{{\rm Lip}}(\Gamma_{L_0}) \rightrightarrows \mathcal{P}_{m_0}(\Gamma), \ \ \ E_0(\eta) = E(\eta) \ \ \forall \eta \in\mathcal{P}_{m_0}^{{\rm Lip}}(\Gamma_{L_0}).

    We will show that the set-valued map E_0 has a fixed point, i.e., there exists \eta\in \mathcal{P}_{m_0}^{{\rm Lip}}(\Gamma_{L_0}) such that \eta\in E_0(\eta). By [11, Lemma 3.5] we have that for any \eta\in\mathcal{P}_{m_0}^{{\rm Lip}}(\Gamma_{L_0}), E_0(\eta) is a nonempty convex set. Moreover, we have that

    \begin{equation}\label{lin} E_0(\mathcal{P}_{m_0}^{{\rm Lip}}(\Gamma_{L_0}))\subseteq \mathcal{P}^{{\rm Lip}}_{m_0}(\Gamma_{L_0}). \end{equation} (4.23)

    Indeed, let \eta\in \mathcal{P}_{m_0}^{{\rm Lip}}(\Gamma_{L_0}) and \hat{\eta}\in E_0(\eta). Since, by Proposition 4.2 one has that

    \Gamma^\eta[x]\subset \Gamma_{L_0} \ \ \ \forall x \in \overline{\Omega},

    and by definition of E_0 we deduce that

    supp(\widehat{\eta})\subset \Gamma_{L_0}.

    So, \widehat{\eta}\in\mathcal{P}_{m_0}(\Gamma_{L_0}). By Lemma 4.1, \widehat{\eta}\in \mathcal{P}_{m_0}^{{\rm Lip}}(\Gamma_{L_0}).

    Since E has closed graph, by Lemma 4.1 and (4.23) we have that E_0 has closed graph as well. Then, the assumptions of Kakutani's Theorem [30] are satisfied and so, there exists \overline \eta\in \mathcal{P}^{{\rm Lip}}_{m_0}(\Gamma_{L_0}) such that \overline \eta\in E_0(\overline \eta).

    We recall the definition of a mild solution of the constrained MFG problem, given in [11].

    Definition 4.3. We say that (u, m)\in C([0, T]\times \overline{\Omega})\times C([0, T];\mathcal{P}(\overline{\Omega})) is a mild solution of the constrained MFG problem in \overline{\Omega} if there exists a constrained MFG equilibrium \eta\in\mathcal{P}_{m_0}(\Gamma) such that

    (i) m(t) = e_t\sharp \eta for all t\in[0, T];

    (ii) u is given by

    \begin{equation}\label{v} u(t, x) = \inf\limits_{\tiny\begin{array}{c} \gamma\in \Gamma\\ \gamma(t) = x \end{array}} \left\{\int_t^T \left[L(\gamma(s), \dot \gamma(s))+ F(\gamma(s), m(s))\right]\ ds + G(\gamma(T), m(T))\right\}, \end{equation} (4.24)

    for (t, x)\in [0, T]\times \overline{\Omega}.

    Theorem 4.2. Let \Omega be a bounded open subset of \mathbb{R}^n with C^2 boundary. Suppose that (L0), (L1), (D1) and (D2) hold true. There exists at least one mild solution (u, m) of the constrained MFG problem in \overline{\Omega}. Moreover,

    (i) u is Lipschitz continuous in [0, T]\times\overline{\Omega};

    (ii) m\in{\rm Lip}(0, T;\mathcal{P}(\overline{\Omega})) and {\rm Lip}(m) = L_0, where L_0 is the constant in (4.19).

    The question of the Lipschitz continuity up to the boundary of the value function under state constraints was addressed in [28] and [34], for stationary problems, and in a very large literature that has been published since. We refer to the survey paper [20] for references.

Proof. Let m_0\in\mathcal{P}(\overline{\Omega}). By Theorem 4.1, there exists a constrained MFG equilibrium \eta\in \mathcal{P}_{m_0}^{\rm Lip}(\Gamma) for m_0, and hence at least one mild solution (u, m) of the constrained MFG problem in \overline{\Omega}. Moreover, by Theorem 4.1 one has that m\in{\rm Lip}(0, T;\mathcal{P}(\overline{\Omega})) and {\rm Lip}(m) = L_0, where L_0 is the constant in (4.19). Finally, by Proposition 4.1 we conclude that u is Lipschitz continuous in [0, T]\times \overline{\Omega}.

    Remark 4.4. Recall that F:U\times \mathcal{P}(\overline{\Omega})\rightarrow \mathbb{R} is strictly monotone if

    \int_{\overline{\Omega}} (F(x, m_1)-F(x, m_2))d(m_1-m_2)(x)\ \geq\ 0, (4.25)

    for any m_1, m_2\in {\mathcal P}(\overline \Omega), and \int_{\overline{\Omega}} (F(x, m_1)-F(x, m_2))d(m_1-m_2)(x) = 0 if and only if F(x, m_1) = F(x, m_2) for all x\in \overline{\Omega}.

    Suppose that F and G satisfy (4.25). Let \eta_1, \eta_2\in \mathcal{P}_{m_0}^{\rm Lip}(\Gamma) be constrained MFG equilibria and let J_{\eta_1} and J_{\eta_2} be the associated functionals, respectively. Then J_{\eta_1} is equal to J_{\eta_2}. Consequently, if (u_1, m_1), (u_2, m_2) are mild solutions of the constrained MFG problem in \overline{\Omega}, then u_1 = u_2 (see [11] for a proof).

    In this Appendix we prove Lemma 2.1. The only case which needs to be analyzed is when x\in\partial\Omega. We recall that p\in \partial^p d_\Omega( x) if and only if there exists \epsilon>0 such that

d_\Omega( y)-d_\Omega( x) -\langle p, y- x\rangle \geq -C| y- x|^2, \ \ \text{for any} \ y \ \text{such that}\ | y- x|\leq \epsilon, (5.1)

    for some constant C\geq 0. Let us show that \partial^p d_\Omega( x) = D{b_\Omega}( x)[0, 1]. By the regularity of {b_\Omega}, one has that

\begin{equation*} d_\Omega( y)-d_\Omega( x)-\langle D{b_\Omega}( x), y- x\rangle\geq {b_\Omega}( y)-{b_\Omega}( x)-\langle D{b_\Omega}( x), y- x\rangle \geq -C | y- x|^2. \end{equation*}

    This shows that D{b_\Omega}( x)\in \partial^p d_\Omega( x). Moreover, since

    \begin{equation*} d_\Omega( y)-d_\Omega( x)-\langle \lambda D {b_\Omega}( x), y- x\rangle\geq \lambda\left( d_\Omega( y)-d_\Omega( x)-\langle D {b_\Omega}( x), y- x\rangle\right) \ \ \ \forall \lambda \in[0, 1], \end{equation*}

    we further obtain the inclusion

\begin{equation*} D{b_\Omega}( x)[0, 1]\subset\partial^p d_\Omega( x). \end{equation*}

    Next, in order to show the reverse inclusion, let p\in\partial^p d_\Omega( x)\setminus\{0\} and let y\in\Omega^c. Then, we can rewrite (5.1) as

{b_\Omega}( y)-{b_\Omega}( x) -\langle p, y- x\rangle \geq -C| y- x|^2, \ \ \ | y- x|\leq \epsilon. (5.2)

    Since y\in \Omega^c, by the regularity of {b_\Omega} one has that

    \begin{equation}\label{p2} {b_\Omega}( y)-{b_\Omega}( x)\leq\langle D{b_\Omega}( x), y- x\rangle +C| y- x|^2 \end{equation} (5.3)

    for some constant C\in\mathbb{R}. By (5.2) and (5.3) one has that

\begin{equation*} \left\langle D{b_\Omega}( x)-p, \frac{ y- x}{| y- x|}\right\rangle\geq -C| y- x|. \end{equation*}

    Hence, passing to the limit for y\rightarrow x, we have that

    \begin{equation*} \langle D{b_\Omega}( x)-p, v\rangle \geq 0, \ \ \ \ \forall v\in T_{\Omega^c}( x), \end{equation*}

    where T_{\Omega^c}( x) is the contingent cone to \Omega^c at x (see e.g. [35] for a definition). Therefore, by the regularity of \partial\Omega,

    D{b_\Omega}( x)-p = \lambda v( x),

    where \lambda\geq 0 and v( x) is the exterior unit normal vector to \partial\Omega in x. Since v( x) = D{b_\Omega}( x), we have that

    p = (1-\lambda) D{b_\Omega}( x).

    Now, we prove that \lambda \leq 1. Suppose that y\in \Omega, then, by (5.1) one has that

\begin{equation*} 0 = d_\Omega( y)\geq (1-\lambda)\langle D {b_\Omega}( x), y- x\rangle - C| y- x|^2. \end{equation*}

    Hence,

\begin{equation*} (1-\lambda)\left\langle D{b_\Omega}( x), \frac{ y- x}{| y- x|}\right\rangle\leq C| y- x|. \end{equation*}

    Passing to the limit for y\rightarrow x, we obtain

    \begin{equation*} (1-\lambda)\left\langle D {b_\Omega}( x), w\right\rangle \leq 0, \ \ \ \ \ \forall w\in T_{\overline{\Omega}}( x), \end{equation*}

where T_{\overline{\Omega}}( x) is the contingent cone to \overline{\Omega} at x. If \lambda >1, then \langle D {b_\Omega}( x), w \rangle \geq 0 for all w\in T_{\overline{\Omega}}( x), but this is impossible since D{b_\Omega}( x) is the exterior unit normal vector to \partial\Omega at x. Hence \lambda\leq 1. Using the regularity of {b_\Omega}, simple limit-taking procedures permit us to prove that \partial d_\Omega( x) = D{b_\Omega}( x)[0, 1] when x\in\partial \Omega. This completes the proof of Lemma 2.1.

This work was partly supported by the University of Rome Tor Vergata (Consolidate the Foundations 2015) and by the Istituto Nazionale di Alta Matematica "F. Severi" (GNAMPA 2016 Research Projects). The authors acknowledge the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata, CUP E83C18000100006. The second author is grateful to the Università Italo Francese (Vinci Project 2015).

    The authors declare no conflict of interest.



    [1] Adams RA (1975) Sobolev Spaces. Academic Press, New York.
[2] Ambrosio L, Gigli N, Savaré G (2008) Gradient flows in metric spaces and in the space of probability measures. Lectures in Mathematics ETH Zürich, Birkhäuser Verlag.
[3] Arutyunov AV, Aseev SM (1997) Investigation of the degeneracy phenomenon of the maximum principle for optimal control problems with state constraints. SIAM J Control Optim 35: 930–952. doi: 10.1137/S036301299426996X
[4] Benamou JD, Brenier Y (2000) A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. Numer Math 84: 375–393. doi: 10.1007/s002110050002
    [5] Benamou JD, Carlier G (2015) Augmented Lagrangian Methods for Transport Optimization, Mean Field Games and Degenerate Elliptic Equations. J Optimiz Theory App 167: 1–26. doi: 10.1007/s10957-015-0725-9
    [6] Benamou JD, Carlier G, Santambrogio F (2017) Variational Mean Field Games, In: Bellomo N, Degond P, Tadmor E (eds) Active Particles, Modeling and Simulation in Science, Engineering and Technology, Birkhäuser, 1: 141–171.
    [7] Brenier Y (1999) Minimal geodesics on groups of volume-preserving maps and generalized solutions of the Euler equations. Comm Pure Appl Math 52: 411–452. doi: 10.1002/(SICI)1097-0312(199904)52:4<411::AID-CPA1>3.0.CO;2-3
[8] Bettiol P, Frankowska H (2007) Normality of the maximum principle for nonconvex constrained Bolza problems. J Differ Equations 243: 256–269. doi: 10.1016/j.jde.2007.05.005
    [9] Bettiol P, Frankowska H (2008) Hölder continuity of adjoint states and optimal controls for state constrained problems. Appl Math Opt 57: 125–147. doi: 10.1007/s00245-007-9015-8
[10] Bettiol P, Khalil N, Vinter RB (2016) Normality of generalized Euler-Lagrange conditions for state constrained optimal control problems. J Convex Anal 23: 291–311.
    [11] Cannarsa P, Capuani R (2017) Existence and uniqueness for Mean Field Games with state constraints. Available from: http://arxiv.org/abs/1711.01063.
    [12] Cannarsa P, Castelpietra M, Cardaliaguet P (2008) Regularity properties of attainable sets under state constraints. Series on Advances in Mathematics for Applied Sciences 76: 120–135. doi: 10.1142/9789812776075_0006
    [13] Cardaliaguet P (2015) Weak solutions for first order mean field games with local coupling. Analysis and geometry in control theory and its applications 11: 111–158.
    [14] Cardaliaguet P, Mészáros AR, Santambrogio F (2016) First order mean field games with density constraints: pressure equals price. SIAM J Control Optim 54: 2672–2709. doi: 10.1137/15M1029849
    [15] Cesari L (1983) Optimization–Theory and Applications: Problems with Ordinary Differential Equations, Vol 17, Springer-Verlag, New York.
    [16] Clarke FH (1983) Optimization and Nonsmooth Analysis, John Wiley & Sons, New York.
    [17] Dubovitskii AY, Milyutin AA (1964) Extremum problems with certain constraints. Dokl Akad Nauk SSSR 149: 759–762.
    [18] Frankowska H (2006) Regularity of minimizers and of adjoint states in optimal control under state constraints. J Convex Anal 13: 299.
    [19] Frankowska H (2009) Normality of the maximum principle for absolutely continuous solutions to Bolza problems under state constraints. Control Cybern 38: 1327–1340.
    [20] Frankowska H (2010) Optimal control under state constraints. Proceedings of the International Congress of Mathematicians 2010 (ICM 2010), 2915–2942.
    [21] Galbraith GN, Vinter RB (2003) Lipschitz continuity of optimal controls for state constrained problems. SIAM J Control Optim 42: 1727–1744. doi: 10.1137/S0363012902404711
    [22] Hager WW (1979) Lipschitz continuity for constrained processes. SIAM J Control Optim 17: 321–338.
    [23] Huang M, Caines PE, Malhamé RP (2007) Large-population cost-coupled LQG problems with nonuniform agents: Individual-mass behavior and decentralized ε-Nash equilibria. IEEE T Automat Contr 52: 1560–1571. doi: 10.1109/TAC.2007.904450
    [24] Huang M, Malhamé RP, Caines PE (2006) Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. Communications in Information and Systems 6: 221–252. doi: 10.4310/CIS.2006.v6.n3.a5
    [25] Lasry JM, Lions PL (2006) Jeux à champ moyen. I – Le cas stationnaire. CR Math 343: 619–625.
    [26] Lasry JM, Lions PL (2006) Jeux à champ moyen. II – Horizon fini et contrôle optimal. CR Math 343: 679–684.
    [27] Lasry JM, Lions PL (2007) Mean field games. Jpn J Math 2: 229–260. doi: 10.1007/s11537-007-0657-8
    [28] Lions PL (1985) Optimal control and viscosity solutions. Recent mathematical methods in dynamic programming, Springer, Berlin, Heidelberg, 94–112.
    [29] Loewen P, Rockafellar RT (1991) The adjoint arc in nonsmooth optimization. T Am Math Soc 325: 39–72. doi: 10.1090/S0002-9947-1991-1036004-7
    [30] Kakutani S (1941) A generalization of Brouwer's fixed point theorem. Duke Math J 8: 457–459. doi: 10.1215/S0012-7094-41-00838-4
    [31] Malanowski K (1978) On regularity of solutions to optimal control problems for systems with control appearing linearly. Archiwum Automatyki i Telemechaniki 23: 227–242.
    [32] Milyutin AA (2000) On a certain family of optimal control problems with phase constraint. Journal of Mathematical Sciences 100: 2564–2571. doi: 10.1007/BF02673842
    [33] Rampazzo F, Vinter RB (2000) Degenerate optimal control problems with state constraints. SIAM J Control Optim 39: 989–1007. doi: 10.1137/S0363012998340223
    [34] Soner HM (1986) Optimal control with state-space constraint I. SIAM J Control Optim 24: 552–561. doi: 10.1137/0324032
    [35] Vinter RB (2000) Optimal Control. Birkhäuser Boston, Basel, Berlin.
  • This article has been cited by:

    1. Yves Achdou, Paola Mannucci, Claudio Marchi, Nicoletta Tchou, Deterministic mean field games with control on the acceleration, 2020, 27, 1021-9722, 10.1007/s00030-020-00634-y
    2. Philip Jameson Graber, Charafeddine Mouzouni, On Mean Field Games models for exhaustible commodities trade, 2020, 26, 1292-8119, 11, 10.1051/cocv/2019008
    3. Pierre Cardaliaguet, Alessio Porretta, 2020, Chapter 1, 978-3-030-59836-5, 1, 10.1007/978-3-030-59837-2_1
    4. Siting Liu, Matthew Jacobs, Wuchen Li, Levon Nurbekyan, Stanley J. Osher, Computational Methods for First-Order Nonlocal Mean Field Games with Applications, 2021, 59, 0036-1429, 2639, 10.1137/20M1334668
    5. Piermarco Cannarsa, Rossana Capuani, Pierre Cardaliaguet, Mean field games with state constraints: from mild to pointwise solutions of the PDE system, 2021, 60, 0944-2669, 10.1007/s00526-021-01936-4
    6. J. Frédéric Bonnans, Justina Gianatti, Laurent Pfeiffer, A Lagrangian Approach for Aggregative Mean Field Games of Controls with Mixed and Final Constraints, 2023, 61, 0363-0129, 105, 10.1137/21M1407720
    7. Rossana Capuani, Antonio Marigonda, Marta Mogentale, 2022, Chapter 34, 978-3-030-97548-7, 297, 10.1007/978-3-030-97549-4_34
    8. Piermarco Cannarsa, Wei Cheng, Cristian Mendico, Kaizhi Wang, Weak KAM Approach to First-Order Mean Field Games with State Constraints, 2021, 1040-7294, 10.1007/s10884-021-10071-9
    9. Saeed Sadeghi Arjmand, Guilherme Mazanti, Multipopulation Minimal-Time Mean Field Games, 2022, 60, 0363-0129, 1942, 10.1137/21M1407306
    10. Saeed Sadeghi Arjmand, Guilherme Mazanti, Nonsmooth mean field games with state constraints, 2022, 28, 1292-8119, 74, 10.1051/cocv/2022069
    11. Rossana Capuani, Antonio Marigonda, Constrained Mean Field Games Equilibria as Fixed Point of Random Lifting of Set-Valued Maps, 2022, 55, 24058963, 180, 10.1016/j.ifacol.2022.11.049
    12. Saeed Sadeghi Arjmand, Guilherme Mazanti, 2021, On the characterization of equilibria of nonsmooth minimal-time mean field games with state constraints, 978-1-6654-3659-5, 5300, 10.1109/CDC45484.2021.9683104
    13. Samuel Daudin, Optimal control of the Fokker-Planck equation under state constraints in the Wasserstein space, 2023, 00217824, 10.1016/j.matpur.2023.05.002
    14. Rossana Capuani, Antonio Marigonda, Michele Ricciardi, Random Lift of Set Valued Maps and Applications to Multiagent Dynamics, 2023, 31, 1877-0533, 10.1007/s11228-023-00693-0
    15. Michael Hintermüller, Thomas M. Surowiec, Mike Theiß, On a Differential Generalized Nash Equilibrium Problem with Mean Field Interaction, 2024, 34, 1052-6234, 2821, 10.1137/22M1489952
    16. Yves Achdou, Paola Mannucci, Claudio Marchi, Nicoletta Tchou, Deterministic Mean Field Games on Networks: A Lagrangian Approach, 2024, 56, 0036-1410, 6689, 10.1137/23M1615073
    17. Guilherme Mazanti, A note on existence and asymptotic behavior of Lagrangian equilibria for first-order optimal-exit mean field games, 2024, 0, 2156-8472, 0, 10.3934/mcrf.2024064
    18. P. Jameson Graber, Remarks on potential mean field games, 2025, 12, 2522-0144, 10.1007/s40687-024-00494-3
    19. Michele Ricciardi, Mauro Rosestolato, Mean field games incorporating carryover effects: optimizing advertising models, 2024, 1593-8883, 10.1007/s10203-024-00500-x
  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
