1. Introduction and main results
An important research activity on mean field games (MFGs for short) has recently developed, initiated by the pioneering works [28,29,30] of Lasry and Lions (related ideas were developed independently in the engineering literature by Huang-Caines-Malhamé, see for example [23,24,25]): it aims at studying the asymptotic behavior of stochastic differential games (Nash equilibria) as the number $ N $ of agents tends to infinity. In these models, it is assumed that the agents are all identical and that an individual agent can hardly influence the outcome of the game. Moreover, each individual strategy is influenced by some averages of functions of the states of the other agents. In the limit when $ N\to +\infty $, a given agent feels the presence of the others through the statistical distribution of the states. Since perturbations of the strategy of a single agent do not influence this distribution, the latter acts as a parameter in the control problem to be solved by each agent. The delicate question of the passage to the limit is one of the main topics of the book of Carmona and Delarue [10]. When the dynamics of the agents are independent stochastic processes, MFGs naturally lead to a coupled system of two partial differential equations (PDEs for short): a forward-in-time Kolmogorov or Fokker-Planck (FP) equation and a backward Hamilton-Jacobi-Bellman (HJB) equation. The unknown of this system is a pair of functions: the value function of the stochastic optimal control problem solved by a representative agent, and the density of the distribution of states. In the infinite horizon limit, one obtains a system of two stationary PDEs.
A very nice introduction to the theory of MFGs is supplied in the notes of Cardaliaguet [9]. Theoretical results on the existence of classical solutions to the previously mentioned system of PDEs can be found in [19,20,21,28,29,30]. Weak solutions have been studied in [5,30,33,34]. The numerical approximation of these systems of PDEs has been discussed in [1,3,5].
A network (or a graph) is a set of items, referred to as vertices (or nodes or crosspoints), with connections between them referred to as edges. In recent years, there has been an increasing interest in the investigation of dynamical systems and differential equations on networks, in particular in connection with problems of data transmission and traffic management (see for example [12,14,17]). The literature on optimal control in which the state variable takes its values on a network is recent: deterministic control problems and related Hamilton-Jacobi equations were studied in [2,4,26,27,31,32]. Stochastic processes on networks and related Kirchhoff conditions at the vertices were studied in [15,16].
The present work is devoted to infinite horizon stochastic mean field games taking place on networks. The most important difficulty is to deal with the transition conditions at the vertices; the latter are obtained from the theory of stochastic control in [15,16], see Section 1.3 below. In [7], the first article on MFGs on networks, Camilli and Marchi consider a particular type of Kirchhoff condition at the vertices for the value function. This condition comes from an assumption which can be informally stated as follows: consider a vertex $ \nu $ of the network and assume that it is the intersection of $ p $ edges $ \Gamma_{1},\dots, \Gamma_{p} $; if, at time $ \tau $, the controlled stochastic process $ X_t $ associated to a given agent hits $ \nu $, then the probability that $ X_{\tau^+} $ belongs to $ \Gamma_i $ is proportional to the diffusion coefficient in $ \Gamma_i $. Under this assumption, it can be seen that the density of the distribution of states is continuous at the vertices of the network. In the present work, this assumption is no longer made. Therefore, it will be seen below that the value function satisfies more general Kirchhoff conditions, and accordingly, that the density of the distribution of states is no longer continuous at the vertices; the continuity condition is then replaced by suitable compatibility conditions on the jumps across the vertices. Moreover, as explained in Remark 11 below, more general assumptions on the coupling costs will be made. Mean field games on networks with finite horizon will be considered in a forthcoming paper.
After obtaining the transmission conditions at the vertices for both the value function and the density, we shall prove existence and uniqueness of weak solutions of the uncoupled HJB and FP equations (in suitable Sobolev spaces). We have chosen to work with weak solutions because it is a convenient way to deal with existence and uniqueness in the stationary regime, but also because it is difficult to avoid it in the nonstationary case, see the forthcoming work on finite horizon MFGs. Classical arguments will then lead to the regularity of the solutions. Next, we shall establish the existence result for the MFG system by a fixed point argument and a truncation technique. Uniqueness will also be proved under suitable assumptions.
The present work is organized as follows: the remainder of Section 1 is devoted to setting the problem and obtaining the system of differential equations and the transmission conditions at the vertices. Section 2 contains useful results, first about some linear boundary value problems with elliptic equations, then on a pair of linear Kolmogorov and Fokker-Planck equations in duality. By and large, the existence of weak solutions is obtained by applying the Banach-Necas-Babuška theorem in a special pair of Sobolev spaces referred to as $ V $ and $ W $ below, together with Fredholm's alternative, while uniqueness comes from a maximum principle. Section 3 is devoted to the HJB equation associated with an ergodic problem. Finally, the proofs of the main results of existence and uniqueness for the MFG system of differential equations are completed in Section 4.
1.1. Networks and function spaces
1.1.1. The geometry
A bounded network $ \Gamma $ (or a bounded connected graph) is a connected subset of $ \mathbb R ^n $ made of a finite number of bounded non-intersecting straight segments, referred to as edges, which connect nodes referred to as vertices. The finite collection of vertices and the finite set of closed edges are respectively denoted by $ \mathcal{V}: = \left\{ \nu_{i}, i\in I\right\} $ and $ \mathcal{E}: = \left\{ \Gamma_{\alpha}, \alpha\in\mathcal{A}\right\} $, where $ I $ and $ \mathcal{A} $ are finite sets of indices contained in $ \mathbb N $. We assume that for $ \alpha,\beta \in \mathcal A $, if $ \alpha\not = \beta $, then $ \Gamma_\alpha\cap \Gamma_\beta $ is either empty or made of a single vertex. The length of $ \Gamma_{\alpha} $ is denoted by $ \ell_{\alpha} $. Given $ \nu_{i}\in\mathcal{V} $, the set of indices of edges that are adjacent to the vertex $ \nu_{i} $ is denoted by $ \mathcal{A}_{i} = \left\{ \alpha\in\mathcal{A}:\nu_{i}\in\Gamma_{\alpha}\right\} $. A vertex $ \nu_{i} $ is named a boundary vertex if $ \sharp\left(\mathcal{A}_{i}\right) = 1 $, otherwise it is named a transition vertex. The set containing all the boundary vertices is named the boundary of the network and is denoted by $ \partial\Gamma $ hereafter.
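The combinatorial data above (edges with their endpoints and lengths, the adjacency sets $ \mathcal{A}_i $, boundary versus transition vertices) can be encoded in a few lines. The following Python sketch is only an illustration of the definitions; the class and field names are ours, not notation from the text.

```python
from collections import defaultdict

class Network:
    """Minimal container for a network Gamma as in Section 1.1.

    edges: dict alpha -> (i, j, length), the edge Gamma_alpha joining the
    vertices nu_i and nu_j (i < j) with length ell_alpha.
    """

    def __init__(self, edges):
        self.edges = edges
        # adjacency[i] is the set A_i of indices of edges adjacent to nu_i
        self.adjacency = defaultdict(set)
        for alpha, (i, j, _) in edges.items():
            self.adjacency[i].add(alpha)
            self.adjacency[j].add(alpha)

    def boundary_vertices(self):
        # nu_i is a boundary vertex iff #(A_i) = 1
        return {i for i, adj in self.adjacency.items() if len(adj) == 1}

    def transition_vertices(self):
        return {i for i, adj in self.adjacency.items() if len(adj) > 1}
```

For instance, a Y-shaped network with three edges meeting at $ \nu_0 $ has boundary $ \partial\Gamma = \{\nu_1,\nu_2,\nu_3\} $ and a single transition vertex $ \nu_0 $.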
The edges $ \Gamma_{\alpha}\in\mathcal{E} $ are oriented in an arbitrary manner. In most of what follows, we make the following arbitrary choice: an edge $ \Gamma_{\alpha}\in\mathcal{E} $ connecting two vertices $ \nu_i $ and $ \nu_j $ with $ i<j $ is oriented from $ \nu_i $ toward $ \nu_j $; this induces a natural parametrization $ \pi_\alpha: [0,\ell_\alpha]\to \Gamma_\alpha = [\nu_i,\nu_j] $:
For a function $ v:\Gamma\rightarrow\mathbb{R} $ and $ \alpha\in\mathcal{A} $, we define $ v_\alpha: (0,\ell_\alpha)\rightarrow\mathbb{R} $ by
The function $ v_\alpha $ is a priori defined only in $ (0,\ell_\alpha) $. When it is possible, we extend it by continuity at the boundary by setting
In that latter case, we can define
Notice that $ v|_{\Gamma_{\alpha}} $ does not coincide with the original function $ v $ at the vertices in general when $ v $ is not continuous.
Remark 1. In what precedes, the edges have been arbitrarily oriented from the vertex with the smaller index toward the vertex with the larger one. Other choices are of course possible. In particular, by possibly dividing a single edge into two, adding thereby new artificial vertices, it is always possible to assume that for all vertices $ \nu_i\in\mathcal{V} $,
This idea was used by Von Below in [35]: some edges of $ \Gamma $ are cut into two by adding artificial vertices so that the new oriented network $ \overline{\Gamma} $ has the property (3), see Figure 1 for an example.
In Sections 1.2 and 1.3 below, especially when dealing with stochastic calculus, it will be convenient to assume that property (3) holds. In the remaining part of the paper, it will be convenient to work with the original network, i.e., without the additional artificial vertices and with the orientation of the edges that has been chosen initially.
1.1.2. Function spaces
The set of continuous functions on $ \Gamma $ is denoted by $ C(\Gamma) $ and we set
By the definition of piecewise continuous functions $ v\in PC(\Gamma) $, for all $ \alpha\in \mathcal{A} $, it is possible to define $ v|_{\Gamma_\alpha} $ by (2) and we have $ v|_{\Gamma_\alpha}\in C(\Gamma_\alpha) $, $ v_\alpha\in C([0,\ell_{\alpha}]) $.
For $ m\in\mathbb{N} $, the space of $ m $-times continuously differentiable functions on $ \Gamma $ is defined by
Notice that $ v\in C^{m}\left(\Gamma\right) $ is assumed to be continuous on $ \Gamma $, and that its restriction $ v_{|\Gamma_\alpha} $ to each edge $ \Gamma_\alpha $ belongs to $ C^m(\Gamma_\alpha) $. The space $ C^{m}\left(\Gamma\right) $ is endowed with the norm $ \left\Vert v\right\Vert _{C^{m}\left(\Gamma\right)}: = {\sum}_{\alpha\in\mathcal{A}}{\sum}_{k\le m}\left\Vert \partial^{k}v_{\alpha}\right\Vert _{L^{\infty}\left(0,\ell_{\alpha}\right)} $. For $ \sigma\in\left(0,1\right) $, the space $ C^{m,\sigma}\left(\Gamma\right) $ contains the functions $ v\in C^{m}\left(\Gamma\right) $ such that $ \partial^{m}v_{\alpha}\in C^{0,\sigma}\left(\left[0,\ell_{\alpha}\right]\right) $ for all $ \alpha\in \mathcal{A} $; it is endowed with the norm $ { \left\Vert v\right\Vert _{C^{m,\sigma}\left(\Gamma\right)}: = \left\Vert v\right\Vert _{C^{m}\left(\Gamma\right)}+\sup\limits_{\alpha\in\mathcal{A}}\sup\limits_{y\ne z \atop y,z\in\left[0,\ell_{\alpha}\right]}\dfrac{\left|\partial^{m}v_{\alpha}\left(y\right)-\partial^{m}v_{\alpha}\left(z\right)\right|}{\left|y-z\right|^{\sigma}}} $.
For a positive integer $ m $ and a function $ v\in C^{m}\left(\Gamma\right) $, we set for $ k\le m $,
For a vertex $ \nu $, we define $ \partial_{\alpha}v\left(\nu\right) $ as the outward directional derivative of $ v|_{\Gamma_{\alpha}} $ at $ \nu $ as follows:
For all $ i\in I $ and $ \alpha\in \mathcal A_i $, setting
we have
Remark 2. Changing the orientation of the edge does not change the value of $ \partial_\alpha v(\nu) $ in (5).
If for all $ \alpha\in\mathcal{A} $, $ v_{\alpha} $ is Lebesgue-integrable on $ (0,\ell_{\alpha}) $, then the integral of $ v $ on $ \Gamma $ is defined by $ \int_{\Gamma}v\left(x\right)dx = \sum_{\alpha\in\mathcal{A}}\int_{0}^{\ell_{\alpha}}v_{\alpha}\left(y\right)dy $. The space
$ p\in [1,\infty] $, is endowed with the norm $ \left\Vert v\right\Vert _{L^{p}\left(\Gamma\right)}: = \left( \sum_{\alpha\in\mathcal{A}}\left\Vert v_{\alpha}\right\Vert _{L^{p} \left(0,\ell_{\alpha}\right)}^p \right)^{\frac 1 p} $ if $ 1\le p<\infty $, and $ \max_{\alpha\in \mathcal A} \|v_\alpha\|_{L^\infty\left(0,\ell_{\alpha}\right)} $ if $ p = +\infty $. We shall also need to deal with functions on $ \Gamma $ whose restrictions to the edges are weakly-differentiable: we shall use the same notations for the weak derivatives. Let us introduce Sobolev spaces on $ \Gamma $:
Definition 1.1. For any integer $ s\ge 1 $ and any real number $ p\ge 1 $, the Sobolev space $ W^{s,p}(\Gamma) $ is defined as follows: $ W^{s,p}(\Gamma): = \left\{ v\in C\left(\Gamma\right):v_{\alpha}\in W^{s,p}\left(0,\ell_{\alpha}\right) \;\forall \alpha\in\mathcal{A}\right\} $, and endowed with the norm $ \left\Vert v\right\Vert _{W^{s,p}\left(\Gamma\right)} = \left(\sum\limits^{s}_{k = 1}\sum\limits_{\alpha\in\mathcal{A}} \left\Vert \partial^{k}v_{\alpha}\right\Vert _{L^{p}\left(0,\ell_{\alpha}\right)}^{p}+ \left\Vert v\right\Vert _{L^p(\Gamma)}^{p}\right)^{\frac 1 p} $. We also set $ H^s(\Gamma) = W^{s,2}(\Gamma) $.
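The edge-by-edge definition of the integral $ \int_{\Gamma}v\,dx = \sum_{\alpha\in\mathcal{A}}\int_{0}^{\ell_{\alpha}}v_{\alpha}(y)\,dy $ can be illustrated numerically. The sketch below is ours (trapezoidal quadrature, hypothetical argument names) and simply sums the one-dimensional integrals over the edges.

```python
import numpy as np

def integrate_on_network(v_edge, lengths, n=2001):
    """Approximate int_Gamma v dx = sum_alpha int_0^{ell_alpha} v_alpha dy.

    v_edge:  dict alpha -> callable v_alpha on [0, ell_alpha] (vectorized)
    lengths: dict alpha -> ell_alpha
    Each edge integral is computed by the trapezoidal rule with n points.
    """
    total = 0.0
    for alpha, ell in lengths.items():
        y = np.linspace(0.0, ell, n)
        f = np.asarray(v_edge[alpha](y), dtype=float)
        # trapezoidal rule on the edge parametrized by [0, ell_alpha]
        total += float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))
    return total
```

For example, the integral of the constant function $ 1 $ over a network with three unit edges is the total length $ 3 $.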
1.2. A class of stochastic processes on $ \Gamma $
After rescaling the edges, it may be assumed that $ \ell_{\alpha} = 1 $ for all $ \alpha\in\mathcal{A} $. Let $ \mu_{\alpha} $, $ \alpha\in\mathcal{A} $, and $ p_{i\alpha} $, $ i\in I $, $ \alpha\in\mathcal{A}_{i} $, be positive constants such that $ \sum_{\alpha\in\mathcal{A}_{i}}p_{i\alpha} = 1 $ for all $ i\in I $. Consider also a real valued function $ a\in PC(\Gamma) $ such that for all $ \alpha \in \mathcal A $, $ a|_{\Gamma_\alpha} $ is Lipschitz continuous.
As in Remark 1, we make the assumption (3) by possibly adding artificial nodes: if $ \nu_i $ is such an artificial node, then $ \sharp( \mathcal A_i) = 2 $, and we assume that $ p_{i\alpha} = 1/2 $ for $ \alpha\in \mathcal A_i $. The diffusion parameter $ \mu $ has the same value on the two sides of an artificial vertex. Similarly, the function $ a $ does not have jumps across an artificial vertex.
Let us consider the linear differential operator:
with domain
Remark 3. Note that in the definition of $ D\left(\mathcal{L}\right) $, the condition at boundary vertices boils down to a Neumann condition.
Freidlin and Sheu proved in [15] that
1. The operator $ \mathcal L $ is the infinitesimal generator of a Feller-Markov process on $ \Gamma $ with continuous sample paths. The operators $ \mathcal L_\alpha $ and the transmission conditions at the vertices
define such a process in a unique way, see also [16,Theorem 3.1]. The process can be written $ (X_t, \alpha_t) $ where $ X_t\in \Gamma_{\alpha_t} $. If $ X_t = \nu_i $, $ i\in I $, $ \alpha_t $ is arbitrarily chosen as the smallest index in $ \mathcal A_i $. Setting $ x_t = \pi_{\alpha_t}(X_t) $ defines the process $ x_t $ with values in $ [0,1] $.
2. There exist
(a) a one dimensional Wiener process $ W_t $,
(b) continuous non-decreasing processes $ \ell_{i,t} $, $ i\in I $, which are measurable with respect to the $ \sigma $-field generated by $ (X_t,\alpha_t) $,
(c) continuous non-increasing processes $ h_{i,t} $, $ i\in I $, which are measurable with respect to the $ \sigma $-field generated by $ (X_t,\alpha_t) $,
such that
3. The following Ito formula holds: for any real valued function $ u \in C^2(\Gamma) $:
Remark 4. The assumption that all the edges have unit length is not restrictive, because we can always rescale the constants $ \mu_\alpha $ and the piecewise continuous function $ a $. The Ito formula in (12) holds when this assumption is not satisfied.
Consider the invariant measure associated with the process $ X_t $. We may assume that it is absolutely continuous with respect to the Lebesgue measure on $ \Gamma $. Let $ m $ be its density:
We focus on functions $ u\in D\left(\mathcal{L}\right) $. Taking the time-derivative of each side of (13), Ito's formula (12) and (10) lead to $ \mathbb{E}\left[ \mathbb{1}_{\{X_t\notin \mathcal V\}} \left(a\partial u (X_t)+\mu\partial^{2}u(X_t)\right) \right] = 0 $. This implies that
Since for $ \alpha \in \mathcal A $, any smooth function on $ \Gamma $ compactly supported in $ \Gamma_\alpha \backslash \mathcal V $ clearly belongs to $ D({\mathcal L}) $, (14) implies that $ m $ satisfies
in the sense of distributions in the edges $ \Gamma_\alpha \backslash \mathcal V $, $ \alpha\in \mathcal A $. This implies that there exists a real number $ c_\alpha $ such that
Hence $ m|_{\Gamma_\alpha} $ is of class $ C^1 $, and (16) holds pointwise. Using this information and recalling (14), we find that, for all $ u\in D( \mathcal L) $,
This and (16) imply that
For all $ i\in I $, it is possible to choose a function $ u\in D( \mathcal L) $ such that
1. $ u(\nu_j) = \delta_{i,j} $ for all $ j\in I $;
2. $ \partial_\alpha u(\nu_j) = 0 $ for all $ j\in I $ and $ \alpha\in \mathcal A_j $.
Using such a test-function in (17) implies that for all $ i\in I $,
where $ n_{i\alpha} $ is defined in (6).
For all $ i\in I $ and $ \alpha,\beta\in \mathcal A_i $, it is possible to choose a function $ u\in D( \mathcal L) $ such that
1. $ u $ takes the same value at each vertex of $ \Gamma $, thus $ \int_{\Gamma_\delta} \partial u|_{\Gamma_\delta}(x)dx = 0 $ for all $ \delta\in \mathcal A $;
2. $ \partial_\alpha u(\nu_i) = 1/p_{i\alpha} $, $ \partial_\beta u(\nu_i) = -1/p_{i\beta} $ and all the other first order directional derivatives of $ u $ at the vertices are $ 0 $.
Using such a test-function in (17) yields
in which
Next, for $ i\in I $, multiplying (16) at $ x = \nu_i $ by $ n_{i\alpha} $ for all $ \alpha\in \mathcal A_i $, then summing over all $ \alpha\in \mathcal A_i $, we get $ \sum_{\alpha\in \mathcal A_i} \Bigl[ \mu_{\alpha}\partial_\alpha m \left(\nu_{i}\right) - n_{i\alpha}\Bigl( m|_{\Gamma_\alpha}\left(\nu_{i}\right) a|_{\Gamma_\alpha}\left(\nu_{i}\right)-c_\alpha\Bigr)\Bigr] = 0 $, and using (18), we obtain that
Summarizing, we get the following boundary value problem for $ m $ (recalling that the coefficients $ n_{i\alpha} $ are defined in (6)):
1.3. Formal derivation of the MFG system on $ \Gamma $
Consider a continuum of indistinguishable agents moving on the network $ \Gamma $. The state of a representative agent at time $ t $ is a time-continuous controlled stochastic process $ X_t $ as defined in Section 1.2, where the control is the drift $ a_t $, supposed to be of the form $ a_t = a(X_t) $. The function $ X\mapsto a(X) $ is the feedback. Let $ m(\cdot, t) $ be the probability measure on $ \Gamma $ that describes the distribution of states at time $ t $.
For a representative agent, the optimal control problem is of the form:
where $ \mathbb{E}_{x} $ stands for the expectation conditioned by the event $ X_0 = x $. The running cost depends separately on the control and on the distribution of states.
● The contribution of the control involves the Lagrangian $ L $, i.e., a real valued function defined on $ \left(\cup_{\alpha \in \mathcal{A}} \Gamma_\alpha \backslash \mathcal{V} \right) \times \mathbb R $. If $ x\in \Gamma_\alpha \backslash \mathcal{V} $ and $ a\in \mathbb R $, $ L(x,a) = L_\alpha(\pi_\alpha^{-1}(x),a) $, where $ L_\alpha $ is a continuous real valued function defined on $ [0,\ell_\alpha]\times \mathbb R $. We assume that $ \lim_{|a|\to \infty} \inf_{y\in \Gamma_\alpha} {L_\alpha(y,a)}/{|a|} = +\infty $.
● The contribution of the distribution of states involves the coupling cost operator, which can either be nonlocal, i.e., $ V:\mathcal{P}\left(\Gamma\right)\rightarrow C^2 (\Gamma) $ (where $ \mathcal{P}\left(\Gamma\right) $ is the set of Borel probability measures on $ \Gamma $), or local, i.e., $ V[m](x) = F(m(x)) $ for a continuous function $ F: \mathbb R^+ \to \mathbb R $, assuming that $ m $ is absolutely continuous with respect to the Lebesgue measure and identifying $ m $ with its density.
Further assumptions on $ L $ and $ V $ will be made below.
Let us assume that there is an optimal feedback law, i.e. a function $ a^\star $ defined on $ \Gamma $ which is sufficiently regular in the edges of the network, such that the optimal control at time $ t $ is given by $ a_t ^\star = a^\star(X_t) $. Then, almost surely if $ X_t\in \Gamma_\alpha\backslash \mathcal V $, $ d \pi_{\alpha}^{-1} (X_t) = a^\star_{\alpha}(\pi_{\alpha}^{-1} (X_t)) dt + \sqrt{ 2\mu_{\alpha}} dW_t $. An informal way to describe the behavior of the process at the vertices is as follows: if $ X_t $ hits $ \nu_{i}\in\mathcal{V} $, then it enters $ \Gamma_{\alpha} $, $ \alpha\in\mathcal{A}_{i} $ with probability $ p_{i\alpha}>0 $.
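The informal vertex rule just described can be illustrated by a crude Monte Carlo sketch: an Euler-Maruyama step inside each edge, and, whenever a vertex $ \nu_i $ is hit, a restart into the edge $ \Gamma_\alpha $, $ \alpha\in\mathcal A_i $, drawn with probability $ p_{i\alpha} $. This is only an illustration, not the rigorous construction of [15,16], and all names below are our own conventions.

```python
import math
import random

def simulate_on_network(edges, adjacency, mu, p, a, dt=1e-4, T=0.1, seed=0):
    """Informal simulation of the controlled process on the network.

    edges:     alpha -> (i, j, ell): edge from nu_i to nu_j, length ell
    adjacency: i -> list of adjacent edge indices (the set A_i)
    mu:        alpha -> diffusion constant mu_alpha
    p:         i -> {alpha: p_{i alpha}}, with sum_alpha p_{i alpha} = 1
    a:         (alpha, y) -> drift (the feedback written in the local chart)
    Returns the edge index and local coordinate at time T.
    """
    rng = random.Random(seed)
    alpha = next(iter(edges))
    i, j, ell = edges[alpha]
    y, t = ell / 2.0, 0.0                      # start mid-edge
    while t < T:
        # Euler-Maruyama step for dy = a dt + sqrt(2 mu_alpha) dW
        y += a(alpha, y) * dt + math.sqrt(2.0 * mu[alpha] * dt) * rng.gauss(0.0, 1.0)
        if y <= 0.0 or y >= ell:               # the process hits a vertex
            i_hit = i if y <= 0.0 else j
            choices = adjacency[i_hit]
            # enter Gamma_alpha with probability p_{i alpha}
            alpha = rng.choices(choices, weights=[p[i_hit][b] for b in choices])[0]
            i, j, ell = edges[alpha]
            y = 0.0 if i == i_hit else ell     # re-enter at the vertex hit
        t += dt
    return alpha, min(max(y, 0.0), ell)
```

On a boundary vertex ($ \sharp(\mathcal A_i) = 1 $), the rule reduces to reflection into the unique adjacent edge, consistently with the Neumann condition.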
Under suitable assumptions, the Ito calculus recalled in Section 1.2 and the dynamic programming principle lead to the following ergodic Hamilton-Jacobi equation on $ \Gamma $, more precisely the following boundary value problem:
We refer to [28,30] for the interpretation of the value function $ v $ and the ergodic cost $ \rho $.
Let us comment on the different equations in (23):
1. The Hamiltonian $ H $ is a real valued function defined on $ \left(\cup_{\alpha \in \mathcal{A}} \Gamma_\alpha \backslash \mathcal{V} \right) \times \mathbb R $. For $ x\in \Gamma_\alpha \backslash \mathcal{V} $ and $ p\in \mathbb R $,
The Hamiltonians $ H|_{\Gamma_\alpha\times \mathbb R} $ are supposed to be $ C^1 $ and coercive with respect to $ p $ uniformly in $ x $ (see Section 1.4.1).
2. The second equation in (23) is a Kirchhoff transmission condition (or Neumann boundary condition if $ \nu_i\in\partial\Gamma $); it is the consequence of the assumption on the behavior of $ X_s $ at vertices. It involves the positive constants $ \gamma_{i\alpha} $ defined in (19).
3. The third condition means in particular that $ v $ is continuous at the vertices.
4. The fourth equation is a normalization condition.
If (23) has a smooth solution, then it provides a feedback law for the optimal control problem, i.e.
At the MFG equilibrium, $ m $ is the density of the invariant measure associated with the optimal feedback law, so, according to Section 1.2, it satisfies (21), where $ a $ is replaced by $ a^\star = -\partial_{p}H\left(x,\partial v\left(x\right)\right) $. We end up with the following system:
At a vertex $ \nu_i $, $ i\in I $, the transmission conditions for both $ v $ and $ m $ consist of $ d_{\nu_{i}} = \sharp( \mathcal A_i) $ linear relations, which is the appropriate number of relations for a well-posed problem. If $ \nu_i\in \partial \Gamma $, there is of course only one Neumann-like condition for $ v $ and one for $ m $.
Remark 5. In [7], the authors assume that $ \gamma_{i\alpha} = \gamma_{i\beta} $ for all $ i\in I $, $ \alpha,\beta\in\mathcal{A}_i $. Therefore, the density $ m $ does not have jumps across the transition vertices.
1.4. Assumptions and main results
1.4.1. Assumptions
Let $ (\mu_\alpha)_{\alpha\in \mathcal{A}} $ be a family of positive numbers, and for each $ i\in I $ let $ ( \gamma_{i\alpha})_{\alpha\in\mathcal{A}_{i}} $ be a family of positive numbers such that $ \sum_{\alpha\in \mathcal A_i} \gamma_{i\alpha}\mu_\alpha = 1 $.
Consider the Hamiltonian $ H:\Gamma\times\mathbb{R}\rightarrow\mathbb{R} $. We assume that for all $ \alpha\in\mathcal{A} $, we can define $ H_\alpha: [0,\ell_{\alpha}]\times\mathbb{R}\to \mathbb R $ and $ H|_{\Gamma_{\alpha}}: \Gamma_{\alpha}\times\mathbb{R}\rightarrow\mathbb{R} $ by (2) and, for some positive constants $ C_{0},C_{1},C_{2} $ and $ q\in\left(1,2\right] $,
Remark 6. The Hamiltonian $ H $ is discontinuous at the vertices in general, although $ H_\alpha $ is $ C^1 $ up to the endpoints of $ \left[0,\ell_{\alpha}\right] $.
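For concreteness, a prototypical Hamiltonian compatible with the requirements of this subsection (our example, not one taken from the text) is
\[
H_\alpha(x,p) \;=\; b_\alpha(x)\,\lvert p\rvert^{q} + c_\alpha(x),
\qquad (x,p)\in\left[0,\ell_{\alpha}\right]\times\mathbb{R},\quad q\in\left(1,2\right],
\]
with $ b_\alpha, c_\alpha\in C^{1}\left(\left[0,\ell_{\alpha}\right]\right) $ and $ b_\alpha\ge b_0>0 $: each $ H_\alpha $ is then $ C^1 $ (because $ q>1 $) and coercive with respect to $ p $ uniformly in $ x $, while $ H $ itself is in general discontinuous at the vertices when the coefficients differ from edge to edge, in agreement with Remark 6.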
Remark 7. From (28), there exists a positive constant $ C_{q} $ such that
Below, we shall focus on local coupling operators $ V $, namely
for all $ \tilde{m} $ which are absolutely continuous with respect to the Lebesgue measure and such that $ d\tilde{m}\left(x\right) = m\left(x\right)dx $. We shall also suppose that $ F $ is bounded from below, i.e., there exists a positive constant $ M $ such that
1.4.2. Function spaces related to the Kirchhoff conditions
Let us introduce two function spaces on $ \Gamma $, which will be the key ingredients in order to build weak solutions of (24).
Definition 1.2. We define two Sobolev spaces, $ V: = H^1(\Gamma) $, see Definition 1.1, and
which is also a Hilbert space, endowed with the norm $ \left\Vert w\right\Vert _{W} = \left(\sum\limits_{\alpha\in\mathcal{A}}\left\Vert w_{\alpha}\right\Vert^2 _{H^{1}\left(0,\ell_{\alpha}\right)}\right)^{\frac 1 2} $.
Remark 8. We point out that, following Definition 1.1, functions in $ V $ are continuous on $ \Gamma $. By contrast, functions in $ W $ are discontinuous in general.
Definition 1.3. Let the functions $ \psi\in W $ and $ \phi\in PC(\Gamma) $ be defined as follows:
Note that both functions $ \psi,\phi $ are positive and bounded. We set $ \overline{\psi} = \max_{\Gamma}\psi $, $ \underline{\psi} = \min_{\Gamma}\psi $, $ \overline{\phi} = \max_{\Gamma}\phi $, $ \underline{\phi} = \min_{\Gamma}\phi $.
Remark 9. One can see that $ v\in V\longmapsto v\psi $ is an isomorphism from $ V $ onto $ W $ and $ w\in W\longmapsto w\phi $ is the inverse isomorphism.
Definition 1.4. Let the function space $ \mathcal{W}\subset W $ be defined as follows:
Remark 10. A function $ m\in \mathcal{W} $ is in general discontinuous at the vertices of $ \Gamma $, although for any $ \alpha\in \mathcal A $, $ m_\alpha $ is $ C^1 $ in $ \left[0,\ell_{\alpha}\right] $.
1.4.3. Main result
Definition 1.5. A solution of the Mean Field Games system (24) is a triple $ \left(v,\rho,m\right)\in C^{2}\left(\Gamma\right)\times\mathbb{R}\times\mathcal{W} $ such that $ \left(v,\rho\right) $ is a classical solution of
(note that $ v $ is continuous at the vertices from the definition of $ C^2 (\Gamma) $), and $ m $ satisfies
where $ V $ is given in Definition 1.2.
We are ready to state the main result:
Theorem 1.6. If assumptions (25)-(28) and (30)-(31) are satisfied, then there exists a solution $ \left(v,m,\rho\right)\in C^{2}\left(\Gamma\right)\times\mathcal{W}\times\mathbb{R} $ of (24). If $ F $ is locally Lipschitz continuous, then $ v \in C^{2,1}(\Gamma) $. Moreover if $ F $ is strictly increasing, then the solution is unique.
Remark 11. The proof of the existence result in [7] is valid only in the case when the coupling cost $ F $ is bounded.
Remark 12. The existence result in Theorem 1.6 holds if we assume that the coupling operator $ V $ is non local and regularizing, i.e., $ V $ is a continuous map from $ \mathcal{P} $ to a bounded subset of $ \mathcal{F} $, with $ \mathcal{F}: = \left\{ f:\Gamma\rightarrow\mathbb{R}:\;f|_{\Gamma_{\alpha}}\in C^{0,\sigma}\left(\Gamma_{\alpha}\right)\right\} $. The proof, omitted in what follows, is similar to that of Lemma 4.1 below.
2. Preliminary: A class of linear boundary value problems
This section contains elementary results on the solvability of some linear boundary value problems on $ \Gamma $. To the best of our knowledge, these results are not available in the literature.
2.1. A first class of problems
We recall that the constants $ \mu_\alpha $ and $ \gamma_{i\alpha} $ are defined in Section 1.2. Let $ \lambda $ be a positive number. We start with very simple linear boundary value problems, in which the only difficulty is the Kirchhoff condition:
where $ f\in W' $, $ W' $ is the topological dual of $ W $.
Remark 13. We have already noticed that, if $ \nu_{i}\in\partial\Gamma $, the last condition in (38) boils down to a standard Neumann boundary condition $ \partial_{\alpha}v\left(\nu_{i}\right) = 0 $, in which $ \alpha $ is the unique element of $ \mathcal A_i $. Otherwise, if $ \nu_{i}\in\mathcal{V}\backslash\partial\Gamma $, the last condition in (38) is the Kirchhoff condition discussed above.
Definition 2.1. A weak solution of (38) is a function $ v\in V $ such that
where $ \mathscr{B}_{\lambda}:V\times W\rightarrow\mathbb{R} $ is the bilinear form defined as follows:
Remark 14. Formally, (39) is obtained by testing the first line of (38) by $ w\in W $, integrating by parts the left hand side on each $ \Gamma_\alpha $ and summing over $ \alpha\in \mathcal A $. There is no contribution from the vertices, because of the Kirchhoff conditions on the one hand, and the jump conditions satisfied by the elements of $ W $ on the other hand.
Remark 15. By using the fact that the $ \Gamma_\alpha $ are line segments, i.e. one dimensional sets, and solving the differential equations, we see that if $ v $ is a weak solution of (38) with $ f\in PC\left(\Gamma\right) $, then $ v\in C^{2}\left(\Gamma\right) $.
Let us first study the homogeneous case, i.e. $ f = 0 $.
Lemma 2.2. The function $ v = 0 $ is the unique solution of the following boundary value problem
Proof. Let $ \mathcal{I}_{i}: = \left\{ k\in I:\; k\not = i;\; \nu_{k}\in\Gamma_{\alpha}\hbox{ for some }\alpha\in\mathcal{A}_{i}\right\} $ be the set of indices of the vertices which are connected to $ \nu_{i} $. By Remark 1, it is not restrictive to assume (in the remainder of the proof) that for all $ k\in\mathcal{I}_{i} $, $ \Gamma_{\alpha} = \Gamma_{\alpha_{ik}} = \left[\nu_{i},\nu_{k}\right] $ is oriented from $ \nu_{i} $ to $ \nu_{k} $.
For $ k\in\mathcal{I}_{i} $, $ \Gamma_\alpha = [\nu_i, \nu_k] $, using the parametrization (1), the linear differential equation (40) in the edge $ \Gamma_{\alpha} $ is
whose solution is
with
It follows that $ \partial_{\alpha}v\left(\nu_{i}\right) = -\sqrt{\lambda}\xi_{\alpha} = -\dfrac{\sqrt{\lambda}}{\sinh\left(\sqrt{\lambda}\ell_{\alpha}\right)}\left[v\left(\nu_{k}\right)-v\left(\nu_{i}\right)\cosh\left(\sqrt{\lambda}\ell_{\alpha}\right)\right] $. Hence, the transmission condition in (40) becomes: for all $ i\in I $,
Therefore, we obtain a system of linear equations of the form $ MU = 0 $ with $ M = \left(M_{ij}\right)_{1\le i,j\le N} $, $ N = \sharp(I) $, and $ U = \left(v\left(\nu_{1}\right),\ldots,v\left(\nu_{N}\right)\right)^{T} $, where $ M $ is defined by
For all $ i\in I $, since $ \cosh\left(\sqrt{\lambda}\ell_{\alpha_{ik}}\right)>1 $ for all $ k\in\mathcal{I}_{i} $, the sum of the entries on each row is positive and $ M $ is strictly diagonally dominant. Thus, $ M $ is invertible and $ U = 0 $ is the unique solution of the system. Finally, by solving the ODE in each edge $ \Gamma_{\beta} $ with $ v_{\beta}\left(0\right) = v_{\beta}\left(\ell_{\beta}\right) = 0 $, we get that $ v = 0 $ on $ \Gamma $.
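The assembly of $ M $ and the diagonal dominance argument can be checked numerically. The sketch below is ours; it encodes the weights $ \gamma_{i\alpha}\mu_{\alpha} $ in a dictionary and uses the formula $ \partial_{\alpha}v(\nu_{i}) = \frac{\sqrt{\lambda}}{\sinh(\sqrt{\lambda}\ell_{\alpha})}\bigl[\cosh(\sqrt{\lambda}\ell_{\alpha})\,v(\nu_{i})-v(\nu_{k})\bigr] $ from the proof, under the assumption that the transmission condition has the Kirchhoff form $ \sum_{\alpha\in\mathcal A_i}\gamma_{i\alpha}\mu_{\alpha}\partial_{\alpha}v(\nu_{i}) = 0 $.

```python
import math
import numpy as np

def kirchhoff_matrix(edges, gamma_mu, lam):
    """Assemble the matrix M of the system M U = 0 from the proof above.

    edges:    alpha -> (i, j, ell): edge joining nu_i and nu_k=nu_j, length ell
    gamma_mu: (i, alpha) -> value of gamma_{i alpha} * mu_alpha (our encoding)
    lam:      the positive constant lambda.
    """
    nverts = 1 + max(max(i, j) for (i, j, _) in edges.values())
    M = np.zeros((nverts, nverts))
    s = math.sqrt(lam)
    for alpha, (i, j, ell) in edges.items():
        c, sh = math.cosh(s * ell), math.sinh(s * ell)
        for (v0, v1) in ((i, j), (j, i)):   # Kirchhoff condition at each endpoint
            w = gamma_mu[(v0, alpha)]
            M[v0, v0] += w * s * c / sh     # coefficient of v(nu_i)
            M[v0, v1] -= w * s / sh         # coefficient of v(nu_k)
    return M
```

On the Y-shaped network of unit edges with all weights equal to one, each row sum equals $ \sqrt{\lambda}\,(\cosh\sqrt{\lambda}-1)/\sinh\sqrt{\lambda}>0 $ times the number of incident edges, so $ M $ is strictly diagonally dominant, hence invertible.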
Let us now study the non-homogeneous problems (38).
Lemma 2.3. For any $ f $ in $ W' $, (38) has a unique weak solution $ v $ in $ V $, see Definition 1.2. Moreover, there exists a constant $ C $ such that $ \left\Vert v\right\Vert _{V}\le C\left\Vert f\right\Vert _{W'}. $
Proof. First of all, we claim that for $ \lambda_{0}>0 $ large enough and any $ f\in W' $, the problem
has a unique solution $ v\in V $. Let us prove the claim. Let $ v\in V $; then $ \hat{w}: = v\psi $ belongs to $ W $, where $ \psi $ is given by Definition 1.3. Let us set $ \overline{\partial\psi}: = \max_{\Gamma}\left|\partial\psi\right| $ and $ \underline{\psi}: = \min_{\Gamma}\psi >0 $ ($ \partial\psi $ is bounded, see Definition 1.3); we get
When $ \lambda_{0}\ge\dfrac{\mu_{\alpha}}{2}+\dfrac{\mu_{\alpha}\overline{\partial\psi}^{2}}{2\underline{\psi}^{2}} $ for all $ \alpha\in\mathcal{A} $, we obtain that $ \mathscr{B}_{\lambda}\left(v,\hat{w}\right)+\lambda_{0}\left(v,\hat{w}\right)\ge\dfrac{\underline{\mu}\; \underline{\psi}}{2}\left\Vert v\right\Vert _{V}^{2}\ge\dfrac{\underline{\mu}\; \underline{\psi}}{2C_{\psi}}\left\Vert v\right\Vert _{V}\left\Vert \hat{w}\right\Vert _{W} $, using the fact that, from Remark 9, there exists a positive constant $ C_{\psi} $ such that $ \left\Vert v\psi\right\Vert _{W}\le C_{\psi}\left\Vert v\right\Vert _{V} $ for all $ v\in V $. This yields
Using a similar argument for any $ w\in W $ and $ \hat{v} = w\phi $, where $ \phi $ is given in Definition 1.3, we obtain that for $ \lambda_{0} $ large enough, there exists a positive constant $ C_{\phi} $ such that
From (44) and (45), by the Banach-Necas-Babuška lemma (see [13]), for $ \lambda_{0} $ large enough, for any $ f\in W' $, there exists a unique solution $ v\in V $ of (42) and $ \left\Vert v\right\Vert _{V}\le C\left\Vert f\right\Vert _{W'} $ for a positive constant $ C $. Hence, our claim is proved.
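For the reader's convenience, let us recall the Banach-Necas-Babuška criterion in the form used here (see [13]; $ \mathscr{B} $ a bounded bilinear form on $ V\times W $, $ W $ reflexive): for every $ f\in W' $, the problem "find $ v\in V $ such that $ \mathscr{B}(v,w) = \langle f,w\rangle $ for all $ w\in W $" has a unique solution satisfying $ \left\Vert v\right\Vert_{V}\le\beta^{-1}\left\Vert f\right\Vert_{W'} $ if and only if
\[
\inf_{v\in V\setminus\{0\}}\;\sup_{w\in W\setminus\{0\}}
\frac{\mathscr{B}(v,w)}{\left\Vert v\right\Vert_{V}\left\Vert w\right\Vert_{W}}\;\ge\;\beta\;>\;0,
\qquad\text{and}\qquad
\Bigl(\mathscr{B}(v,w) = 0\ \ \forall v\in V\Bigr)\;\Longrightarrow\; w = 0 .
\]
The two estimates established in the claim are precisely what is needed to verify these two conditions for the form $ \mathscr{B}_{\lambda}+\lambda_{0}(\cdot,\cdot) $.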
Now, we fix $ \lambda_{0} $ large enough and define the continuous linear operator $ \overline{R}_{\lambda_{0}}:W'\rightarrow V $, where $ \overline{R}_{\lambda_{0}}\left(f\right) = v $ is the unique solution of (42). Since the injection $ \mathcal{I} $ from $ V $ into $ W' $ is compact, $ \mathcal{I}\circ\overline{R}_{\lambda_{0}} $ is a compact operator from $ W' $ into $ W' $. By the Fredholm alternative (see [18]), one of the following assertions holds:
We claim that (47) holds. Indeed, assume by contradiction that (46) holds: then there exists $ \overline{v}\in V $, $ \overline{v}\ne0 $, such that $ \mathcal{I}\circ\overline{R}_{\lambda_{0}}\overline{v} = \dfrac{\overline{v}}{\lambda_{0}} $. Therefore, $ \mathscr{B}_{\lambda}\left(\dfrac{\overline{v}}{\lambda_{0}},w\right)+\lambda_{0}\left(\dfrac{\overline{v}}{\lambda_{0}},w\right) = \left(\overline{v},w\right) $ for all $ w\in W $. This yields that $ \mathscr{B}_{\lambda}\left(\overline{v},w\right) = 0 $ for all $ w\in W $, and by Lemma 2.2, we get $ \overline{v} = 0 $, which is a contradiction. Hence, our claim is proved.
Then, (47) implies that there exists a positive constant $ C $ such that for all $ f\in W' $, (38) has a unique weak solution $ v $ and that $ \left\Vert v\right\Vert _{V}\le C\left\Vert f\right\Vert _{W'} $, see [11] for the details.
2.2. The Kolmogorov equation
Consider $ b\in PC\left(\Gamma\right) $. This paragraph is devoted to the following boundary value problem, which includes a Kolmogorov equation:
Definition 2.4. A weak solution of (48) is a function $ v\in V $ such that
where $ \mathscr{A}^{\star}:V\times W\rightarrow\mathbb{R} $ is the bilinear form defined by
As in Remark 15, if $ v $ is a weak solution of (48), then $ v\in C^{2}\left(\Gamma\right) $.
The uniqueness of solutions of (48) up to the addition of constants is obtained by using a maximum principle:
Lemma 2.5. For $ b\in PC\left(\Gamma\right) $, the solutions of (48) are the constant functions on $ \Gamma $.
Proof of Lemma 2.5. First of all, any constant function on $ \Gamma $ is a solution of (48). Now let $ v $ be a solution of (48); then $ v\in C^{2}\left(\Gamma\right) $. Assume that the maximum of $ v $ over $ \Gamma $ is achieved in $ \Gamma_{\alpha} $; by the maximum principle, it is achieved at some endpoint $ \nu_{i} $ of $ \Gamma_{\alpha} $. Without loss of generality, using Remark 1, we can assume that $ \pi_{\beta}\left(\nu_{i}\right) = 0 $ for all $ \beta\in\mathcal{A}_{i} $. We have $ \partial_{\beta}v\left(\nu_{i}\right)\ge 0 $ for all $ \beta\in\mathcal{A}_{i} $ because $ \nu_{i} $ is a maximum point of $ v $. Since all the coefficients $ \gamma_{i\beta},\mu_{\beta} $ are positive, by the Kirchhoff condition if $ \nu_{i} $ is a transition vertex, or by the Neumann boundary condition if $ \nu_{i} $ is a boundary vertex, we infer that $ \partial_{\beta}v\left(\nu_{i}\right) = 0 $ for all $ \beta\in\mathcal{A}_{i} $. This implies that $ \partial v_{\beta} $ is a solution of the first order linear homogeneous differential equation $ u'+b_{\beta}u = 0 $ on $ \left[0,\ell_{\beta}\right] $, with $ u\left(0\right) = 0 $. Therefore, $ \partial v_{\beta}\equiv 0 $ and $ v $ is constant on $ \Gamma_{\beta} $ for all $ \beta \in\mathcal{A}_{i} $. We can propagate this argument, starting from the vertices connected to $ \nu_{i} $. Since the network $ \Gamma $ is connected and $ v $ is continuous, we obtain that $ v $ is constant on $ \Gamma $.
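The last step can be made explicit by an integrating-factor computation (a sketch in the notation above, with $ u = \partial v_{\beta} $):

```latex
% Multiply u' + b_beta u = 0 by the integrating factor e^{B_beta(y)},
% where B_beta(y) = \int_0^y b_beta(s) ds is well defined since b_beta is bounded:
\frac{d}{dy}\left(e^{B_{\beta}\left(y\right)}u\left(y\right)\right)
  = e^{B_{\beta}\left(y\right)}\left(u'\left(y\right)+b_{\beta}\left(y\right)u\left(y\right)\right) = 0,
\qquad B_{\beta}\left(y\right):=\int_{0}^{y}b_{\beta}\left(s\right)ds.
```

Hence $ e^{B_{\beta}\left(y\right)}u\left(y\right) = u\left(0\right) = 0 $ for all $ y\in\left[0,\ell_{\beta}\right] $, so $ u\equiv 0 $.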
2.3. The dual Fokker-Planck equation
This paragraph is devoted to the dual boundary value problem of (48); it involves a Fokker-Planck equation:
where $ b\in PC\left(\Gamma\right) $, with
First of all, let $ \lambda_0 $ be a nonnegative constant; for all $ h\in V' $, we introduce the modified boundary value problem
Definition 2.6. For $ \lambda\in \mathbb R $, consider the bilinear form $ \mathscr{A}_{\lambda}:W\times V\rightarrow\mathbb{R} $ defined by
A weak solution of (51) is a function $ m\in W $ such that
A weak solution of (49) is a function $ m\in W $ such that
Remark 16. Formally, to get (52), we multiply the first line of (49) by $ v\in V $, integrate by parts, sum over $ \alpha\in\mathcal{A} $ and use the third line of (49) to see that there is no contribution from the vertices.
Theorem 2.7. For any $ b\in PC\left(\Gamma\right) $,
● (Existence) There exists a solution $ \widehat{m} \in W $ of (49)-(50) satisfying
where the constant $ C $ depends only on $ \left\Vert b\right\Vert _{\infty} $ and $ \left\{ \mu_{\alpha}\right\} _{\alpha\in\mathcal{A}} $. Moreover, $ \widehat{m}_{\alpha}\in C^{1}\left([0,\ell_{\alpha}]\right) $ for all $ \alpha\in\mathcal{A} $. Hence, $ \widehat{m}\in\mathcal{W} $.
● (Uniqueness) $ \widehat{m} $ is the unique solution of (49)-(50).
● (Strictly positive solution) $ \widehat{m} $ is strictly positive.
Proof of existence in Theorem 2.7. We divide the proof of existence into three steps:
Step 1. Let $ \lambda_{0} $ be a large positive constant that will be chosen later. We claim that for $ \overline{m}\in L^2(\Gamma) $ and $ h: = \lambda_0 \overline{m} \in L^2(\Gamma)\subset V' $, (51) has a unique solution $ m\in W $. This allows us to define a linear operator as follows:
where $ m $ is the solution of (51) with $ h = \lambda_0 \overline{m} $. We are going to prove that $ T $ is well-defined and continuous, i.e., for all $ \overline{m}\in L^{2}\left(\Gamma\right) $, (51) has a unique solution that depends continuously on $ \overline{m} $. For $ w\in W $, set $ \widehat{v}: = w\phi\in V $ where $ \phi $ is given by Definition 1.3. We have
It follows that when $ \lambda_{0} $ is large enough (larger than a constant that only depends on $ b,\phi $ and $ \mu_{\alpha} $), $ \mathscr{A}_{\lambda_{0}}\left(w,\widehat{v}\right)\ge\widehat{C}_{\lambda_{0}}\left\Vert w\right\Vert _{W}^{2} $ for some positive constant $ \widehat{C}_{\lambda_{0}} $. Moreover, by Remark 9, there exists a positive constant $ C_{\phi} $ such that $ \left\Vert w\phi\right\Vert _{V}\le C_{\phi}\left\Vert w\right\Vert _{W} $ for all $ w\in W $. This yields
Using similar arguments, for $ \lambda_{0} $ large enough, there exist two positive constants $ C_{\lambda_{0}} $ and $ C_{\psi} $ such that
From the Banach-Necas-Babuška lemma (see [13]), there exists a constant $ \overline{C} $ such that for all $ \overline{m}\in L^{2}\left(\Gamma\right) $, there exists a unique solution $ m $ of (51) with $ h = \lambda_0 \overline{m} $ and $ \left\Vert m\right\Vert _{W}\le\overline{C}\left\Vert \overline{m}\right\Vert _{L^{2}\left(\Gamma\right)} $. Hence, the map $ T $ is well-defined and continuous from $ L^{2}\left(\Gamma\right) $ to $ W $.
Step 2. Let $ K $ be the set defined by
We claim that $ T\left(K\right)\subset K $, i.e., that $ \int_{\Gamma}m\,dx = 1 $ and $ m\ge0 $. Indeed, using $ v = 1 $ as a test-function in (51), we have $ \int_{\Gamma}mdx = \int_{\Gamma}\overline{m}dx = 1 $. Next, consider the negative part $ m^{-} $ of $ m $, defined by $ m^{-}(x) = - \mathbb{1}_{\{m(x)<0\}} m(x) $. Notice that $ m^{-}\in W $ and $ m^{-}\phi\in V $, where $ \phi $ is given by Definition 1.3. Using $ m^{-}\phi $ as a test-function in (51) yields
We can see that the right hand side is non-negative. Moreover, for $ \lambda_{0} $ large enough (larger than the same constant as above, which only depends on $ b,\phi $ and $ \mu_{\alpha} $), the left hand side is non-positive. This implies that $ m^{-} = 0 $, and hence $ m\ge 0 $. Therefore, the claim is proved.
Step 3. We claim that $ T $ has a fixed point. Let us now focus on the case when $ \overline m \in K $. Using $ m\phi $ as a test function in (51) yields
Since $ H^{1}\left(0,\ell_{\alpha}\right) $ is continuously embedded in $ L^{\infty}\left(0,\ell_{\alpha}\right) $, there exists a positive constant $ C $ (independent of $ \overline m \in K $) such that
Hence, from (54), for $ \lambda_{0} $ large enough, there exists a positive constant $ C_{1} $ such that $ C_{1}\left\Vert m\right\Vert _{W}^{2}\le\lambda_{0}C\left\Vert m\right\Vert _{W} $. Thus
Therefore, $ T\left(K\right) $ is bounded in $ W $. Since the bounded subsets of $ W $ are relatively compact in $ L^{2}\left(\Gamma\right) $, $ \overline{T\left(K\right)} $ is compact in $ L^{2}\left(\Gamma\right) $. Moreover, we can see that $ K $ is closed and convex in $ L^{2}\left(\Gamma\right) $. By the Schauder fixed point theorem, see [18, Corollary 11.2], $ T $ has a fixed point $ \widehat{m}\in K $ which is also a solution of (49) and $ \left\Vert \widehat{m}\right\Vert _{W}\le \lambda_{0}C/C_{1} $.
Finally, from the differential equation in (49), for all $ \alpha\in\mathcal{A} $, $ \left(\widehat{m}_{\alpha}'+b_{\alpha}\widehat{m}_{\alpha}\right)' = 0 $ on $ (0,\ell_{\alpha}) $. Hence, there exists a constant $ C_{\alpha} $ such that
It follows that $ \widehat{m}'_{\alpha}\in C( [0,\ell_{\alpha}]) $, for all $ \alpha\in\mathcal{A} $. Hence $ \widehat{m}_{\alpha}\in C^{1}([0,\ell_{\alpha}]) $ for all $ \alpha\in\mathcal{A} $. Thus, $ \widehat{m}\in \mathcal{W} $.
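The regularity claims follow from the explicit representation of the solutions of the first order equation above, written (as in (56) and its uses below) as $ \widehat{m}_{\alpha}'+b_{\alpha}\widehat{m}_{\alpha} = C_{\alpha} $; by variation of constants,

```latex
% Variation of constants for m' + b m = C_alpha on [0, l_alpha]:
\widehat{m}_{\alpha}\left(y\right)
  = e^{-B_{\alpha}\left(y\right)}\left(\widehat{m}_{\alpha}\left(0\right)
    + C_{\alpha}\int_{0}^{y}e^{B_{\alpha}\left(s\right)}ds\right),
\qquad B_{\alpha}\left(y\right):=\int_{0}^{y}b_{\alpha}\left(s\right)ds.
```

In particular, when $ C_{\alpha} = 0 $, this representation shows that $ \widehat{m}_{\alpha} $ vanishes identically as soon as it vanishes at one point, which is the mechanism used in the positivity proof below.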
Remark 17. Let $ m\in W $ be a solution of (49). If $ b,\partial b\in PC\left(\Gamma\right) $, standard arguments yield that $ m_{\alpha}\in C^{2} ([0,\ell_{\alpha}]) $ for all $ \alpha\in\mathcal{A} $. Moreover, by Theorem 2.7, there exists a constant $ C $ which depends only on $ \left\Vert b\right\Vert _{\infty},\left\{ \left\Vert \partial b_{\alpha}\right\Vert _{\infty}\right\} _{\alpha\in\mathcal{A}} $ and $ \mu_{\alpha} $ such that $ \left\Vert m_{\alpha}\right\Vert _{C^{2}(0,\ell_{\alpha})}\le C $ for all $ \alpha\in\mathcal{A} $.
Proof of the positivity in Theorem 2.7. From (50), $ \widehat{m} $ is non-negative on $ \Gamma $. Assume by contradiction that there exists $ x_{0}\in\Gamma_{\alpha} $ for some $ \alpha\in\mathcal{A} $ such that $ \widehat{m}|_{\Gamma_{\alpha}}\left(x_{0}\right) = 0 $. Therefore, the minimum of $ \widehat{m} $ over $ \Gamma $ is achieved at $ x_{0} \in \Gamma_{\alpha} $. If $ x_{0}\in \Gamma_{\alpha}\backslash \mathcal{V} $, then $ \partial \widehat{m}(x_{0}) = 0 $. In (56), we thus have $ C_{\alpha} = 0 $, and hence $ \widehat{m}_{\alpha} $ satisfies
with $ \widehat{m}_{\alpha}\left(\pi_{\alpha} ^{-1}(x_{0})\right) = 0 $. It follows that $ \widehat m_{\alpha}\equiv 0 $ and $ \widehat{m}|_{{\Gamma_{\alpha}}}(\nu_{i}) = \widehat{m}|_{{\Gamma_{\alpha}}}(\nu_{j}) = 0 $ if $ \Gamma_\alpha = [\nu_{i},\nu_{j}] $.
Therefore, it is enough to consider $ x_{0} \in \mathcal{V} $.
Now, from Remark 1, we may assume without loss of generality that $ x_{0} = \nu_{i} $ and $ \pi_{\beta}(\nu_{i}) = 0 $ for all $ \beta \in \mathcal{A}_i $. We have the following two cases.
Case 1. If $ x_{0} = \nu_{i} $ is a transition vertex, then, since $ \widehat{m} $ belongs to $ W $, we get
This yields that $ \nu_{i} $ is also a minimum point of $ \widehat{m}|_{\Gamma_{\beta}} $ for all $ \beta\in\mathcal{A}_{i} $. Thus $ \partial_{\beta}\widehat{m}\left(\nu_{i}\right)\le0 $ for all $ \beta\in\mathcal{A}_{i} $. From the transmission condition in (49) which has a classical meaning from the regularity of $ \widehat{m} $, $ \partial_{\beta}\widehat{m}\left(\nu_{i}\right) = 0 $, since all the coefficients $ \mu_{\beta} $ are positive. From (56), for all $ \beta\in\mathcal{A}_{i} $, we have
Therefore, $ \widehat{m}'_{\beta}(y)+b_{\beta}(y)\widehat{m}_{\beta}(y) = 0 $, for all $ y\in [0,\ell_{\beta}] $ with $ \widehat{m}_\beta(0) = 0 $. This implies that $ \widehat{m}_{\beta}\equiv 0 $ for all $ \beta \in \mathcal{A}_i $. We can propagate the arguments from the vertices connected to $ \nu_i $. Since $ \Gamma $ is connected, we obtain that $ \widehat{m}\equiv 0 $ on $ \Gamma $.
Case 2. If $ x_{0} = \nu_{i} $ is a boundary vertex, then the Robin condition in (49) implies that $ \partial_{\alpha}\widehat{m}\left(\nu_{i}\right) = 0 $ since $ \mu_{\alpha} $ is positive. From (56), we have $ C_{\alpha} = 0 $. Therefore, $ \widehat{m}'_{\alpha}(y)+b_{\alpha}(y)\widehat{m}_{\alpha}(y) = 0 $ for all $ y\in [0,\ell_{\alpha}] $, with $ \widehat{m}_\alpha(0) = 0 $. This implies that $ \widehat{m}\left(\nu_{j}\right) = 0 $, where $ \nu_{j} $ is the other endpoint of $ \Gamma_{\alpha} $. We are back to Case 1, so $ \widehat{m}\equiv 0 $ on $ \Gamma $.
Finally, we have found that $ \widehat{m}\equiv 0 $ on $ \Gamma $, in contradiction with $ \int_{\Gamma}\widehat{m}dx = 1 $.
Now we prove uniqueness for (49)-(50).
Proof of uniqueness in Theorem 2.7. The proof of uniqueness is similar to the argument in [7, Proposition 13]. As in the proof of Lemma 2.3, we can prove that for $ \lambda_{0} $ large enough, there exists a constant $ C $ such that for any $ f\in V' $, there exists a unique $ w\in W $ which satisfies
and $ \left\Vert w\right\Vert _{W}\le C\left\Vert f\right\Vert _{V'} $. This allows us to define the continuous linear operator
where $ w $ is the solution of (58). Then we define $ R_{\lambda_{0}} = \mathcal{J}\circ S_{\lambda_{0}} $ where $ \mathcal{J} $ is the injection from $ W $ in $ L^{2}\left(\Gamma\right) $, which is compact. Obviously, $ R_{\lambda_{0}} $ is a compact operator from $ L^{2}\left(\Gamma\right) $ into $ L^{2}\left(\Gamma\right) $. Moreover, $ m\in W $ is a solution of (49) if and only if $ m\in\ker\left(Id-\lambda_{0}R_{\lambda_{0}}\right) $. By the Fredholm alternative, see [18], $ \dim\ker\left(Id-\lambda_{0}R_{\lambda_{0}}\right) = \dim\ker\left(Id-\lambda_{0}R_{\lambda_{0}}^{\star}\right) $.
In order to characterize $ R_{\lambda_{0}}^{\star} $, we now consider the following boundary value problem for $ g\in L^2(\Gamma)\subset W' $:
A weak solution of (59) is a function $ v\in V $ such that
Using similar arguments as in the proof of existence in Theorem 2.7, we see that for $ \lambda_0 $ large enough and all $ g\in L^{2}\left(\Gamma\right) $, there exists a unique solution $ v\in V $ of (59). Moreover, there exists a constant $ C $ such that $ \left\Vert v\right\Vert _{V}\le C\left\Vert g\right\Vert _{L^{2}\left(\Gamma\right)} $ for all $ g\in L^{2}\left(\Gamma\right) $. This allows us to define a continuous operator
Then we define $ \tilde{R}_{\lambda_{0}} = \mathcal{I}\circ T_{\lambda_{0}} $ where $ \mathcal{I} $ is the injection from $ V $ in $ L^{2}\left(\Gamma\right) $. Since $ \mathcal{I} $ is compact, $ \tilde{R}_{\lambda_{0}} $ is a compact operator from $ L^{2}\left(\Gamma\right) $ into $ L^{2}\left(\Gamma\right) $. For any $ g\in L^2(\Gamma) $, set $ v = T_{\lambda_{0}}g $. Noticing that $ \mathscr{T}_{\lambda_{0}}(v,w) = \mathscr{A}_{\lambda_{0}}(w,v) $ for all $ v\in V,w\in W $, we obtain that
Thus $ R_{\lambda_{0}}^{\star} = \tilde{R}_{\lambda_{0}} $. But $ \ker\left(Id-\lambda_{0}\tilde R_{\lambda_{0}}\right) $ is the set of solutions of (48), which, from Lemma 2.5, consists of constant functions on $ \Gamma $.
This implies that $ \dim \ker\left(Id-\lambda_{0}R_{\lambda_{0}}^{\star}\right) = 1 $ and then that $ \dim\ker\left(Id-\lambda_{0}R_{\lambda_{0}}\right) = \dim\ker\left(Id-\lambda_{0}R_{\lambda_{0}}^{\star}\right) = 1 $.
Finally, since the solutions $ m $ of (49) are in $ \ker\left(Id-\lambda_{0}R_{\lambda_{0}}\right) $ and satisfy the normalization condition $ \int_{\Gamma}mdx = 1 $, we obtain the desired uniqueness property in Theorem 2.7.
3.
Hamilton-Jacobi equation and the ergodic problem
3.1. The Hamilton-Jacobi equation
This section is devoted to the following boundary value problem including a Hamilton-Jacobi equation:
where $ \lambda $ is a positive constant and the Hamiltonian $ H:\Gamma\times\mathbb{R}\rightarrow\mathbb{R} $ is defined in Section 1, except that, in (60) and in the whole Section 3.1 below, the Hamiltonian contains the coupling term, i.e., $ H\left(x,\partial v\right) $ in (60) plays the role of $ H\left(x,\partial v\right)-F\left(m\left(x\right)\right) $ in (24).
Definition 3.1. ● A classical solution of (60) is a function $ v\in C^{2}\left(\Gamma\right) $ which satisfies (60) pointwise.
● A weak solution of (60) is a function $ v\in V $ such that
Proposition 1. Assume that
where $ C_{2} $ is a positive constant. There exists a classical solution $ v $ of (60). Moreover, if $ H_{\alpha} $ is locally Lipschitz with respect to both variables for all $ \alpha\in\mathcal{A} $, then the solution $ v $ belongs to $ C^{2,1}\left(\Gamma\right) $.
Remark 18. Assume (61) and that $ v\in H^2(\Gamma)\subset V $ is a weak solution of (60). From the compact embedding of $ H^{2}\left(0,\ell_{\alpha}\right) $ into $ C^{1,\sigma}([0,\ell_{\alpha}]) $ for all $ \sigma\in (0,1/2) $, we get $ v\in C^{1,\sigma}(\Gamma) $. Therefore, from the differential equation in (60), $ \mu_{\alpha}\partial ^2 v_{\alpha}(\cdot) = H_\alpha(\cdot,\partial v_{\alpha}(\cdot))+\lambda v_{\alpha}(\cdot)\in C([0,\ell_{\alpha}]) $. It follows that $ v $ is a classical solution of (60).
Remark 19. Assume now that $ H $ is locally Lipschitz continuous and that $ v\in H^2(\Gamma)\subset V $ is a weak solution of (60). From Remark 18, $ v \in C^{1,\sigma}(\Gamma) $ for $ \sigma\in (0,1/2) $ and the function $ -\lambda v_{\alpha}-H_{\alpha}\left(\cdot,\partial v_{\alpha}\right) $ belongs to $ C^{0,\sigma}([0,\ell_\alpha]) $. Then, from the first line of (60), $ v \in C^{2,\sigma}(\Gamma) $. This implies that $ \partial v_\alpha\in {\rm{Lip}}[0,\ell_\alpha] $ and using the differential equation again, we see that $ v\in C^{2,1}(\Gamma) $.
Let us start with the case when $ H $ is a bounded Hamiltonian.
Lemma 3.2. Assume (61) and for some $ C_{H}>0 $,
There exists a classical solution $ v $ of (60). Moreover, if $ H_{\alpha} $ is locally Lipschitz in $ [0,\ell_\alpha]\times \mathbb R $ for all $ \alpha\in\mathcal{A} $, then the solution $ v $ belongs to $ C^{2,1}\left(\Gamma\right) $.
Proof of Lemma 3.2. For any $ u\in V $, from Lemma 2.3, the following boundary value problem:
has a unique weak solution $ v\in V $. This allows us to define the map $ T: V \longrightarrow V $ by $ T(u): = v $. Moreover, from Lemma 2.3, there exists a constant $ C $ such that
where $ |\Gamma| = \sum_{\alpha\in\mathcal{A}}\ell_{\alpha} $. Therefore, from the differential equation in (64),
where $ \underline{\mu}: = \min_{\alpha\in\mathcal{A}}\mu_{\alpha} $. From (65) and (66), $ T\left(V\right) $ is a bounded subset of $ H^{2}\left(\Gamma\right) $, see Definition 1.1. From the compact embedding of $ H^{2}\left(\Gamma\right) $ into $ V $, we deduce that $ \overline{T\left(V\right)} $ is a compact subset of $ V $.
Next, we claim that $ T $ is continuous from $ V $ to $ V $. Assuming that
we need to prove that $ v_{n}\rightarrow v $ in $ V $. Since $ \left\{ v_{n}\right\} $ is uniformly bounded in $ H^{2}\left(\Gamma\right) $, up to the extraction of a subsequence, $ v_{n}\rightarrow\widehat{v} $ in $ C^{1,\sigma}\left(\Gamma\right) $ for some $ \sigma\in (0,1/2) $. From (67), we have that $ \partial u_{n}\rightarrow\partial u $ in $ L^{2}\left(\Gamma_{\alpha}\right) $ for all $ \alpha\in\mathcal{A} $. This yields that, up to another extraction of a subsequence, $ \partial u_{n}\rightarrow\partial u $ almost everywhere in $ \Gamma_{\alpha} $. Thus $ H\left(x,\partial u_{n}\right)\rightarrow H\left(x,\partial u\right) $ in $ L^{2}\left(\Gamma_{\alpha}\right) $ by the Lebesgue dominated convergence theorem. Hence, $ \widehat{v} $ is a weak solution of (64). Since the latter is unique, $ \widehat{v} = v $ and we can conclude that the whole sequence $ v_{n} $ converges to $ v $. The claim is proved.
From the Schauder fixed point theorem, see [18, Corollary 11.2], $ T $ admits a fixed point $ v $, which is a weak solution of (60). Moreover, recalling that $ v\in H^2(\Gamma) $, we obtain from Remark 18 that $ v $ is a classical solution of (60).
Assume now that $ H $ is locally Lipschitz. Since $ v_{\alpha}\in H^{2}\left(0,\ell_{\alpha}\right) $ for all $ \alpha\in\mathcal{A} $, we may use Remark 19 and obtain that $ v\in C^{2,1}\left(\Gamma\right) $.
Lemma 3.3. Assume (61). If $ v,u\in C^{2}\left(\Gamma\right) $ satisfy
then $ v\ge u $.
Proof of Lemma 3.3. The proof is reminiscent of an argument in [8]. Suppose by contradiction that $ \delta: = \max_\Gamma \left\{ u-v\right\} >0 $. Let $ x_{0}\in\Gamma_{\alpha} $ be a maximum point of $ u-v $. It suffices to consider the case when $ x_0 \in \mathcal{V} $, since if $ x_0 \in \Gamma \backslash \mathcal{V} $, then $ u\left(x_{0}\right)>v\left(x_{0}\right) $, $ \partial u\left(x_{0}\right) = \partial v\left(x_{0}\right) $, $ \partial^{2}u\left(x_{0}\right)\le\partial^{2}v\left(x_{0}\right) $, and we obtain a contradiction with the first line of (68).
Now consider the case when $ x_{0} = \nu_{i}\in \mathcal{V} $; from Remark 1, we can assume without restriction that $ \pi_{\alpha}\left(0\right) = \nu_{i} $. Since $ u-v $ achieves its maximum over $ \Gamma $ at $ \nu_{i} $, we obtain that $ \partial_{\beta}u\left(\nu_{i}\right) \ge\partial_{\beta}v\left(\nu_{i}\right) $, for all $ \beta\in\mathcal{A}_{i} $. From Kirchhoff conditions in (68), this implies that $ \partial_{\beta}u\left(\nu_{i}\right) = \partial_{\beta}v\left(\nu_{i}\right) $, for all $ \beta\in\mathcal{A}_{i} $. It follows that $ \partial v_{\alpha}(0) = \partial u_{\alpha}(0) $. Using the first line of (68), we get that
Therefore, $ u_{\alpha}-v_{\alpha} $ is strictly convex in a neighborhood of $ 0 $ in $ [0,\ell_{\alpha}] $ and its first order derivative vanishes at $ 0 $. This contradicts the fact that $ \nu_{i} $ is a maximum point of $ u-v $.
We now turn to Proposition 1.
Proof of Proposition 1. We adapt the classical proof of Boccardo, Murat and Puel in [6]. First of all, we truncate the Hamiltonian as follows:
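A standard choice of truncation (a sketch, assuming (61) is the quadratic growth bound $ \left|H\left(x,p\right)\right|\le C_{2}\left(1+\left|p\right|^{2}\right) $) is to cut the gradient at level $ n $:

```latex
% Truncation at gradient level n: H_n = H for |p| <= n,
% and H_n(x,p) = H(x, np/|p|) for |p| >= n.
H_{n}\left(x,p\right):=H\left(x,\frac{p}{\max\left(1,\left|p\right|/n\right)}\right),
\qquad\text{so that}\quad
\left|H_{n}\left(x,p\right)\right|\le C_{2}\left(1+n^{2}\right)
\quad\text{for all }\left(x,p\right).
```

With this choice, $ H_{n} $ is continuous and agrees with $ H $ wherever $ \left|p\right|\le n $, which is all that is used in the passage to the limit.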
By Lemma 3.2, for all $ n\in\mathbb{N} $, since $ H_{n}\left(x,p\right) $ is continuous and bounded by $ C_{2}\left(1+n^{2}\right) $, there exists a classical solution $ v_{n}\in C^{2}\left(\Gamma\right) $ for the following boundary value problem
We wish to pass to the limit as $ n $ tends to $ +\infty $; we first need to estimate $ v_{n} $ uniformly in $ n $, successively in $ L^{\infty}\left(\Gamma\right) $, $ H^{1}\left(\Gamma\right) $ and $ H^{2}\left(\Gamma\right) $.
Estimate in $ L^{\infty}\left(\Gamma\right) $. Since $ \left|H_{n}\left(x,p\right)\right|\le c\left(1+\left|p\right|^{2}\right) $ for all $ x,p $, the constants $ \varphi = -c/\lambda $ and $ \overline{\varphi} = c/\lambda $ are respectively a sub- and a super-solution of (69). Therefore, from Lemma 3.3, we obtain $ |\lambda v_n|\le c $.
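For instance, assuming the first line of (69) reads $ -\mu_{\alpha}\partial^{2}v_{\alpha}+H_{n}\left(x,\partial v_{\alpha}\right)+\lambda v_{\alpha} = 0 $ (the sign convention of Remark 18), one checks that $ \overline{\varphi} $ is a super-solution:

```latex
% Plug the constant function phi-bar = c/lambda into the left hand side of (69):
-\mu_{\alpha}\partial^{2}\overline{\varphi}
 + H_{n}\left(x,\partial\overline{\varphi}\right)
 + \lambda\overline{\varphi}
 = H_{n}\left(x,0\right)+c \;\ge\; -c+c = 0.
```

The computation for the sub-solution $ \varphi = -c/\lambda $ is symmetric.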
Estimate in $ V $. For a positive constant $ K $ to be chosen later, we introduce $ w_{n}: = e^{Kv_{n}^{2}}v_{n}\psi \in W $, where $ \psi $ is given in Definition 1.3. Using $ w_{n} $ as a test function in (69) leads to
Since $ \left|H_{n}\left(x,p\right)\right|\le c\left(1+p^{2}\right) $, we have
where we have used Young inequalities. Since $ \lambda>0 $ and $ \psi>0 $, we deduce that
Next, choosing $ K> (1+ c^2/ 4 \underline{\mu}) /\underline{\mu} $ yields that
for a positive constant $ C $ independent of $ n $, because $ v_{n} $ is bounded by $ c/\lambda $. Since $ \psi $ is bounded from below by a positive number and $ \partial \psi $ is piecewise constant on $ \Gamma $, we infer that $ \sum_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}e^{Kv_{n}^{2}}v_{n}^{2}\left(\partial v_{n}\right)^{2} \le \widetilde C $, where $ \widetilde C $ is a positive constant independent of $ n $. Using this information and (70) again, we obtain that $ \int_{\Gamma}\left(\partial v_{n}\right)^{2} $ is bounded uniformly in $ n $. Hence, there exists a constant $ \overline{C} $ such that $ \left\Vert v_{n}\right\Vert _{V}\le\overline{C} $ for all $ n $.
Estimate in $ H^{2}\left(\Gamma\right) $. From the differential equation in (69) and (62), we have
Thus $ \partial^{2}v_{n} $ is uniformly bounded in $ L^{1}\left(\Gamma\right) $. This and the previous estimate on $ \|\partial v_n\|_{L^2 (\Gamma)} $ yield that $ \partial v_{n} $ is uniformly bounded in $ L^{\infty}\left(\Gamma\right) $, from the continuous embedding of $ W^{1,1}\left(0,\ell_{\alpha}\right) $ into $ C\left(\left[0,\ell_{\alpha}\right]\right) $. Therefore, from (69), we get that $ \partial^{2}v_{n} $ is uniformly bounded in $ L^{\infty}\left(\Gamma\right) $. This implies in particular that $ v_{n} $ is uniformly bounded in $ W^{2,\infty}\left(\Gamma\right) $.
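The embedding step can be made explicit edge by edge: for $ g\in W^{1,1}\left(0,\ell_{\alpha}\right) $, choosing $ y $ such that $ \left|g\left(y\right)\right|\le\frac{1}{\ell_{\alpha}}\left\Vert g\right\Vert _{L^{1}\left(0,\ell_{\alpha}\right)} $ (such a point exists by the mean value property of integrals), we get

```latex
% From g(x) = g(y) + \int_y^x g'(s) ds, for every x in [0, l_alpha]:
\left\Vert g\right\Vert _{L^{\infty}\left(0,\ell_{\alpha}\right)}
\le \frac{1}{\ell_{\alpha}}\left\Vert g\right\Vert _{L^{1}\left(0,\ell_{\alpha}\right)}
 + \left\Vert g'\right\Vert _{L^{1}\left(0,\ell_{\alpha}\right)}.
```

Applied to $ g = \partial v_{n} $, whose $ L^{1} $ norm is controlled by $ \left\Vert \partial v_{n}\right\Vert _{L^{2}\left(\Gamma\right)} $ and whose derivative is bounded in $ L^{1} $ by the previous estimate, this gives the uniform $ L^{\infty} $ bound on $ \partial v_{n} $.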
Hence, for any $ \sigma \in (0,1) $, up to the extraction of a subsequence, there exists $ v\in V $ such that $ v_{n}\rightarrow v $ in $ C^{1,\sigma}\left(\Gamma\right) $. This yields that $ H_{n}\left(x,\partial v_{n}\right)\rightarrow H\left(x,\partial v\right) $ for all $ x\in\Gamma $. By Lebesgue's Dominated Convergence Theorem, we obtain that $ v $ is a weak solution of (60), and since $ v\in C^{1,\sigma}(\Gamma) $, by Remark 18, $ v $ is a classical solution of (60).
Assume now that $ H $ is locally Lipschitz. We may use Remark 19 and obtain that $ v\in C^{2,1}\left(\Gamma\right) $.
The proof is complete.
3.2. The ergodic problem
For $ f\in PC\left(\Gamma\right) $, we wish to prove the existence of $ \left(v,\rho\right)\in C^{2}\left(\Gamma\right)\times\mathbb{R} $ such that
with the normalization condition
Theorem 3.4. Assume (25)-(27). There exists a unique couple $ \left(v,\rho\right)\in C^{2}\left(\Gamma\right)\times\mathbb{R} $ satisfying (71)-(72), with $ \left|\rho\right|\le\max_{x\in\Gamma}\left|H\left(x,0\right)-f\left(x\right)\right| $. There exists a constant $ \overline{C} $ which only depends upon $ \left\Vert f\right\Vert _{L^{\infty}\left(\Gamma\right)},\mu_{\alpha} $ and the constants in (27) such that
Moreover, for some $ \sigma\in\left(0,1\right) $, if $ f_{\alpha}\in C^{0,\sigma}([0,\ell_{\alpha}]) $ for all $ \alpha\in\mathcal{A} $, then $ \left(v,\rho\right)\in C^{2,\sigma}\left(\Gamma\right)\times\mathbb{R} $; there exists a constant $ \overline{C} $ which only depends upon $ \left\Vert f_{\alpha}\right\Vert _{C^{0,\sigma}\left([0,\ell_{\alpha}]\right)},\mu_{\alpha} $ and the constants in (27) such that
Proof of existence in Theorem 3.4. By Proposition 1, for any $ \lambda>0 $, the following boundary value problem
has a unique solution $ v_{\lambda}\in C^{2}\left(\Gamma\right) $. Set $ C: = \max_{\Gamma}\left|f\left(\cdot\right)-H\left(\cdot,0\right)\right| $. The constant functions $ \varphi: = - C/\lambda $ and $ \overline{\varphi}: = C/\lambda $ are respectively a sub- and a super-solution of (75). By Lemma 3.3,
Next, set $ u_{\lambda}: = v_{\lambda}-\min_{\Gamma}v_{\lambda} $. We see that $ u_{\lambda} $ is the unique classical solution of
Before passing to the limit as $ \lambda $ tends to $ 0 $, we need to estimate $ u_{\lambda} $ in $ C^{2}\left(\Gamma\right) $ uniformly with respect to $ \lambda $. We do this in two steps:
Step 1. Estimate of $ \|\partial u_{\lambda}\|_{L^{q}\left(\Gamma\right)} $. Using $ \psi $ as a test-function in (77), see Definition 1.3, and recalling that $ \lambda u_{\lambda}+\lambda\min_{\Gamma}v_{\lambda} = \lambda v_{\lambda} $, we see that
From (27) and (76),
On the other hand, since $ q>1 $, $ \psi\ge\underline{\psi}>0 $ and $ \partial \psi $ is bounded, there exists a large enough positive constant $ C' $ such that
Subtracting, we get $ \dfrac{C_0}{2}\underline{\psi}\int_{\Gamma}\left|\partial u_{\lambda}\right|^{q}dx\le\int_{\Gamma}\left(f+C+C_1\right)\psi dx+C' $. Hence, for all $ \lambda>0 $,
where $ \widetilde {C}: = \left[\left(2\int_{\Gamma}\left(|f|+C+C_1\right)\psi dx+2C'\right)/(C_0 \underline{\psi})\right]^{1/q} $.
Step 2. Estimate of $ \|u_{\lambda}\|_{C^{2}\left(\Gamma\right)} $. Since $ u_{\lambda} = v_{\lambda}-\min_{\Gamma}v_{\lambda} $, there exist $ \alpha\in \mathcal A $ and $ x_{\lambda}\in\Gamma_{\alpha} $ such that $ u_{\lambda}\left(x_{\lambda}\right) = 0 $. For all $ \lambda>0 $ and $ x\in\Gamma_{\alpha} $, we have
From (78) and the latter inequality, we deduce that $ \left\Vert u_{\lambda}|_{\Gamma_{\alpha}}\right\Vert _{L^\infty\left(\Gamma_{\alpha}\right)}\le \widetilde{C}\left|\Gamma\right|^{q/\left(q-1\right)} $. Let $ \nu_{i} $ be a transition vertex which belongs to $ \partial\Gamma_{\alpha} $. For all $ \beta\in\mathcal{A}_{i} $, $ y\in\Gamma_{\beta} $,
Since the network is connected and the number of edges is finite, repeating the argument as many times as necessary, we obtain that there exists $ M\in\mathbb{N} $ such that
This bound is uniform with respect to $ \lambda\in (0,1] $. Next, from (77) and (29), we get
Hence, from (78), $ \partial^{2}u_{\lambda} $ is bounded in $ L^{1}\left(\Gamma\right) $ uniformly with respect to $ \lambda\in (0,1] $. From the continuous embedding of $ W^{1,1}\left(0,\ell_{\alpha}\right) $ in $ C([0,\ell_{\alpha}]) $, we infer that $ \partial u_{\lambda}|_{\Gamma_\alpha} $ is bounded in $ C(\Gamma_\alpha) $ uniformly with respect to $ \lambda\in (0,1] $. From the equation (77) and (76), this implies that $ u_{\lambda} $ is bounded in $ C^{2}\left(\Gamma\right) $ uniformly with respect to $ \lambda\in (0,1] $.
After the extraction of a subsequence, we may assume that when $ \lambda\rightarrow0^{+} $, the sequence $ u_{\lambda} $ converges to some function $ v\in C^{1,1}\left(\Gamma\right) $ and that $ \lambda\min v_{\lambda} $ converges to some constant $ \rho $. Notice that $ v $ still satisfies the Kirchhoff conditions since $ \partial u_{\lambda}|_{\Gamma_{\alpha}}\left(\nu_{i}\right)\rightarrow\partial v|_{\Gamma_{\alpha}}\left(\nu_{i}\right) $ as $ \lambda\rightarrow0^{+} $. Passing to the limit in (77), we get that the couple $ \left(v,\rho\right) $ satisfies (71) in the weak sense, then in the classical sense by using an argument similar to Remark 18. Adding a constant to $ v $, we also get (72).
Furthermore, if for some $ \sigma\in (0,1) $, $ f|_{\Gamma_{\alpha}}\in C^{0,\sigma}\left(\Gamma_{\alpha}\right) $ for all $ \alpha\in\mathcal{A} $, a bootstrap argument using the Lipschitz continuity of $ H $ on the bounded subsets of $ \Gamma\times \mathbb R $ shows that $ u_{\lambda} $ is bounded in $ C^{2,\sigma}\left(\Gamma\right) $ uniformly with respect to $ \lambda\in (0,1] $. After a further extraction of a subsequence if necessary, we obtain (74).
Proof of uniqueness in Theorem 3.4. Assume that there exist two solutions $ \left(v,\rho\right) $ and $ \left(\tilde{v},\tilde{\rho}\right) $ of (71)-(72). First of all, we claim that $ \rho = \tilde{\rho} $. By symmetry, it suffices to prove that $ \rho \ge\tilde{\rho} $. Let $ x_{0} $ be a maximum point of $ e: = \tilde{v}-v $. Using similar arguments as in the proof of Lemma 3.3, with $ \lambda v $ and $ \lambda u $ respectively replaced by $ \rho $ and $ \tilde{\rho} $, we get $ \rho\ge\tilde{\rho} $ and the claim is proved.
We now prove the uniqueness of $ v $. Since $ H_{\alpha} $ belongs to $ C^{1}\left(\Gamma_{\alpha}\times\mathbb{R}\right) $ for all $ \alpha\in\mathcal{A} $, then $ e $ is a solution of $ \mu_{\alpha}\partial^{2}e_{\alpha}-\left[\int_{0}^{1} \partial_p H_{\alpha}\left(y,\theta\partial v_{\alpha}+\left(1-\theta\right)\partial \tilde{v}_{\alpha}\right)d\theta\right]\partial e_{\alpha} = 0,\quad\text{in }(0,\ell_{\alpha}) $, with the same transmission and boundary condition as in (71). By Lemma 2.5, $ e $ is a constant function on $ \Gamma $. Moreover, from (72), we know that $ \int_{\Gamma}edx = 0 $. This yields that $ e = 0 $ on $ \Gamma $. Hence, (71)-(72) has a unique solution.
Remark 20. Since there exists a unique solution of (71)-(72), we conclude that the whole sequence $ \left(u_{\lambda},\lambda v_{\lambda}\right) $ in the proof of Theorem 3.4 converges to $ \left(v,\rho\right) $ as $ \lambda\rightarrow0 $.
4.
Proof of the main result
We first prove Theorem 1.6 when $ F $ is bounded.
Theorem 4.1. Assume (25)-(28), (30) and that $ F $ is bounded. There exists a solution $ \left(v,m,\rho\right)\in C^{2}\left(\Gamma\right)\times\mathcal{W}\times\mathbb{R} $ to the mean field games system (24). If $ F $ is locally Lipschitz continuous, then $ v\in C^{2,1}(\Gamma) $. If furthermore $ F $ is strictly increasing, then the solution is unique.
Proof of existence in Theorem 4.1. We adapt the proof of Camilli and Marchi in [7, Theorem 1]. For $ \sigma\in (0,1/2) $, let us introduce the space
which, endowed with the norm
is a Banach space. Now consider the set
and observe that $ \mathcal{K} $ is a closed and convex subset of $ \mathcal{M}_{\sigma} $. We define a map $ T:\mathcal{K}\rightarrow\mathcal{K} $ as follows: given $ m\in\mathcal{K} $, set $ f = F\left(m\right) $. By Theorem 3.4, (71)-(72) has a unique solution $ \left(v,\rho\right)\in C^{2}\left(\Gamma\right)\times\mathbb{R} $. Next, for $ v $ given, we solve (49)-(50) with $ b\left(\cdot\right) = \partial_{p}H\left(\cdot,\partial v\left(\cdot\right)\right)\in PC(\Gamma) $. By Theorem 2.7, there exists a unique solution $ \overline{m}\in\mathcal{K}\cap W $ of (49)-(50). We set $ T\left(m\right) = \overline{m} $; we claim that $ T $ is continuous and has a precompact image. We proceed in several steps:
$ T $ is continuous. Let $ m_{n},m\in\mathcal{K} $ be such that $ \left\Vert m_{n}-m\right\Vert _{\mathcal{M}_{\sigma}}\rightarrow0 $ as $ n\rightarrow+\infty $; set $ \overline{m}_{n} = T\left(m_{n}\right),\overline{m} = T\left(m\right) $. We need to prove that $ \overline{m}_{n}\rightarrow\overline{m} $ in $ \mathcal{M}_{\sigma} $. Let $ \left(v_{n},\rho_{n}\right),\left(v,\rho\right) $ be the solutions of (71)-(72) corresponding respectively to $ f = F\left(m_{n}\right) $ and $ f = F\left(m\right) $. Using estimate (73), we may assume, up to the extraction of a subsequence, that $ \left(v_{n},\rho_{n}\right)\rightarrow\left(\overline{v},\overline{\rho}\right) $ in $ C^{1}\left(\Gamma\right)\times\mathbb{R} $. Since $ F\left(m_{n}\right)|_{\Gamma_\alpha}\rightarrow F\left(m\right)|_{\Gamma_\alpha} $ in $ C\left(\Gamma_\alpha\right) $ and $ H_{\alpha}\left(y,(\partial v_{n})_{\alpha}\right)\rightarrow H_{\alpha}\left(y,\partial\overline{v}_{\alpha}\right) $ in $ C\left([0,\ell_{\alpha}]\right) $, and since the $ C^{1} $-convergence allows us to pass to the limit in the transmission and boundary conditions, $ \left(\overline{v},\overline{\rho}\right) $ is a weak (and, by Remark 18, strong) solution of (71)-(72). By uniqueness, $ (\overline{v},\overline{\rho}) = (v,\rho) $, and the whole sequence $ \left(v_{n},\rho_{n}\right) $ converges.
Next, $ \overline{m}_{n} = T\left(m_{n}\right),\overline{m} = T\left(m\right) $ are respectively the solutions of (49)-(50) corresponding to $ b = \partial_{p}H\left(x,\partial v_{n}\right) $ and $ b = \partial_{p}H\left(x,\partial v\right) $. From the estimate (53), since $ \partial_{p}H\left(x,\partial v_{n}\right) $ is uniformly bounded in $ L^{\infty}\left(\Gamma\right) $, we see that $ \overline{m}_{n} $ is uniformly bounded in $ W $. Therefore, up to the extraction of a subsequence, $ \overline{m}_{n}\rightharpoonup\widehat{m} $ in $ W $ and $ \overline{m}_{n}\rightarrow\widehat{m} $ in $ \mathcal{M}_\sigma $, because $ W $ is compactly embedded in $ \mathcal{M}_\sigma $ for $ \sigma\in (0,1/2) $. Passing to the limit, we find that $ \widehat{m} $ is a solution of (49)-(50) with $ b = \partial_{p}H\left(x,\partial v\right) $. From Theorem 2.7, we obtain that $ \overline{m} = \widehat{m} $, and hence the whole sequence $ \overline{m}_{n} $ converges to $ \overline{m} $.
The image of $ T $ is precompact. Since $ F\in C^{0}\left(\mathbb{R}^{+};\mathbb{R}\right) $ is uniformly bounded, $ F\left(m\right) $ is bounded in $ L^{\infty}\left(\Gamma\right) $ uniformly with respect to $ m\in \mathcal{K} $. From Theorem 3.4, there exists a constant $ \overline{C} $ such that for all $ m\in\mathcal{K} $, the unique solution $ v $ of (71)-(72) with $ f = F(m) $ satisfies $ \left\Vert v\right\Vert _{C^{2}\left(\Gamma\right)}\le\overline{C} $. From Theorem 2.7, we obtain that $ \overline{m} = T\left(m\right) $ is bounded in $ W $ by a constant independent of $ m $. Since $ W $ is compactly embedded in $ \mathcal{M}_\sigma $ for $ \sigma\in (0,1/2) $, we deduce that $ T $ has a precompact image.
End of the proof. We can apply the Schauder fixed point theorem (see [18, Corollary 11.2]) to conclude that the map $ T $ admits a fixed point $ m $. By Theorem 2.7, we get $ m\in \mathcal{W} $. Hence, there exists a solution $ (v,m,\rho)\in C^2(\Gamma)\times \mathcal{W}\times \mathbb{R} $ to the mean field game system (24). If $ F $ is locally Lipschitz continuous, then $ v\in C^{2, 1}(\Gamma) $ by the final part of Theorem 3.4.
Proof of uniqueness in Theorem 4.1. We assume that $ F $ is strictly increasing and that there exist two solutions $ \left(v_{1},m_{1},\rho_{1}\right) $ and $ \left(v_{2},m_{2},\rho_{2}\right) $ of (24). We set $ \overline{v} = v_{1}-v_{2} $, $ \overline{m} = m_{1}-m_{2} $ and $ \overline{\rho} = \rho_{1}-\rho_{2} $, and write the equations satisfied by $ \overline{v} $, $ \overline{m} $ and $ \overline{\rho} $:
Multiplying the equation for $ \overline{v} $ by $ \overline{m} $ and integrating over $ \Gamma_{\alpha} $, we get
Multiplying the equation for $ \overline{m} $ by $ \overline{v} $ and integrating over $ \Gamma_{\alpha} $, we get
Subtracting (80) from (81), summing over $ \alpha\in\mathcal{A} $, assembling the terms corresponding to a same vertex $ \nu_{i} $ and taking into account the transmission and the normalization conditions for $ \overline{v} $ and $ \overline{m} $, we obtain
Since $ F $ is strictly increasing, the first sum is non-negative, and it vanishes only if $ m_{1} = m_{2} $. Moreover, by the convexity of $ H $ and the positivity of $ m_{1},m_{2} $, the last two sums are non-negative. Since the total is zero, we deduce that $ m_{1} = m_{2} $. From Theorem 3.4, we finally obtain $ v_{1} = v_{2} $ and $ \rho_{1} = \rho_{2} $.
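For the reader's convenience, the identity just used can be sketched as follows; this is a hedged reconstruction of the standard Lasry-Lions monotonicity relation, in which the vertex contributions, which cancel thanks to the transmission and Kirchhoff conditions, are omitted:
\[
\int_{\Gamma}\left(F(m_{1})-F(m_{2})\right)\left(m_{1}-m_{2}\right)dx
+\int_{\Gamma}m_{1}\left[H\left(x,\partial v_{2}\right)-H\left(x,\partial v_{1}\right)-\partial_{p}H\left(x,\partial v_{1}\right)\left(\partial v_{2}-\partial v_{1}\right)\right]dx
+\int_{\Gamma}m_{2}\left[H\left(x,\partial v_{1}\right)-H\left(x,\partial v_{2}\right)-\partial_{p}H\left(x,\partial v_{2}\right)\left(\partial v_{1}-\partial v_{2}\right)\right]dx = 0.
\]
The three integrals correspond to the three sums discussed in the text.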
Proof of Theorem 1.6 for a general coupling $ F $. We only need to modify the proof of existence.
We now truncate the coupling function as follows:
Then $ F_{n} $ is continuous, bounded below by $ -M $ as in (31) and bounded above by some constant $ C_n $. By Theorem 4.1, for all $ n\in\mathbb{N} $, there exists a unique solution $ \left(v_{n},m_{n},\rho_{n}\right)\in C^{2}\left(\Gamma\right)\times\mathcal{W}\times\mathbb{R} $ of the mean field game system (24) where $ F $ is replaced by $ F_{n} $. We wish to pass to the limit as $ n\to +\infty $. We proceed in several steps:
Step 1. $ \rho_{n} $ is bounded from below. Multiplying the HJB equation in (24) by $ m_{n} $ and the Fokker-Planck equation in (24) by $ v_{n} $, using integration by parts and the transmission conditions, we obtain that
and
Subtracting the two equations, we obtain
In what follows, the constant $ C $ may vary from line to line but remains independent of $ n $. From (26), we see that $ \partial_{p}H\left(x,\partial v_{n}\right)\partial v_{n}-H\left(x,\partial v_{n}\right)\ge -H(x,0)\ge -C $. Therefore
Hence, since $ F_n+M\ge0 $ and $ \int_{\Gamma}m_n dx = 1 $, we get that $ \rho_{n} $ is bounded from below by $ -M-C $ independently of $ n $.
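In summary, the computation in Step 1 can be sketched as follows (a hedged reconstruction; the vertex terms, which cancel by the transmission and Kirchhoff conditions, are omitted): since $ \int_{\Gamma}m_{n}\,dx = 1 $,
\[
\rho_{n} = \int_{\Gamma}m_{n}\left[\partial_{p}H\left(x,\partial v_{n}\right)\partial v_{n}-H\left(x,\partial v_{n}\right)\right]dx+\int_{\Gamma}F_{n}\left(m_{n}\right)m_{n}\,dx
\ \ge\ -C+\int_{\Gamma}\left(F_{n}\left(m_{n}\right)+M\right)m_{n}\,dx-M\ \ge\ -C-M.
\]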
Step 2. $ \rho_{n} $ and $ \int_{\Gamma}F_{n}\left(m_{n}\right)dx $ are uniformly bounded. By Theorem 2.7, there exists a positive solution $ w\in W $ of (49)-(50) with $ b = 0 $. It yields
Multiplying the HJB equation of (24) by $ w $, using integration by parts and the Kirchhoff condition, we get
This implies, using (27), (53) and $ F_n+M\ge0 $,
Thus, by (85), we have
Let $ K>0 $ be a constant to be chosen later. We have
where $ C_K $ is independent of $ n $. Choosing $ K = 2C $, where $ C $ is the constant in (87), and combining (88) with (87), we get $ \int_{\Gamma}F_n(m_n)m_n\,dx\le C $. Using (88) again, we obtain $ \int_{\Gamma}F_n(m_n)\,dx\le C $. Hence, from (87), we conclude that $ |\rho_n|+ \left|\int_{\Gamma}F_n(m_n)dx\right|\le C $.
Step 3. $ F_{n}\left(m_{n}\right) $ is uniformly integrable, and $ v_{n} $ and $ m_{n} $ are uniformly bounded in $ C^{1}\left(\Gamma\right) $ and $ W $, respectively. Let $ E\subset\Gamma $ be measurable with $ |E| = \eta $. By (88) with $ \Gamma $ replaced by $ E $, we have
since $ \int_{E} F_n(m_n) m_n dx\le C $ and $ \sup_{0\le r \le K}F_{n}(r)\le \sup_{0\le r \le K}F(r) =: C_K $. Therefore, for all $ \varepsilon>0 $, we may first choose $ K $ such that $ (C+M)/K\le\varepsilon/2 $ and then $ \eta $ such that $ C_K \eta\le\varepsilon/2 $, and get
which proves the uniform integrability of $ \left\{F_n (m_n) \right\}_n $.
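The mechanism behind this estimate can be sketched as follows (a hedged reconstruction of (88), up to renaming the constants): splitting $ E $ according to whether $ m_{n}\le K $, and using that $ F_{n}(m_{n})+M\ge0 $, so that $ F_{n}(m_{n})+M\le\left(F_{n}(m_{n})+M\right)m_{n}/K $ on $ \left\{ m_{n}>K\right\} $,
\[
\int_{E}\left|F_{n}\left(m_{n}\right)\right|dx\le\max\left(C_{K},M\right)|E|+\frac{1}{K}\int_{E}\left(F_{n}\left(m_{n}\right)+M\right)m_{n}\,dx+M|E|
\le\left(\max\left(C_{K},M\right)+M\right)\eta+\frac{C+M}{K},
\]
where the last inequality uses $ \int_{\Gamma}F_{n}\left(m_{n}\right)m_{n}\,dx\le C $ and $ \int_{\Gamma}m_{n}\,dx = 1 $.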
Next, since $ \rho_{n} $ and $ \int_{\Gamma}F_{n}\left(m_{n}\right)dx $ are uniformly bounded, we infer from (86) that $ \partial v_{n} $ is uniformly bounded in $ L^{q}\left(\Gamma\right) $. Moreover, since $ \int_{\Gamma}v_{n}dx = 0 $, there exists $ \overline{x}_{n}\in\Gamma $ such that $ v_{n}(\overline{x}_{n}) = 0 $; combined with the $ L^{q} $ bound on $ \partial v_{n} $, this implies that $ v_{n} $ is uniformly bounded in $ L^{\infty}(\Gamma) $.
Using the HJB equation in (24) and Remark 7, we get, on each $ \Gamma_{\alpha} $, $ \mu_{\alpha}\left|\partial^{2}(v_{n})_{\alpha}\right|\le C_{q}\left(\left|\partial (v_{n})_{\alpha}\right|^{q}+1\right)+\left|F_{n}\left(m_{n}\right)\right|+\left|\rho_{n}\right| $.
We obtain that $ \partial^{2}v_{n} $ is uniformly bounded in $ L^{1}\left(\Gamma\right) $, which implies that $ v_{n} $ is uniformly bounded in $ C^{1}(\Gamma) $. Therefore, the sequence of functions $ C_q (|\partial v_n|^q+1)+|F_n(m_n)|+|\rho_n| $ is uniformly integrable, and so is $ \partial^{2}v_{n} $. This implies that $ \partial v_{n} $ is equicontinuous. Hence, $ \left\{ v_{n}\right\} $ is relatively compact in $ C^{1}\left(\Gamma\right) $ by the Arzelà-Ascoli theorem. Finally, from the Fokker-Planck equation and Theorem 2.7, since $ \partial_{p}H\left(x,\partial v_{n}\right) $ is uniformly bounded in $ L^{\infty}\left(\Gamma\right) $, we obtain that $ m_{n} $ is uniformly bounded in $ W $.
Step 4. Passage to the limit.
From Steps 1 and 2, since $ \left\{ \rho_{n}\right\} $ is uniformly bounded, there exists $ \rho\in\mathbb{R} $ such that, up to the extraction of a subsequence, $ \rho_{n}\rightarrow\rho $. From Step 3, there exists $ m\in W $ such that, up to the extraction of a subsequence, $ m_{n}\rightharpoonup m $ in $ W $ and $ m_{n}\to m $ almost everywhere. Also from Step 3, since $ F_{n}\left(m_{n}\right) $ is uniformly integrable, Vitali's convergence theorem yields $ \lim_{n\rightarrow\infty}\int_{\Gamma}F_{n}\left(m_{n}\right)\tilde{w}dx = \int_{\Gamma}F\left(m\right)\tilde{w}dx $ for all $ \tilde{w}\in W $. From Step 3 again, up to the extraction of a subsequence, there exists $ v\in C^{1}\left(\Gamma\right) $ such that $ v_{n}\rightarrow v $ in $ C^{1}\left(\Gamma\right) $. Hence, $ \left(v,\rho,m\right) $ satisfies the weak form of the MFG system:
and
Finally, we prove the regularity of the solution of (24). Since $ m\in W $, $ m|_{\Gamma_\alpha}\in C^{0,\sigma} $ for some $ \sigma\in (0,1/2) $ and all $ \alpha\in \mathcal A $. By Theorem 3.4, $ v\in C^2(\Gamma) $ (and $ v\in C^{2,\sigma}(\Gamma) $ if $ F $ is locally Lipschitz continuous). Then, by Theorem 2.7, we see that $ m\in \mathcal{W} $. If $ F $ is locally Lipschitz continuous, this implies that $ v\in C^{2,1}(\Gamma) $. We also obtain that $ v $ and $ m $ satisfy the Kirchhoff and transmission conditions in (24). This completes the proof.