A class of infinite horizon mean field games on networks

  • Received: 01 July 2018 Revised: 01 January 2019
  • Primary: 35R02, 49N70; Secondary: 91A13

  • We consider stochastic mean field games for which the state space is a network. In the ergodic case, they are described by a system coupling a Hamilton-Jacobi-Bellman equation and a Fokker-Planck equation, whose unknowns are the invariant measure $ m $, a value function $ u $, and the ergodic constant $ \rho $. The function $ u $ is continuous and satisfies general Kirchhoff conditions at the vertices. The invariant measure $ m $ satisfies dual transmission conditions: in particular, $ m $ is discontinuous across the vertices in general, and the values of $ m $ on each side of the vertices satisfy special compatibility conditions. Existence and uniqueness are proven under suitable assumptions.

    Citation: Yves Achdou, Manh-Khang Dao, Olivier Ley, Nicoletta Tchou. A class of infinite horizon mean field games on networks[J]. Networks and Heterogeneous Media, 2019, 14(3): 537-566. doi: 10.3934/nhm.2019021




An important research activity on mean field games (MFGs for short) has developed since the pioneering works [28,29,30] of Lasry and Lions (related ideas have been developed independently in the engineering literature by Huang-Caines-Malhamé, see for example [23,24,25]): it aims at studying the asymptotic behavior of stochastic differential games (Nash equilibria) as the number $ N $ of agents tends to infinity. In these models, it is assumed that the agents are all identical and that an individual agent can hardly influence the outcome of the game. Moreover, each individual strategy is influenced by some averages of functions of the states of the other agents. In the limit when $ N\to +\infty $, a given agent feels the presence of the others through the statistical distribution of the states. Since perturbations of the strategy of a single agent do not influence the statistical distribution of the states, the latter acts as a parameter in the control problem to be solved by each agent. The delicate question of the passage to the limit is one of the main topics of the book of Carmona and Delarue, [10]. When the dynamics of the agents are independent stochastic processes, MFGs naturally lead to a coupled system of two partial differential equations (PDEs for short), a forward in time Kolmogorov or Fokker-Planck (FP) equation and a backward Hamilton-Jacobi-Bellman (HJB) equation. The unknown of this system is a pair of functions: the value function of the stochastic optimal control problem solved by a representative agent and the density of the distribution of states. In the infinite horizon limit, one obtains a system of two stationary PDEs.

    A very nice introduction to the theory of MFGs is supplied in the notes of Cardaliaguet [9]. Theoretical results on the existence of classical solutions to the previously mentioned system of PDEs can be found in [19,20,21,28,29,30]. Weak solutions have been studied in [5,30,33,34]. The numerical approximation of these systems of PDEs has been discussed in [1,3,5].

A network (or a graph) is a set of items, referred to as vertices (or nodes or crosspoints), with connections between them referred to as edges. In recent years, there has been an increasing interest in the investigation of dynamical systems and differential equations on networks, in particular in connection with problems of data transmission and traffic management (see for example [12,14,17]). The literature on optimal control in which the state variable takes its values on a network is recent: deterministic control problems and related Hamilton-Jacobi equations were studied in [2,4,26,27,31,32]. Stochastic processes on networks and related Kirchhoff conditions at the vertices were studied in [15,16].

The present work is devoted to infinite horizon stochastic mean field games taking place on networks. The most important difficulty will be to deal with the transition conditions at the vertices. The latter are obtained from the theory of stochastic control in [15,16], see Section 1.3 below. In [7], the first article on MFGs on networks, Camilli and Marchi consider a particular type of Kirchhoff condition at the vertices for the value function: this condition comes from an assumption which can be informally stated as follows: consider a vertex $ \nu $ of the network and assume that it is the intersection of $ p $ edges $ \Gamma_{1},\dots, \Gamma_{p} $; if, at time $ \tau $, the controlled stochastic process $ X_t $ associated to a given agent hits $ \nu $, then the probability that $ X_{\tau^+} $ belongs to $ \Gamma_i $ is proportional to the diffusion coefficient in $ \Gamma_i $. Under this assumption, it can be seen that the density of the distribution of states is continuous at the vertices of the network. In the present work, the above-mentioned assumption is no longer made. Therefore, it will be seen below that the value function satisfies more general Kirchhoff conditions, and accordingly, that the density of the distribution of states is no longer continuous at the vertices; the continuity condition is then replaced by suitable compatibility conditions on the jumps across the vertex. Moreover, as it will be explained in Remark 11 below, more general assumptions on the coupling costs will be made. Mean field games on networks with finite horizon will be considered in a forthcoming paper.

    After obtaining the transmission conditions at the vertices for both the value function and the density, we shall prove existence and uniqueness of weak solutions of the uncoupled HJB and FP equations (in suitable Sobolev spaces). We have chosen to work with weak solutions because it is a convenient way to deal with existence and uniqueness in the stationary regime, but also because it is difficult to avoid it in the nonstationary case, see the forthcoming work on finite horizon MFGs. Classical arguments will then lead to the regularity of the solutions. Next, we shall establish the existence result for the MFG system by a fixed point argument and a truncation technique. Uniqueness will also be proved under suitable assumptions.

The present work is organized as follows: the remainder of Section 1 is devoted to setting the problem and obtaining the system of differential equations and the transmission conditions at the vertices. Section 2 contains useful results, first about some linear boundary value problems with elliptic equations, then on a pair of linear Kolmogorov and Fokker-Planck equations in duality. By and large, the existence of weak solutions is obtained by applying the Banach-Necas-Babuška theorem to a special pair of Sobolev spaces referred to as $ V $ and $ W $ below and Fredholm's alternative, and uniqueness comes from a maximum principle. Section 3 is devoted to the HJB equation associated with an ergodic problem. Finally, the proofs of the main results of existence and uniqueness for the MFG system of differential equations are completed in Section 4.

A bounded network $ \Gamma $ (or a bounded connected graph) is a connected subset of $ \mathbb R ^n $ made of a finite number of bounded non-intersecting straight segments, referred to as edges, which connect nodes referred to as vertices. The finite collection of vertices and the finite set of closed edges are respectively denoted by $ \mathcal{V}: = \left\{ \nu_{i}, i\in I\right\} $ and $ \mathcal{E}: = \left\{ \Gamma_{\alpha}, \alpha\in\mathcal{A}\right\} $, where $ I $ and $ \mathcal{A} $ are finite sets of indices contained in $ \mathbb N $. We assume that for $ \alpha,\beta \in \mathcal A $, if $ \alpha\not = \beta $, then $ \Gamma_\alpha\cap \Gamma_\beta $ is either empty or made of a single vertex. The length of $ \Gamma_{\alpha} $ is denoted by $ \ell_{\alpha} $. Given $ \nu_{i}\in\mathcal{V} $, the set of indices of edges that are adjacent to the vertex $ \nu_{i} $ is denoted by $ \mathcal{A}_{i} = \left\{ \alpha\in\mathcal{A}:\nu_{i}\in\Gamma_{\alpha}\right\} $. A vertex $ \nu_{i} $ is named a boundary vertex if $ \sharp\left(\mathcal{A}_{i}\right) = 1 $, otherwise it is named a transition vertex. The set containing all the boundary vertices is named the boundary of the network and is denoted by $ \partial\Gamma $ hereafter.
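The combinatorial data above — the edge list, the adjacency sets $ \mathcal{A}_i $, and the boundary $ \partial\Gamma $ — can be encoded in a few lines. The following sketch (with illustrative names, not taken from the paper) computes $ \mathcal{A}_i $ and the boundary vertices for a small star-shaped network.

```python
# Sketch of the combinatorial data of a network: edges are stored as pairs
# of vertex indices; A_i collects the indices of the edges adjacent to
# vertex i, and the boundary consists of the vertices with exactly one
# adjacent edge. Function names are illustrative, not from the paper.
from collections import defaultdict

def adjacency(edges):
    """A_i: indices of the edges adjacent to each vertex i."""
    A = defaultdict(list)
    for alpha, (i, j) in enumerate(edges):
        A[i].append(alpha)
        A[j].append(alpha)
    return dict(A)

def boundary(edges):
    """Boundary vertices: those with a single adjacent edge."""
    return {i for i, Ai in adjacency(edges).items() if len(Ai) == 1}

# A star: vertex 0 is the transition vertex, 1..3 are boundary vertices.
edges = [(0, 1), (0, 2), (0, 3)]
print(adjacency(edges))   # vertex 0 touches every edge
print(boundary(edges))    # {1, 2, 3}
```

On this star, $ \sharp(\mathcal A_0) = 3 $, so $ \nu_0 $ is a transition vertex, while the three tips form $ \partial\Gamma $.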

The edges $ \Gamma_{\alpha}\in\mathcal{E} $ are oriented in an arbitrary manner. In most of what follows, we make the arbitrary choice that an edge $ \Gamma_{\alpha}\in\mathcal{E} $ connecting two vertices $ \nu_i $ and $ \nu_j $, with $ i<j $, is oriented from $ \nu_i $ toward $ \nu_j $: this induces a natural parametrization $ \pi_\alpha: [0,\ell_\alpha]\to \Gamma_\alpha = [\nu_i,\nu_j] $:

    $ \pi_{\alpha}\left(y\right) = \frac{\left(\ell_{\alpha}-y\right)\nu_{i}+y\,\nu_{j}}{\ell_{\alpha}}, \quad\text{for } y\in\left[0,\ell_{\alpha}\right]. $
    (1)

    For a function $ v:\Gamma\rightarrow\mathbb{R} $ and $ \alpha\in\mathcal{A} $, we define $ v_\alpha: (0,\ell_\alpha)\rightarrow\mathbb{R} $ by

    $ v_{\alpha} (y): = v\circ\pi_{\alpha} (y),\quad \hbox{ for all }y\in (0,\ell_\alpha). $

    The function $ v_\alpha $ is a priori defined only in $ (0,\ell_\alpha) $. When it is possible, we extend it by continuity at the boundary by setting

    $ v_{\alpha}\left(0\right) := \lim\limits_{y\to 0^{+}} v_{\alpha}\left(y\right) \quad\text{ and }\quad v_{\alpha}\left(\ell_{\alpha}\right) := \lim\limits_{y\to \ell_{\alpha}^{-}} v_{\alpha}\left(y\right). $

    In the latter case, we can define

    $ v|_{\Gamma_{\alpha}}\left(x\right) = \begin{cases} v_{\alpha}\left(\pi_{\alpha}^{-1}\left(x\right)\right), & \text{if } x\in\Gamma_{\alpha}\setminus\mathcal{V},\\ v_{\alpha}\left(0\right) = \lim\limits_{y\to 0^{+}}v_{\alpha}\left(y\right), & \text{if } x = \nu_{i},\\ v_{\alpha}\left(\ell_{\alpha}\right) = \lim\limits_{y\to \ell_{\alpha}^{-}}v_{\alpha}\left(y\right), & \text{if } x = \nu_{j}. \end{cases} $
    (2)

    Notice that $ v|_{\Gamma_{\alpha}} $ does not coincide with the original function $ v $ at the vertices in general when $ v $ is not continuous.

    Remark 1. In what precedes, the edges have been arbitrarily oriented from the vertex with the smaller index toward the vertex with the larger one. Other choices are of course possible. In particular, by possibly dividing a single edge into two, adding thereby new artificial vertices, it is always possible to assume that for all vertices $ \nu_i\in\mathcal{V} $,

    $ \text{either}\quad \pi_{\alpha}\left(0\right) = \nu_{i} \;\text{ for all } \alpha\in\mathcal{A}_{i}, \quad\text{or}\quad \pi_{\alpha}\left(\ell_{\alpha}\right) = \nu_{i} \;\text{ for all } \alpha\in\mathcal{A}_{i}. $
    (3)

    This idea was used by von Below in [35]: some edges of $ \Gamma $ are cut into two by adding artificial vertices so that the new oriented network $ \tilde{\Gamma} $ has the property (3), see Figure 1 for an example.

    Figure 1. Left: the network $ \Gamma $, in which the edges are oriented toward the vertex with the larger index ($ 4 $ vertices and $ 4 $ edges). Right: a new network $ \tilde \Gamma $ obtained by adding an artificial vertex ($ 5 $ vertices and $ 5 $ edges): the oriented edges sharing a given vertex $ \nu $ either all have their starting point equal to $ \nu $, or all have their terminal point equal to $ \nu $.

    In Sections 1.2 and 1.3 below, especially when dealing with stochastic calculus, it will be convenient to assume that property (3) holds. In the remaining part of the paper, it will be convenient to work with the original network, i.e., without the additional artificial vertices and with the orientation of the edges that has been chosen initially.
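A crude way to realize von Below's construction in code is to split every edge at an artificial midpoint and orient both halves toward the midpoint; this over-splits (the paper only cuts the edges that need it) but always produces a network with property (3). A minimal sketch, with hypothetical names:

```python
# Over-splitting version of von Below's device: cut EVERY edge at an
# artificial midpoint vertex and orient both halves toward the midpoint.
# Then every original vertex is only a starting point and every midpoint
# is only a terminal point, so property (3) holds at all vertices.

def split_all_edges(num_vertices, edges):
    """Return the edge list of the split network; edge (i, j) becomes
    (i, mid) and (j, mid) for a fresh midpoint vertex mid."""
    new_edges = []
    for (i, j) in edges:
        mid = num_vertices + len(new_edges) // 2  # index of the new vertex
        new_edges += [(i, mid), (j, mid)]         # both halves point to mid
    return new_edges

def satisfies_property_3(edges, num_vertices_total):
    """Property (3): no vertex is both a starting and a terminal point."""
    for v in range(num_vertices_total):
        starts = any(e[0] == v for e in edges)
        ends = any(e[1] == v for e in edges)
        if starts and ends:
            return False
    return True

edges = [(0, 1), (0, 2), (1, 2)]   # a triangle: (3) fails as oriented
new = split_all_edges(3, edges)
print(satisfies_property_3(edges, 3))           # False
print(satisfies_property_3(new, 3 + len(edges)))  # True
```

The triangle oriented "small index to large index" violates (3) at $ \nu_1 $ and $ \nu_2 $; after splitting, all six half-edges point to midpoints.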

    The set of continuous functions on $ \Gamma $ is denoted by $ C(\Gamma) $ and we set

    $ PC\left(\Gamma\right) := \left\{ v:\Gamma\to\mathbb{R} \;:\; \text{for all } \alpha\in\mathcal{A},\; v_{\alpha}\in C\left(0,\ell_{\alpha}\right) \text{ and } v_{\alpha} \text{ can be extended by continuity to } \left[0,\ell_{\alpha}\right] \right\}. $

    By the definition of piecewise continuous functions $ v\in PC(\Gamma) $, for all $ \alpha\in \mathcal{A} $, it is possible to define $ v|_{\Gamma_\alpha} $ by (2) and we have $ v|_{\Gamma_\alpha}\in C(\Gamma_\alpha) $, $ v_\alpha\in C([0,\ell_{\alpha}]) $.

    For $ m\in\mathbb{N} $, the space of $ m $-times continuously differentiable functions on $ \Gamma $ is defined by

    $ C^{m}\left(\Gamma\right): = \left\{ v\in C\left(\Gamma\right):v_{\alpha}\in C^{m}\left(\left[0,\ell_{\alpha}\right]\right)\text{ for all }\alpha\in\mathcal{A}\right\}. $

    Notice that $ v\in C^{m}\left(\Gamma\right) $ is assumed to be continuous on $ \Gamma $, and that its restriction $ v_{|\Gamma_\alpha} $ to each edge $ \Gamma_\alpha $ belongs to $ C^m(\Gamma_\alpha) $. The space $ C^{m}\left(\Gamma\right) $ is endowed with the norm $ \left\Vert v\right\Vert _{C^{m}\left(\Gamma\right)}: = {\sum}_{\alpha\in\mathcal{A}}{\sum}_{k\le m}\left\Vert \partial^{k}v_{\alpha}\right\Vert _{L^{\infty}\left(0,\ell_{\alpha}\right)} $. For $ \sigma\in\left(0,1\right) $, the space $ C^{m,\sigma}\left(\Gamma\right) $ contains the functions $ v\in C^{m}\left(\Gamma\right) $ such that $ \partial^{m}v_{\alpha}\in C^{0,\sigma}\left(\left[0,\ell_{\alpha}\right]\right) $ for all $ \alpha\in \mathcal{A} $; it is endowed with the norm $ { \left\Vert v\right\Vert _{C^{m,\sigma}\left(\Gamma\right)}: = \left\Vert v\right\Vert _{C^{m}\left(\Gamma\right)}+\sup\limits_{\alpha\in\mathcal{A}}\sup\limits_{y\ne z \atop y,z\in\left[0,\ell_{\alpha}\right]}\dfrac{\left|\partial^{m}v_{\alpha}\left(y\right)-\partial^{m}v_{\alpha}\left(z\right)\right|}{\left|y-z\right|^{\sigma}}} $.

    For a positive integer $ m $ and a function $ v\in C^{m}\left(\Gamma\right) $, we set for $ k\le m $,

    $ \partial^{k}v\left(x\right) = \partial^{k}v_{\alpha}\left(\pi_{\alpha}^{-1}\left(x\right)\right) \quad\text{ if } x\in\Gamma_{\alpha}\setminus\mathcal{V}. $
    (4)

    For a vertex $ \nu $, we define $ \partial_{\alpha}v\left(\nu\right) $ as the outward directional derivative of $ v|_{\Gamma_{\alpha}} $ at $ \nu $ as follows:

    $ \partial_{\alpha}v\left(\nu\right) := \begin{cases} \lim\limits_{h\to 0^{+}}\dfrac{v_{\alpha}\left(0\right)-v_{\alpha}\left(h\right)}{h}, & \text{if } \nu = \pi_{\alpha}\left(0\right),\\[1.2em] \lim\limits_{h\to 0^{+}}\dfrac{v_{\alpha}\left(\ell_{\alpha}\right)-v_{\alpha}\left(\ell_{\alpha}-h\right)}{h}, & \text{if } \nu = \pi_{\alpha}\left(\ell_{\alpha}\right). \end{cases} $
    (5)

    For all $ i\in I $ and $ \alpha\in \mathcal A_i $, setting

    $ n_{i\alpha} = \begin{cases} 1 & \text{if } \nu_{i} = \pi_{\alpha}\left(\ell_{\alpha}\right),\\ -1 & \text{if } \nu_{i} = \pi_{\alpha}\left(0\right), \end{cases} $
    (6)

    we have

    $ \partial_{\alpha}v\left(\nu_{i}\right) = n_{i\alpha}\,\partial v|_{\Gamma_{\alpha}}\left(\nu_{i}\right) = n_{i\alpha}\,\partial v_{\alpha}\left(\pi_{\alpha}^{-1}\left(\nu_{i}\right)\right). $
    (7)

    Remark 2. Changing the orientation of the edge does not change the value of $ \partial_\alpha v(\nu) $ in (5).
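The one-sided limits in (5) are easy to check numerically. In the sketch below (an illustration, with $ v_\alpha = \sin $ on an edge of unit length), the formula returns $ -v_\alpha'(0) $ at $ \nu = \pi_\alpha(0) $ and $ +v_\alpha'(\ell_\alpha) $ at $ \nu = \pi_\alpha(\ell_\alpha) $, i.e., the outward directional derivatives.

```python
# Numerical check of the outward directional derivative (5): with
# v_alpha(y) = sin(y) on [0, 1], formula (5) gives -v'(0) = -1 at the
# vertex pi_alpha(0), and +v'(1) = cos(1) at the vertex pi_alpha(1).
import math

def d_alpha_at_0(v, h=1e-6):
    """Formula (5) at nu = pi_alpha(0)."""
    return (v(0.0) - v(h)) / h

def d_alpha_at_ell(v, ell=1.0, h=1e-6):
    """Formula (5) at nu = pi_alpha(ell_alpha)."""
    return (v(ell) - v(ell - h)) / h

print(d_alpha_at_0(math.sin))    # close to -1.0
print(d_alpha_at_ell(math.sin))  # close to cos(1) ~ 0.5403
```

Reversing the orientation replaces $ v_\alpha(y) $ by $ v_\alpha(\ell_\alpha-y) $ and swaps the two cases of (5), which leaves both values unchanged, in accordance with Remark 2.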

    If for all $ \alpha\in\mathcal{A} $, $ v_{\alpha} $ is Lebesgue-integrable on $ (0,\ell_{\alpha}) $, then the integral of $ v $ on $ \Gamma $ is defined by $ \int_{\Gamma}v\left(x\right)dx = \sum_{\alpha\in\mathcal{A}}\int_{0}^{\ell_{\alpha}}v_{\alpha}\left(y\right)dy $. The space

    $ L^{p}\left(\Gamma\right) = \left\{ v:v|_{\Gamma_{\alpha}}\in L^{p}\left(\Gamma_{\alpha}\right)\text{ for all }{\alpha\in\mathcal{A}}\right\}, $

    $ p\in [1,\infty] $, is endowed with the norm $ \left\Vert v\right\Vert _{L^{p}\left(\Gamma\right)}: = \left( \sum_{\alpha\in\mathcal{A}}\left\Vert v_{\alpha}\right\Vert _{L^{p} \left(0,\ell_{\alpha}\right)}^p \right)^{\frac 1 p} $ if $ 1\le p<\infty $, and $ \max_{\alpha\in \mathcal A} \|v_\alpha\|_{L^\infty\left(0,\ell_{\alpha}\right)} $ if $ p = +\infty $. We shall also need to deal with functions on $ \Gamma $ whose restrictions to the edges are weakly-differentiable: we shall use the same notations for the weak derivatives. Let us introduce Sobolev spaces on $ \Gamma $:

    Definition 1.1. For any integer $ s\ge 1 $ and any real number $ p\ge 1 $, the Sobolev space $ W^{s,p}(\Gamma) $ is defined as follows: $ W^{s,p}(\Gamma): = \left\{ v\in C\left(\Gamma\right):v_{\alpha}\in W^{s,p}\left(0,\ell_{\alpha}\right) \;\forall \alpha\in\mathcal{A}\right\} $, and endowed with the norm $ \left\Vert v\right\Vert _{W^{s,p}\left(\Gamma\right)} = \left(\sum\limits^{s}_{k = 1}\sum\limits_{\alpha\in\mathcal{A}} \left\Vert \partial^{k}v_{\alpha}\right\Vert _{L^{p}\left(0,\ell_{\alpha}\right)}^{p}+ \left\Vert v\right\Vert _{L^p(\Gamma)}^{p}\right)^{\frac 1 p} $. We also set $ H^s(\Gamma) = W^{s,2}(\Gamma) $.

    After rescaling the edges, it may be assumed that $ \ell_{\alpha} = 1 $ for all $ \alpha\in\mathcal{A} $. Let $ \mu_{\alpha} , \alpha\in\mathcal{A} $ and $ p_{i\alpha}, i\in I,\alpha\in\mathcal{A}_{i} $ be positive constants such that $ \sum_{\alpha\in\mathcal{A}_{i}}p_{i\alpha} = 1 $. Consider also a real valued function $ a\in PC(\Gamma) $ such that for all $ \alpha \in \mathcal A $, $ a|_{\Gamma_\alpha} $ is Lipschitz continuous.

    As in Remark 1, we make the assumption (3) by possibly adding artificial nodes: if $ \nu_i $ is such an artificial node, then $ \sharp( \mathcal A_i) = 2 $, and we assume that $ p_{i\alpha} = 1/2 $ for $ \alpha\in \mathcal A_i $. The diffusion parameter $ \mu $ has the same value on the two sides of an artificial vertex. Similarly, the function $ a $ does not have jumps across an artificial vertex.

    Let us consider the linear differential operator:

    $ \mathcal{L}u\left(x\right) = \mathcal{L}_{\alpha}u\left(x\right) := \mu_{\alpha}\,\partial^{2}u\left(x\right) + a|_{\Gamma_{\alpha}}\left(x\right)\partial u\left(x\right), \quad\text{if } x\in\Gamma_{\alpha}\setminus\mathcal{V}, $
    (8)

    with domain

    $ D\left(\mathcal{L}\right) := \left\{ u\in C^{2}\left(\Gamma\right) : \sum\limits_{\alpha\in\mathcal{A}_{i}} p_{i\alpha}\,\partial_{\alpha}u\left(\nu_{i}\right) = 0, \text{ for all } i\in I \right\}. $
    (9)

    Remark 3. Note that in the definition of $ D\left(\mathcal{L}\right) $, the condition at boundary vertices boils down to a Neumann condition.

    Freidlin and Sheu proved in [15] that

    1. The operator $ \mathcal L $ is the infinitesimal generator of a Feller-Markov process on $ \Gamma $ with continuous sample paths. The operators $ \mathcal L_\alpha $ and the transmission conditions at the vertices

    $ \sum\limits_{\alpha\in\mathcal{A}_{i}} p_{i\alpha}\,\partial_{\alpha}u\left(\nu_{i}\right) = 0 $
    (10)

    define such a process in a unique way, see also [16, Theorem 3.1]. The process can be written $ (X_t, \alpha_t) $ where $ X_t\in \Gamma_{\alpha_t} $. If $ X_t = \nu_i $, $ i\in I $, $ \alpha_t $ is arbitrarily chosen as the smallest index in $ \mathcal A_i $. Setting $ x_t = \pi_{\alpha_t}(X_t) $ defines the process $ x_t $ with values in $ [0,1] $.

    2. There exist

    (a) a one dimensional Wiener process $ W_t $,

    (b) continuous non-decreasing processes $ \ell_{i,t} $, $ i\in I $, which are measurable with respect to the $ \sigma $-field generated by $ (X_t,\alpha_t) $,

    (c) continuous non-increasing processes $ h_{i,t} $, $ i\in I $, which are measurable with respect to the $ \sigma $-field generated by $ (X_t,\alpha_t) $,

    such that

    $ \begin{array}[c]{l} dx_{t} = \sqrt{2\mu_{\alpha_{t}}}\,dW_{t} + a_{\alpha_{t}}\left(x_{t}\right)dt + d\ell_{i,t} + dh_{i,t},\\ \ell_{i,t} \text{ increases only when } X_{t} = \nu_{i} \text{ and } x_{t} = 0,\\ h_{i,t} \text{ decreases only when } X_{t} = \nu_{i} \text{ and } x_{t} = 1. \end{array} $
    (11)

    3. The following Ito formula holds: for any real valued function $ u \in C^2(\Gamma) $:

    $ \begin{array}[c]{rl} u\left(X_{t}\right) = & u\left(X_{0}\right) + \sum\limits_{\alpha\in\mathcal{A}} \displaystyle\int_{0}^{t} \mathbb{1}_{\{X_{s}\in\Gamma_{\alpha}\setminus\mathcal{V}\}} \Bigl( \bigl(\mu_{\alpha}\partial^{2}u\left(X_{s}\right) + a\left(X_{s}\right)\partial u\left(X_{s}\right)\bigr)ds + \sqrt{2\mu_{\alpha}}\,\partial u\left(X_{s}\right)dW_{s} \Bigr)\\ & + \sum\limits_{i\in I}\sum\limits_{\alpha\in\mathcal{A}_{i}} p_{i\alpha}\,\partial_{\alpha}u\left(\nu_{i}\right)\left(\ell_{i,t}+h_{i,t}\right). \end{array} $
    (12)

    Remark 4. The assumption that all the edges have unit length is not restrictive, because we can always rescale the constants $ \mu_\alpha $ and the piecewise continuous function $ a $. The Ito formula in (12) holds when this assumption is not satisfied.
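The representation above can be simulated directly. The sketch below (an illustrative Euler scheme, not taken from the paper) runs the walk on a star with three unit edges, zero drift, and equal diffusions $ \mu_\alpha = 1 $: upon crossing the junction the walk enters edge $ \alpha $ with probability $ p_\alpha $, and it reflects at the boundary vertices. With equal $ \mu_\alpha $ and equal lengths, each excursion from the junction has the same law on every edge, so the occupation fractions approach $ p_\alpha $; with unequal $ \mu_\alpha $ the crossing rule requires more care, consistently with the fact that the invariant density is then discontinuous at the vertex.

```python
# Monte Carlo sketch (illustrative scheme and parameters) of the walk of
# Section 1.2 on a star with three unit edges, a = 0, mu_alpha = 1.
# At the junction (x < 0) the walk keeps the overshoot |x| and picks edge
# alpha with probability p_alpha; at the tips (x > 1) it reflects.
import math
import random

def simulate(p, steps=200_000, dt=1e-3, seed=0):
    rng = random.Random(seed)
    edge, x = 0, 0.5
    time_on = [0.0] * len(p)
    sigma = math.sqrt(2.0 * dt)          # sqrt(2 mu dt) with mu = 1
    for _ in range(steps):
        x += sigma * rng.gauss(0.0, 1.0)
        if x > 1.0:                      # reflect at the boundary vertex
            x = 2.0 - x
        if x < 0.0:                      # crossed the junction:
            x = -x                       # keep the overshoot...
            r, edge = rng.random(), 0
            while edge < len(p) - 1 and r > sum(p[: edge + 1]):
                edge += 1                # ...pick edge alpha w.p. p_alpha
        time_on[edge] += dt
    total = sum(time_on)
    return [t / total for t in time_on]

fracs = simulate([0.5, 0.3, 0.2])
print(fracs)   # occupation fractions, ordered like p
```

With a long run and a fixed seed, the three occupation fractions come out in the same order as $ (p_\alpha)_\alpha $.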

    Consider the invariant measure associated with the process $ X_t $. We may assume that it is absolutely continuous with respect to the Lebesgue measure on $ \Gamma $. Let $ m $ be its density:

    $ \mathbb{E}\left[u\left(X_{t}\right)\right] := \int_{\Gamma} u\left(x\right)m\left(x\right)dx, \quad\text{ for all } u\in PC\left(\Gamma\right). $
    (13)

    We focus on functions $ u\in D\left(\mathcal{L}\right) $. Taking the time-derivative of both sides of (13), Ito's formula (12) and (10) lead to $ \mathbb{E}\left[ \mathbb{1}_{\{X_t\notin \mathcal V\}} \left(a\partial u (X_t)+\mu\partial^{2}u(X_t)\right) \right] = 0 $. This implies that

    $ \int_{\Gamma}\left(a\left(x\right)\partial u\left(x\right) + \mu\,\partial^{2}u\left(x\right)\right) m\left(x\right)dx = 0. $
    (14)

    Since for $ \alpha \in \mathcal A $, any smooth function on $ \Gamma $ compactly supported in $ \Gamma_\alpha \backslash \mathcal V $ clearly belongs to $ D({\mathcal L}) $, (14) implies that $ m $ satisfies

    $ -\mu_{\alpha}\partial^{2}m + \partial\left(m\,a\right) = 0 $
    (15)

    in the sense of distributions in the edges $ \Gamma_\alpha \backslash \mathcal V $, $ \alpha\in \mathcal A $. This implies that there exists a real number $ c_\alpha $ such that

    $ \mu_{\alpha}\,\partial m|_{\Gamma_{\alpha}} = m|_{\Gamma_{\alpha}}\, a|_{\Gamma_{\alpha}} - c_{\alpha}. $
    (16)

    So $ m|_{\Gamma_\alpha} $ is $ C^1 $ regular, and (16) is true pointwise. Using this information and recalling (14), we find that, for all $ u\in D( \mathcal L) $,

    $ \sum\limits_{i\in I}\sum\limits_{\alpha\in\mathcal{A}_{i}} \mu_{\alpha}\, m|_{\Gamma_{\alpha}}\left(\nu_{i}\right)\partial_{\alpha}u\left(\nu_{i}\right) + \sum\limits_{\beta\in\mathcal{A}} \int_{\Gamma_{\beta}} \partial u|_{\Gamma_{\beta}}\left(x\right) \left(-\mu_{\beta}\,\partial m|_{\Gamma_{\beta}}\left(x\right) + a|_{\Gamma_{\beta}}\left(x\right) m|_{\Gamma_{\beta}}\left(x\right)\right) dx = 0. $

    This and (16) imply that

    $ \sum\limits_{i\in I}\sum\limits_{\alpha\in\mathcal{A}_{i}} \mu_{\alpha}\, m|_{\Gamma_{\alpha}}\left(\nu_{i}\right)\partial_{\alpha}u\left(\nu_{i}\right) + \sum\limits_{\beta\in\mathcal{A}} c_{\beta} \int_{\Gamma_{\beta}} \partial u|_{\Gamma_{\beta}}\left(x\right) dx = 0. $
    (17)

    For all $ i\in I $, it is possible to choose a function $ u\in D( \mathcal L) $ such that

    1. $ u(\nu_j) = \delta_{i,j} $ for all $ j\in I $;

    2. $ \partial_\alpha u(\nu_j) = 0 $ for all $ j\in I $ and $ \alpha\in \mathcal A_j $.

    Using such a test-function in (17) implies that for all $ i\in I $,

    $ 0 = \sum\limits_{\beta\in\mathcal{A}} c_{\beta}\int_{\Gamma_{\beta}} \partial u|_{\Gamma_{\beta}}\left(x\right)dx = \sum\limits_{j\in I}\sum\limits_{\alpha\in\mathcal{A}_{j}} c_{\alpha}\, n_{j\alpha}\, u|_{\Gamma_{\alpha}}\left(\nu_{j}\right) = \sum\limits_{\alpha\in\mathcal{A}_{i}} n_{i\alpha}\, c_{\alpha}, $
    (18)

    where $ n_{i\alpha} $ is defined in (6).

    For all $ i\in I $ and $ \alpha,\beta\in \mathcal A_i $, it is possible to choose a function $ u\in D( \mathcal L) $ such that

    1. $ u $ takes the same value at each vertex of $ \Gamma $, thus $ \int_{\Gamma_\delta} \partial u|_{\Gamma_\delta}(x)dx = 0 $ for all $ \delta\in \mathcal A $;

    2. $ \partial_\alpha u(\nu_i) = 1/p_{i\alpha} $, $ \partial_\beta u(\nu_i) = -1/p_{i\beta} $ and all the other first order directional derivatives of $ u $ at the vertices are $ 0 $.

    Using such a test-function in (17) yields

    $ \dfrac{m|_{\Gamma_{\alpha}}\left(\nu_{i}\right)}{\gamma_{i\alpha}} = \dfrac{m|_{\Gamma_{\beta}}\left(\nu_{i}\right)}{\gamma_{i\beta}},\quad\text{for all }\alpha,\beta\in\mathcal{A}_{i},\nu_{i}\in\mathcal{V}, $

    in which

    $ \gamma_{i\alpha} = \frac{p_{i\alpha}}{\mu_{\alpha}}, \quad\text{for all } i\in I,\;\alpha\in\mathcal{A}_{i}. $
    (19)

    Next, for $ i\in I $, multiplying (16) at $ x = \nu_i $ by $ n_{i\alpha} $ for all $ \alpha\in \mathcal A_i $, then summing over all $ \alpha\in \mathcal A_i $, we get $ \sum_{\alpha\in \mathcal A_i} \Bigl[\mu_{\alpha}\partial_\alpha m \left(\nu_{i}\right) - n_{i\alpha}\Bigl( m|_{\Gamma_\alpha}\left(\nu_{i}\right) a|_{\Gamma_\alpha}\left(\nu_{i}\right)-c_\alpha\Bigr)\Bigr] = 0 $, and using (18), we obtain that

    $ \sum\limits_{\alpha\in\mathcal{A}_{i}} \Bigl[\mu_{\alpha}\,\partial_{\alpha}m\left(\nu_{i}\right) - n_{i\alpha}\, a|_{\Gamma_{\alpha}}\left(\nu_{i}\right) m|_{\Gamma_{\alpha}}\left(\nu_{i}\right)\Bigr] = 0, \quad\text{for all } i\in I. $
    (20)

    Summarizing, we get the following boundary value problem for $ m $ (recalling that the coefficients $ n_{i\alpha} $ are defined in (6)):

    $ \begin{cases} -\mu_{\alpha}\partial^{2}m + \partial\left(m\,a\right) = 0, & x\in\Gamma_{\alpha}\setminus\mathcal{V},\;\alpha\in\mathcal{A},\\ \sum\limits_{\alpha\in\mathcal{A}_{i}} \Bigl[\mu_{\alpha}\,\partial_{\alpha}m\left(\nu_{i}\right) - n_{i\alpha}\, a|_{\Gamma_{\alpha}}\left(\nu_{i}\right) m|_{\Gamma_{\alpha}}\left(\nu_{i}\right)\Bigr] = 0, & \nu_{i}\in\mathcal{V},\\ \dfrac{m|_{\Gamma_{\alpha}}\left(\nu_{i}\right)}{\gamma_{i\alpha}} = \dfrac{m|_{\Gamma_{\beta}}\left(\nu_{i}\right)}{\gamma_{i\beta}}, & \alpha,\beta\in\mathcal{A}_{i},\;\nu_{i}\in\mathcal{V}. \end{cases} $
    (21)
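On a star-shaped network, system (21) can be solved in closed form when the drift is constant on each edge, which gives a useful sanity check: orienting every edge away from the junction $ \nu_0 $ (so $ \pi_\alpha(0) = \nu_0 $), the Neumann-type condition at each boundary vertex forces the constants $ c_\alpha $ of (16) to vanish, hence $ \mu_\alpha \partial m = a_\alpha m $ and $ m_\alpha(y) = K_\alpha e^{a_\alpha y/\mu_\alpha} $; the jump condition then fixes $ K_\alpha = C\,\gamma_{0\alpha} $, and normalization fixes $ C $. The sketch below (illustrative coefficients) assembles this solution and verifies the transmission conditions of (21).

```python
# Closed-form solution of (21) on a star: three unit edges meet at the
# junction nu_0 with pi_alpha(0) = nu_0; the drift a_alpha is constant on
# each edge. Zero flux gives m_alpha(y) = K_alpha * exp(a_alpha y / mu_alpha)
# with K_alpha = C * gamma_alpha. All numbers are illustrative.
import math

mu = [0.5, 1.0, 2.0]
p = [0.2, 0.3, 0.5]                    # p_{0 alpha}, summing to 1
a = [0.3, -0.4, 0.1]                   # constant drift on each edge
gamma = [pi / mi for pi, mi in zip(p, mu)]   # gamma_{0 alpha} = p / mu

def m_raw(alpha, y):
    return gamma[alpha] * math.exp(a[alpha] * y / mu[alpha])

# Mass of the unnormalized density, edge by edge (closed-form integral).
mass = sum(gamma[al] * (mu[al] / a[al]) * (math.exp(a[al] / mu[al]) - 1.0)
           for al in range(3))

def m(alpha, y):                        # normalized: total mass 1
    return m_raw(alpha, y) / mass

# Jump condition of (21) at the junction: m|(nu_0) / gamma equal on all edges.
ratios = [m(al, 0.0) / gamma[al] for al in range(3)]
# Flux condition of (21): n_{0 alpha} = -1 since nu_0 = pi_alpha(0) and
# partial_alpha m(nu_0) = -m'_alpha(0), so each term is
# -mu_alpha m'_alpha(0) + a_alpha m_alpha(0), which vanishes edge by edge.
flux = sum(-mu[al] * (a[al] / mu[al]) * m(al, 0.0) + a[al] * m(al, 0.0)
           for al in range(3))
print(ratios)   # three equal numbers
print(flux)     # 0.0
```

The density is discontinuous at $ \nu_0 $ (the edge values are $ C\gamma_{0\alpha} $, which differ across edges), yet the ratios $ m|_{\Gamma_\alpha}(\nu_0)/\gamma_{0\alpha} $ coincide, exactly as the third line of (21) prescribes.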

    Consider a continuum of indistinguishable agents moving on the network $ \Gamma $. The state of a representative agent at time $ t $ is a time-continuous controlled stochastic process $ X_t $ as defined in Section 1.2, where the control is the drift $ a_t $, supposed to be of the form $ a_t = a(X_t) $. The function $ X\mapsto a(X) $ is the feedback. Let $ m(\cdot, t) $ be the probability measure on $ \Gamma $ that describes the distribution of states at time $ t $.

    For a representative agent, the optimal control problem is of the form:

    $ \rho := \inf\limits_{a_{s}} \liminf\limits_{T\to+\infty} \frac{1}{T}\, \mathbb{E}_{x}\left[\int_{0}^{T} L\left(X_{s},a_{s}\right) + V\left[m\left(\cdot,s\right)\right]\left(X_{s}\right) ds\right], $
    (22)

    where $ \mathbb{E}_{x} $ stands for the expectation conditioned by the event $ X_0 = x $. The running cost depends separately on the control and on the distribution of states.

    ● The contribution of the control involves the Lagrangian $ L $, i.e., a real valued function defined on $ \left(\cup_{\alpha \in \mathcal{A}} \Gamma_\alpha \backslash \mathcal{V} \right) \times \mathbb R $. If $ x\in \Gamma_\alpha \backslash \mathcal{V} $ and $ a\in \mathbb R $, $ L(x,a) = L_\alpha(\pi_\alpha^{-1}(x),a) $, where $ L_\alpha $ is a continuous real valued function defined on $ [0,\ell_\alpha]\times \mathbb R $. We assume that $ \lim_{|a|\to \infty} \inf_{y\in \Gamma_\alpha} {L_\alpha(y,a)}/{|a|} = +\infty $.

    ● The contribution of the distribution of states involves the coupling cost operator, which can either be nonlocal, i.e., $ V:\mathcal{P}\left(\Gamma\right)\rightarrow \mathcal{C}^2 (\Gamma) $ (where $ \mathcal{P}\left(\Gamma\right) $ is the set of Borel probability measures on $ \Gamma $), or local, i.e., $ V[m](x) = F(m(x)) $ for a continuous function $ F: \mathbb R^+ \to \mathbb R $, assuming that $ m $ is absolutely continuous with respect to the Lebesgue measure and identifying it with its density.

    Further assumptions on $ L $ and $ V $ will be made below.

    Let us assume that there is an optimal feedback law, i.e. a function $ a^\star $ defined on $ \Gamma $ which is sufficiently regular in the edges of the network, such that the optimal control at time $ t $ is given by $ a_t ^\star = a^\star(X_t) $. Then, almost surely if $ X_t\in \Gamma_\alpha\backslash \mathcal V $, $ d \pi_{\alpha}^{-1} (X_t) = a^\star_{\alpha}(\pi_{\alpha}^{-1} (X_t)) dt + \sqrt{ 2\mu_{\alpha}} dW_t $. An informal way to describe the behavior of the process at the vertices is as follows: if $ X_t $ hits $ \nu_{i}\in\mathcal{V} $, then it enters $ \Gamma_{\alpha} $, $ \alpha\in\mathcal{A}_{i} $ with probability $ p_{i\alpha}>0 $.

    Under suitable assumptions, the Ito calculus recalled in Section 1.2 and the dynamic programming principle lead to the following ergodic Hamilton-Jacobi equation on $ \Gamma $, more precisely the following boundary value problem:

    $ \begin{cases} -\mu_{\alpha}\partial^{2}v + H\left(x,\partial v\right) + \rho = V[m]\left(x\right), & x\in\Gamma_{\alpha}\setminus\mathcal{V},\;\alpha\in\mathcal{A},\\ \sum\limits_{\alpha\in\mathcal{A}_{i}} \gamma_{i\alpha}\mu_{\alpha}\,\partial_{\alpha}v\left(\nu_{i}\right) = 0, & \nu_{i}\in\mathcal{V},\\ v|_{\Gamma_{\alpha}}\left(\nu_{i}\right) = v|_{\Gamma_{\beta}}\left(\nu_{i}\right), & \alpha,\beta\in\mathcal{A}_{i},\;\nu_{i}\in\mathcal{V},\\ \int_{\Gamma} v\left(x\right)dx = 0. \end{cases} $
    (23)

    We refer to [28,30] for the interpretation of the value function $ v $ and the ergodic cost $ \rho $.

    Let us comment on the different equations in (23):

    1. The Hamiltonian $ H $ is a real valued function defined on $ \left(\cup_{\alpha \in \mathcal{A}} \Gamma_\alpha \backslash \mathcal{V} \right) \times \mathbb R $. For $ x\in \Gamma_\alpha \backslash \mathcal{V} $ and $ p\in \mathbb R $,

    $ H\left(x,p\right) = \sup\limits_{a}\left\{ -a p-L_{\alpha}\left(\pi_\alpha^{-1}(x),a\right)\right\}. $

    The Hamiltonians $ H|_{\Gamma_\alpha\times \mathbb R} $ are supposed to be $ C^1 $ and coercive with respect to $ p $ uniformly in $ x $ (see Section 1.4.1).

    2. The second equation in (23) is a Kirchhoff transmission condition (or Neumann boundary condition if $ \nu_i\in\partial\Gamma $); it is the consequence of the assumption on the behavior of $ X_s $ at vertices. It involves the positive constants $ \gamma_{i\alpha} $ defined in (19).

    3. The third condition means in particular that $ v $ is continuous at the vertices.

    4. The fourth equation is a normalization condition.

    If (23) has a smooth solution, then it provides a feedback law for the optimal control problem, i.e.

    $ a^{\star}(x) = -\partial_{p}H\left(x,\partial v\left(x \right)\right). $
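For instance (an illustrative choice, not taken from the paper), the quadratic Lagrangian $ L_\alpha(y,a) = a^2/2 $ gives $ H(x,p) = \sup_a\{-ap - a^2/2\} = p^2/2 $, attained at $ a = -p $, so the feedback formula returns $ a^\star(x) = -\partial v(x) $. A brute-force check of the sup:

```python
# Brute-force Legendre-type transform for the illustrative quadratic
# Lagrangian L(a) = a^2 / 2: H(p) = sup_a { -a p - L(a) } = p^2 / 2,
# with maximizing control a = -p (the feedback -dH/dp evaluated at p).
def H_numeric(p, lo=-10.0, hi=10.0, n=20001):
    """sup over a fine grid of controls a of  -a p - a^2/2."""
    step = (hi - lo) / (n - 1)
    return max(-(lo + k * step) * p - 0.5 * (lo + k * step) ** 2
               for k in range(n))

def a_star_numeric(p, lo=-10.0, hi=10.0, n=20001):
    """The maximizing control, which should be close to a = -p."""
    step = (hi - lo) / (n - 1)
    return max((lo + k * step for k in range(n)),
               key=lambda a: -a * p - 0.5 * a * a)

print(H_numeric(3.0))       # close to 4.5 = 3**2 / 2
print(a_star_numeric(3.0))  # close to -3.0
```

The same brute-force sup can be used to tabulate $ H_\alpha $ for any Lagrangian satisfying the coercivity assumption of Section 1.3.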

    At the MFG equilibrium, $ m $ is the density of the invariant measure associated with the optimal feedback law, so, according to Section 1.2, it satisfies (21), where $ a $ is replaced by $ a^\star = -\partial_{p}H\left(x,\partial v\left(x\right)\right) $. We end up with the following system:

    $ \begin{cases} -\mu_{\alpha}\partial^{2}v + H\left(x,\partial v\right) + \rho = V[m], & x\in\Gamma_{\alpha}\setminus\mathcal{V},\;\alpha\in\mathcal{A},\\ -\mu_{\alpha}\partial^{2}m - \partial\left(m\,\partial_{p}H\left(x,\partial v\right)\right) = 0, & x\in\Gamma_{\alpha}\setminus\mathcal{V},\;\alpha\in\mathcal{A},\\ \sum\limits_{\alpha\in\mathcal{A}_{i}} \gamma_{i\alpha}\mu_{\alpha}\,\partial_{\alpha}v\left(\nu_{i}\right) = 0, & \nu_{i}\in\mathcal{V},\\ \sum\limits_{\alpha\in\mathcal{A}_{i}} \Bigl[\mu_{\alpha}\,\partial_{\alpha}m\left(\nu_{i}\right) + n_{i\alpha}\,\partial_{p}H|_{\Gamma_{\alpha}}\left(\nu_{i},\partial v|_{\Gamma_{\alpha}}\left(\nu_{i}\right)\right) m|_{\Gamma_{\alpha}}\left(\nu_{i}\right)\Bigr] = 0, & \nu_{i}\in\mathcal{V},\\ v|_{\Gamma_{\alpha}}\left(\nu_{i}\right) = v|_{\Gamma_{\beta}}\left(\nu_{i}\right), & \alpha,\beta\in\mathcal{A}_{i},\;\nu_{i}\in\mathcal{V},\\ \dfrac{m|_{\Gamma_{\alpha}}\left(\nu_{i}\right)}{\gamma_{i\alpha}} = \dfrac{m|_{\Gamma_{\beta}}\left(\nu_{i}\right)}{\gamma_{i\beta}}, & \alpha,\beta\in\mathcal{A}_{i},\;\nu_{i}\in\mathcal{V},\\ \int_{\Gamma} v\left(x\right)dx = 0, \quad \int_{\Gamma} m\left(x\right)dx = 1, \quad m\ge 0. \end{cases} $
    (24)

    At a vertex $ \nu_i $, $ i\in I $, the transmission conditions for both $ v $ and $ m $ consist of $ d_{\nu_{i}} = \sharp( \mathcal A_i) $ linear relations, which is the appropriate number of relations to have a well posed problem. If $ \nu_i\in \partial \Gamma $, there is of course only one Neumann like condition for $ v $ and for $ m $.

    Remark 5. In [7], the authors assume that $ \gamma_{i\alpha} = \gamma_{i\beta} $ for all $ i\in I $, $ \alpha,\beta\in\mathcal{A}_i $. Therefore, the density $ m $ does not have jumps across the transition vertices.

    Let $ (\mu_\alpha)_{\alpha\in \mathcal{A}} $ be a family of positive numbers, and for each $ i\in I $ let $ ( \gamma_{i\alpha})_{\alpha\in\mathcal{A}_{i}} $ be a family of positive numbers such that $ \sum_{\alpha\in \mathcal A_i} \gamma_{i\alpha}\mu_\alpha = 1 $.

    Consider the Hamiltonian $ H:\Gamma\times\mathbb{R}\rightarrow\mathbb{R} $. We assume that for all $ \alpha\in\mathcal{A} $, we can define $ H_\alpha: [0,\ell_{\alpha}]\times\mathbb{R}\to \mathbb R $ and $ H|_{\Gamma_{\alpha}}: \Gamma_{\alpha}\times\mathbb{R}\rightarrow\mathbb{R} $ by (2) and, for some positive constants $ C_{0},C_{1},C_{2} $ and $ q\in\left(1,2\right] $,

    $ H_{\alpha}\in C^{1}\left(\left[0,\ell_{\alpha}\right]\times\mathbb{R}\right); $
    (25)
    $ H_{\alpha}\left(x,\cdot\right)\text{ is convex in } p \text{ for each } x\in\left[0,\ell_{\alpha}\right]; $
    (26)
    $ H_{\alpha}\left(x,p\right)\ge C_{0}\left|p\right|^{q}-C_{1} \quad\text{ for } \left(x,p\right)\in\left[0,\ell_{\alpha}\right]\times\mathbb{R}; $
    (27)
    $ \left|\partial_{p}H_{\alpha}\left(x,p\right)\right|\le C_{2}\left(\left|p\right|^{q-1}+1\right) \quad\text{ for } \left(x,p\right)\in\left[0,\ell_{\alpha}\right]\times\mathbb{R}. $
    (28)

    Remark 6. The Hamiltonian $ H $ is discontinuous at the vertices in general, although $ H_\alpha $ is $ C^1 $ up to the endpoints of $ \left[0,\ell_{\alpha}\right] $.

    Remark 7. From (28), there exists a positive constant $ C_{q} $ such that

    $ \left|H_{\alpha}\left(x,p\right)\right|\le C_{q}\left(\left|p\right|^{q}+1\right), \quad\text{for all } \left(x,p\right)\in\left[0,\ell_{\alpha}\right]\times\mathbb{R}. $
    (29)
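As an illustration (not taken from the paper), assumptions (25)-(28) hold for the Hamiltonians coming from power Lagrangians:

```latex
% Illustrative example: for the power Lagrangian with conjugate exponent
% q' = q/(q-1),
\[
  L_\alpha(y,a) = \frac{|a|^{q'}}{q'}, \qquad
  H_\alpha(y,p) = \sup_{a\in\mathbb{R}}\Bigl\{-ap - \frac{|a|^{q'}}{q'}\Bigr\}
                = \frac{|p|^{q}}{q},
\]
% and (25)-(28) hold: \partial_p H_\alpha(y,p) = |p|^{q-2}p is continuous
% because q > 1; p \mapsto |p|^q/q is convex; (27) holds with C_0 = 1/q and
% C_1 = 0; and (28) holds with C_2 = 1 since
% |\partial_p H_\alpha(y,p)| = |p|^{q-1}.
```

In particular $ q = 2 $, i.e., the quadratic case $ H_\alpha(y,p) = p^2/2 $, is admissible.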

    Below, we shall focus on local coupling operators $ V $, namely

    $ V\left[\tilde{m}\right]\left(x\right) = F\left(m\left(x\right)\right) \quad\text{ with } F\in C\left(\left[0,+\infty\right);\mathbb{R}\right), $
    (30)

    for all $ \tilde{m} $ which are absolutely continuous with respect to the Lebesgue measure and such that $ d\tilde{m}\left(x\right) = m\left(x\right)dx $. We shall also suppose that $ F $ is bounded from below, i.e., there exists a positive constant $ M $ such that

    $ F\left(r\right)\ge -M, \quad\text{for all } r\in\left[0,+\infty\right). $
    (31)

    Let us introduce two function spaces on $ \Gamma $, which will be the key ingredients in order to build weak solutions of (24).

    Definition 1.2. We define two Sobolev spaces, $ V: = H^1(\Gamma) $, see Definition 1.1, and

    $ W := \left\{ w:\Gamma\to\mathbb{R} \;:\; w_{\alpha}\in H^{1}\left(0,\ell_{\alpha}\right) \text{ for all } \alpha\in\mathcal{A},\; \frac{w|_{\Gamma_{\alpha}}\left(\nu_{i}\right)}{\gamma_{i\alpha}} = \frac{w|_{\Gamma_{\beta}}\left(\nu_{i}\right)}{\gamma_{i\beta}} \text{ for all } i\in I,\;\alpha,\beta\in\mathcal{A}_{i} \right\}, $
    (32)

    which is also a Hilbert space, endowed with the norm $ \left\Vert w\right\Vert _{W} = \left(\sum\limits_{\alpha\in\mathcal{A}}\left\Vert w_{\alpha}\right\Vert^2 _{H^{1}\left(0,\ell_{\alpha}\right)}\right)^{\frac 1 2} $.

    Remark 8. We point out that, following Definition 1.1, functions in $ V $ are continuous on $ \Gamma $. By contrast, functions in $ W $ are discontinuous in general.

    Definition 1.3. Let the functions $ \psi\in W $ and $ \phi\in PC(\Gamma) $ be defined as follows:

    $ \begin{cases} \psi_{\alpha}\text{ is affine on }\left(0,\ell_{\alpha}\right),\\ \psi|_{\Gamma_{\alpha}}\left(\nu_{i}\right) = \gamma_{i\alpha}, & \text{if }\alpha\in\mathcal{A}_{i},\\ \psi\text{ is constant on the edges }\Gamma_{\alpha}\text{ which touch the boundary of }\Gamma. \end{cases}
    $
    (33)
    $ \begin{cases} \phi_{\alpha}\text{ is affine on }\left(0,\ell_{\alpha}\right),\\ \phi|_{\Gamma_{\alpha}}\left(\nu_{i}\right) = \dfrac{1}{\gamma_{i\alpha}}, & \text{if }\alpha\in\mathcal{A}_{i},\\ \phi\text{ is constant on the edges }\Gamma_{\alpha}\text{ which touch the boundary of }\Gamma. \end{cases}
    $
    (34)

    Note that both functions $ \psi,\phi $ are positive and bounded. We set $ \overline{\psi} = \max_{\Gamma}\psi $, $ \underline{\psi} = \min_{\Gamma}\psi $, $ \overline{\phi} = \max_{\Gamma}\phi $, $ \underline{\phi} = \min_{\Gamma}\phi $.

    Remark 9. One can see that $ v\in V\longmapsto v\psi $ is an isomorphism from $ V $ onto $ W $ and $ w\in W\longmapsto w\phi $ is the inverse isomorphism.
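    The isomorphisms of Remark 9 can be checked with elementary arithmetic at a single transition vertex; in the toy sketch below, the two weights $ \gamma_{i\alpha} $ are arbitrary positive numbers and only the traces at the vertex are represented:

```python
# Toy numerical check of Remark 9 at one transition vertex nu_i with two
# incident edges; gamma values are arbitrary positive choices (illustrative).
gamma = {"alpha": 0.3, "beta": 0.7}
v_at_nu = 2.5                                  # v in V is continuous: one trace
psi = dict(gamma)                              # psi|_{Gamma_e}(nu_i) = gamma_e
phi = {e: 1.0 / g for e, g in gamma.items()}   # phi|_{Gamma_e}(nu_i) = 1/gamma_e
w = {e: v_at_nu * psi[e] for e in gamma}       # w = v*psi: one trace per edge
ratios = {e: w[e] / gamma[e] for e in gamma}   # jump compatibility defining W
v_back = {e: w[e] * phi[e] for e in gamma}     # multiplying by phi recovers v
print(ratios, v_back)
```

    The ratios $ w|_{\Gamma_{\alpha}}(\nu_{i})/\gamma_{i\alpha} $ agree across the two edges, so $ v\psi\in W $, and multiplying back by $ \phi $ returns the original trace of $ v $.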

    Definition 1.4. Let the function space $ \mathcal{W}\subset W $ be defined as follows:

    $ \mathcal{W}: = \left\{ m:\Gamma\rightarrow\mathbb{R}:\; m_{\alpha}\in C^{1}\left(\left[0,\ell_{\alpha}\right]\right)\text{ for all }\alpha\in\mathcal{A},\;\dfrac{m|_{\Gamma_{\alpha}}\left(\nu_{i}\right)}{\gamma_{i\alpha}} = \dfrac{m|_{\Gamma_{\beta}}\left(\nu_{i}\right)}{\gamma_{i\beta}}\text{ for all }i\in I,\;\alpha,\beta\in\mathcal{A}_{i}\right\}.
    $
    (35)

    Remark 10. A function $ m\in \mathcal{W} $ is in general discontinuous at the vertices of $ \Gamma $, although for any $ \alpha\in \mathcal A $, $ m_\alpha $ is $ C^1 $ in $ \left[0,\ell_{\alpha}\right] $.

    Definition 1.5. A solution of the Mean Field Games system (24) is a triple $ \left(v,\rho,m\right)\in C^{2}\left(\Gamma\right)\times\mathbb{R}\times\mathcal{W} $ such that $ \left(v,\rho\right) $ is a classical solution of

    $ \begin{cases} -\mu_{\alpha}\partial^{2}v+H\left(x,\partial v\right)+\rho = F\left(m\right), & \text{in }\Gamma_{\alpha}\backslash\mathcal{V},\ \alpha\in\mathcal{A},\\ \sum\limits_{\alpha\in\mathcal{A}_{i}}\gamma_{i\alpha}\mu_{\alpha}\partial_{\alpha}v\left(\nu_{i}\right) = 0, & \text{if }\nu_{i}\in\mathcal{V}, \end{cases}
    $
    (36)

    (note that $ v $ is continuous at the vertices from the definition of $ C^2 (\Gamma) $), and $ m $ satisfies

    $ \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\left[\mu_{\alpha}\partial m\,\partial u+m\,\partial_{p}H\left(x,\partial v\right)\partial u\right]dx = 0,\quad\text{for all }u\in V,
    $
    (37)

    where $ V $ is given in Definition 1.2.

    We are ready to state the main result:

    Theorem 1.6. If assumptions (25)-(28) and (30)-(31) are satisfied, then there exists a solution $ \left(v,m,\rho\right)\in C^{2}\left(\Gamma\right)\times\mathcal{W}\times\mathbb{R} $ of (24). If $ F $ is locally Lipschitz continuous, then $ v \in C^{2,1}(\Gamma) $. Moreover if $ F $ is strictly increasing, then the solution is unique.

    Remark 11. The proof of the existence result in [7] is valid only in the case when the coupling cost $ F $ is bounded.

    Remark 12. The existence result in Theorem 1.6 holds if we assume that the coupling operator $ V $ is non local and regularizing, i.e., $ V $ is a continuous map from $ \mathcal{P} $ to a bounded subset of $ \mathcal{F} $, with $ \mathcal{F}: = \left\{ f:\Gamma\rightarrow\mathbb{R}:\;f|_{\Gamma_{\alpha}}\in C^{0,\sigma}\left(\Gamma_{\alpha}\right)\right\} $. The proof, omitted in what follows, is similar to that of Lemma 4.1 below.

    This section contains elementary results on the solvability of some linear boundary value problems on $ \Gamma $. To the best of our knowledge, these results are not available in the literature.

    We recall that the constants $ \mu_\alpha $ and $ \gamma_{i\alpha} $ are defined in Section 1.2. Let $ \lambda $ be a positive number. We start with very simple linear boundary value problems, in which the only difficulty is the Kirchhoff condition:

    $ \begin{cases} -\mu_{\alpha}\partial^{2}v+\lambda v = f, & \text{in }\Gamma_{\alpha}\backslash\mathcal{V},\ \alpha\in\mathcal{A},\\ v|_{\Gamma_{\alpha}}\left(\nu_{i}\right) = v|_{\Gamma_{\beta}}\left(\nu_{i}\right), & \alpha,\beta\in\mathcal{A}_{i},\ i\in I,\\ \sum\limits_{\alpha\in\mathcal{A}_{i}}\gamma_{i\alpha}\mu_{\alpha}\partial_{\alpha}v\left(\nu_{i}\right) = 0, & i\in I, \end{cases}
    $
    (38)

    where $ f\in W' $, the topological dual of $ W $.

    Remark 13. We have already noticed that, if $ \nu_{i}\in\partial\Gamma $, the last condition in (38) boils down to a standard Neumann boundary condition $ \partial_{\alpha}v\left(\nu_{i}\right) = 0 $, in which $ \alpha $ is the unique element of $ \mathcal A_i $. Otherwise, if $ \nu_{i}\in\mathcal{V}\backslash\partial\Gamma $, the last condition in (38) is the Kirchhoff condition discussed above.

    Definition 2.1. A weak solution of (38) is a function $ v\in V $ such that

    $ \mathscr{B}_{\lambda}\left(v,w\right) = \left\langle f,w\right\rangle _{W',W},\quad\text{for all }w\in W,
    $
    (39)

    where $ \mathscr{B}_{\lambda}:V\times W\rightarrow\mathbb{R} $ is the bilinear form defined as follows:

    $ \mathscr{B}_{\lambda}\left(v,w\right) = \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\left(\mu_{\alpha}\partial v\partial w+\lambda vw\right)dx. $

    Remark 14. Formally, (39) is obtained by testing the first line of (38) with $ w\in W $, integrating by parts the left hand side on each $ \Gamma_\alpha $ and summing over $ \alpha\in \mathcal A $. There is no contribution from the vertices, thanks to the Kirchhoff conditions on the one hand, and to the jump conditions satisfied by the elements of $ W $ on the other hand.

    Remark 15. Using the fact that the $ \Gamma_\alpha $ are line segments, i.e. one-dimensional sets, and solving the differential equation edge by edge, we see that if $ v $ is a weak solution of $ (38) $ with $ f\in PC\left(\Gamma\right) $, then $ v\in C^{2}\left(\Gamma\right) $.

    Let us first study the homogeneous case, i.e. $ f = 0 $.

    Lemma 2.2. The function $ v = 0 $ is the unique solution of the following boundary value problem

    $ \begin{cases} -\partial^{2}v+\lambda v = 0, & \text{in }\Gamma_{\alpha}\backslash\mathcal{V},\ \alpha\in\mathcal{A},\\ v|_{\Gamma_{\alpha}}\left(\nu_{i}\right) = v|_{\Gamma_{\beta}}\left(\nu_{i}\right), & \alpha,\beta\in\mathcal{A}_{i},\ i\in I,\\ \sum\limits_{\alpha\in\mathcal{A}_{i}}\gamma_{i\alpha}\mu_{\alpha}\partial_{\alpha}v\left(\nu_{i}\right) = 0, & i\in I. \end{cases}
    $
    (40)

    Proof. Let $ \mathcal{I}_{i}: = \left\{ k\in I:\; k\not = i;\; \nu_{k}\in\Gamma_{\alpha}\hbox{ for some }\alpha\in\mathcal{A}_{i}\right\} $ be the set of indices of the vertices which are connected to $ \nu_{i} $. By Remark 1, it is not restrictive to assume (in the remainder of the proof) that for all $ k\in\mathcal{I}_{i} $, $ \Gamma_{\alpha} = \Gamma_{\alpha_{ik}} = \left[\nu_{i},\nu_{k}\right] $ is oriented from $ \nu_{i} $ to $ \nu_{k} $.

    For $ k\in\mathcal{I}_{i} $ and $ \Gamma_{\alpha} = \left[\nu_{i},\nu_{k}\right] $, using the parametrization (1), the linear differential equation in (40) restricted to the edge $ \Gamma_{\alpha} $ reads

    $ -v_{\alpha}''\left(y\right)+\lambda v_{\alpha}\left(y\right) = 0,\quad\text{in }\left(0,\ell_{\alpha}\right), $

    whose general solution is

    $ v_{\alpha}\left(y\right) = \zeta_{\alpha}\cosh\left(\sqrt{\lambda}y\right)+\xi_{\alpha}\sinh\left(\sqrt{\lambda}y\right),
    $
    (41)

    with

    $ \begin{cases} \zeta_{\alpha} = v_{\alpha}\left(0\right) = v\left(\nu_{i}\right),\\ \zeta_{\alpha}\cosh\left(\sqrt{\lambda}\ell_{\alpha}\right)+\xi_{\alpha}\sinh\left(\sqrt{\lambda}\ell_{\alpha}\right) = v_{\alpha}\left(\ell_{\alpha}\right) = v\left(\nu_{k}\right). \end{cases}
    $

    It follows that $ \partial_{\alpha}v\left(\nu_{i}\right) = -\sqrt{\lambda}\xi_{\alpha} = -\dfrac{\sqrt{\lambda}}{\sinh\left(\sqrt{\lambda}\ell_{\alpha}\right)}\left[v\left(\nu_{k}\right)-v\left(\nu_{i}\right)\cosh\left(\sqrt{\lambda}\ell_{\alpha}\right)\right] $. Hence, the transmission condition in (40) becomes: for all $ i\in I $,

    $ 0 = \sum\limits_{\alpha\in\mathcal{A}_{i}}\gamma_{i\alpha}\mu_{\alpha}\partial_{\alpha}v\left(\nu_{i}\right) = \sum\limits_{k\in\mathcal{I}_{i}}\sqrt{\lambda}\,\gamma_{i\alpha_{ik}}\mu_{\alpha_{ik}}\dfrac{\cosh\left(\sqrt{\lambda}\ell_{\alpha_{ik}}\right)}{\sinh\left(\sqrt{\lambda}\ell_{\alpha_{ik}}\right)}\,v\left(\nu_{i}\right)-\sum\limits_{k\in\mathcal{I}_{i}}\dfrac{\sqrt{\lambda}\,\gamma_{i\alpha_{ik}}\mu_{\alpha_{ik}}}{\sinh\left(\sqrt{\lambda}\ell_{\alpha_{ik}}\right)}\,v\left(\nu_{k}\right).
    $

    Therefore, we obtain a system of linear equations of the form $ MU = 0 $ with $ M = \left(M_{ij}\right)_{1\le i,j\le N} $, $ N = \sharp(I) $, and $ U = \left(v\left(\nu_{1}\right),\ldots,v\left(\nu_{N}\right)\right)^{T} $, where $ M $ is defined by

    $ \begin{cases} M_{ii} = \sum\limits_{k\in\mathcal{I}_{i}}\gamma_{i\alpha_{ik}}\mu_{\alpha_{ik}}\dfrac{\cosh\left(\sqrt{\lambda}\ell_{\alpha_{ik}}\right)}{\sinh\left(\sqrt{\lambda}\ell_{\alpha_{ik}}\right)}>0,\\ M_{ik} = -\dfrac{\gamma_{i\alpha_{ik}}\mu_{\alpha_{ik}}}{\sinh\left(\sqrt{\lambda}\ell_{\alpha_{ik}}\right)}<0, & k\in\mathcal{I}_{i},\\ M_{ik} = 0, & k\notin\mathcal{I}_{i}. \end{cases}
    $

    For all $ i\in I $, since $ \cosh\left(\sqrt{\lambda}\ell_{\alpha_{ik}}\right)>1 $ for all $ k\in\mathcal{I}_{i} $, the sum of the entries on each row of $ M $ is positive, so $ M $ is strictly diagonally dominant. Thus, $ M $ is invertible and $ U = 0 $ is the unique solution of the system. Finally, by solving the ODE in each edge $ \Gamma_{\beta} $ with $ v_{\beta}\left(0\right) = v_{\beta}\left(\ell_{\beta}\right) = 0 $, we get that $ v = 0 $ on $ \Gamma $.
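    As a purely illustrative sanity check (not part of the proof), one can assemble the matrix $ M $ from the proof above for a small triangular network and verify strict diagonal dominance numerically; all lengths $ \ell_{\alpha} $, weights $ \gamma_{i\alpha} $, constants $ \mu_{\alpha} $ and $ \lambda $ below are arbitrary positive choices.

```python
import numpy as np

# Triangular network: three transition vertices 0,1,2 and three edges.
# Each edge is (i, k, length l, mu); gamma[(vertex, edge index)] are the
# Kirchhoff weights. All values are arbitrary positive choices.
edges = [(0, 1, 1.0, 0.7), (1, 2, 0.5, 1.2), (0, 2, 2.0, 0.9)]
gamma = {(0, 0): 0.3, (1, 0): 0.5,
         (1, 1): 0.5, (2, 1): 0.4,
         (0, 2): 0.7, (2, 2): 0.6}
lam, N = 2.0, 3
M = np.zeros((N, N))
for a, (i, k, l, mu) in enumerate(edges):
    s = np.sqrt(lam) * l
    for p, q in [(i, k), (k, i)]:
        M[p, p] += gamma[(p, a)] * mu * np.cosh(s) / np.sinh(s)  # M_ii terms
        M[p, q] -= gamma[(p, a)] * mu / np.sinh(s)               # M_ik terms
# Each row sums to sum_k gamma*mu*(cosh - 1)/sinh > 0 since cosh(s) > 1.
row_sums = M.sum(axis=1)
print(row_sums)
```

    Positive row sums together with negative off-diagonal entries give strict diagonal dominance, hence invertibility, exactly as in the argument above.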

    Let us now study the non-homogeneous problem (38).

    Lemma 2.3. For any $ f $ in $ W' $, (38) has a unique weak solution $ v $ in $ V $, see Definition 1.2. Moreover, there exists a constant $ C $ such that $ \left\Vert v\right\Vert _{V}\le C\left\Vert f\right\Vert _{W'}. $

    Proof. First of all, we claim that for $ \lambda_{0}>0 $ large enough and any $ f\in W' $, the problem

    $ \mathscr{B}_{\lambda}\left(v,w\right)+\lambda_{0}\left(v,w\right) = \left\langle f,w\right\rangle _{W',W},\quad\text{for all }w\in W,
    $
    (42)

    has a unique solution $ v\in V $. Let us prove the claim. Let $ v\in V $, then $ \hat{w}: = v\psi $ belongs to $ W $, where $ \psi $ is given by Definition 1.3. Let us set $ \overline{\partial\psi}: = \max_{\Gamma}\left|\partial\psi\right| $ and $ \underline{\psi}: = \min_{\Gamma}\psi >0 $, ($ \partial\psi $ is bounded, see Definition 1.3); we get

    $ \mathscr{B}_{\lambda}\left(v,\hat{w}\right)+\lambda_{0}\left(v,\hat{w}\right) = \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\left[\mu_{\alpha}\left|\partial v\right|^{2}\psi+\mu_{\alpha}v\,\partial v\,\partial\psi+\left(\lambda+\lambda_{0}\right)v^{2}\psi\right]dx\ge\sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\left[\dfrac{\mu_{\alpha}\underline{\psi}}{2}\left|\partial v\right|^{2}+\left(\lambda_{0}\underline{\psi}-\dfrac{\mu_{\alpha}\overline{\partial\psi}^{2}}{2\underline{\psi}}\right)v^{2}\right]dx.
    $
    (43)

    When $ \lambda_{0}\ge\dfrac{\mu_{\alpha}}{2}+\dfrac{\mu_{\alpha}\overline{\partial\psi}^{2}}{2\underline{\psi}^{2}} $ for all $ \alpha\in\mathcal{A} $, we obtain that $ \mathscr{B}_{\lambda}\left(v,\hat{w}\right)+\lambda_{0}\left(v,\hat{w}\right)\ge\dfrac{\underline{\mu}\; \underline{\psi}}{2}\left\Vert v\right\Vert _{V}^{2}\ge\dfrac{\underline{\mu}\; \underline{\psi}}{2C_{\psi}}\left\Vert v\right\Vert _{V}\left\Vert \hat{w}\right\Vert _{W} $, using the fact that, from Remark 9, there exists a positive constant $ C_{\psi} $ such that $ \left\Vert v\psi\right\Vert _{W}\le C_{\psi}\left\Vert v\right\Vert _{V} $ for all $ v\in V $. This yields

    $ \inf\limits_{v\in V}\sup\limits_{w\in W}\dfrac{\mathscr{B}_{\lambda}\left(v,w\right)+\lambda_{0}\left(v,w\right)}{\left\Vert v\right\Vert _{V}\left\Vert w\right\Vert _{W}}\ge\dfrac{\underline{\mu}\,\underline{\psi}}{2C_{\psi}}.
    $
    (44)

    Using a similar argument for any $ w\in W $ and $ \hat{v} = w\phi $, where $ \phi $ is given in Definition 1.3, we obtain that for $ \lambda_{0} $ large enough, there exists a positive constant $ C_{\phi} $ such that

    $ \inf\limits_{w\in W}\sup\limits_{v\in V}\dfrac{\mathscr{B}_{\lambda}\left(v,w\right)+\lambda_{0}\left(v,w\right)}{\left\Vert w\right\Vert _{W}\left\Vert v\right\Vert _{V}}\ge\dfrac{\underline{\mu}\,\underline{\phi}}{2C_{\phi}}.
    $
    (45)

    From (44) and (45), by the Banach-Necas-Babuška lemma (see [13]), for $ \lambda_{0} $ large enough, for any $ f\in W' $, there exists a unique solution $ v\in V $ of (42) and $ \left\Vert v\right\Vert _{V}\le C\left\Vert f\right\Vert _{W'} $ for a positive constant $ C $. Hence, our claim is proved.

    Now, we fix $ \lambda_{0} $ large enough and we define the continuous linear operator $ \overline{R}_{\lambda_{0}}:W'\rightarrow V $ where $ \overline{R}_{\lambda_{0}}\left(f\right) = v $ is the unique solution of (42). Since the injection $ \mathcal{I} $ from $ V $ to $ W' $ is compact, $ \mathcal{I}\circ\overline{R}_{\lambda_{0}} $ is a compact operator from $ W' $ into $ W' $. By the Fredholm alternative (see [18]), one of the following assertions holds:

    $ \text{There exists }\overline{v}\in W'\backslash\left\{ 0\right\} \text{ such that }\left(Id-\lambda_{0}\left(\mathcal{I}\circ\overline{R}_{\lambda_{0}}\right)\right)\overline{v} = 0.
    $
    (46)
    $ \text{For any }g\in W'\text{, there exists a unique }\overline{v}\in W'\text{ such that }\left(Id-\lambda_{0}\left(\mathcal{I}\circ\overline{R}_{\lambda_{0}}\right)\right)\overline{v} = g.
    $
    (47)

    We claim that (47) holds. Indeed, assume by contradiction that (46) holds. Then there exists $ \overline{v}\ne0 $ such that $ \overline{v}\in V $ and $ \mathcal{I}\circ\overline{R}_{\lambda_{0}}\overline{v} = \dfrac{\overline{v}}{\lambda_{0}} $. Therefore, $ \overline{v}\in V $, and $ \mathscr{B}_{\lambda}\left(\dfrac{\overline{v}}{\lambda_{0}},w\right)+\lambda_{0}\left(\dfrac{\overline{v}}{\lambda_{0}},w\right) = \left(\overline{v},w\right) $, for all $ w\in W $. This yields that $ \mathscr{B}_{\lambda}\left(\overline{v},w\right) = 0 $ for all $ w\in W $ and by Lemma 2.2, we get that $ \overline{v} = 0 $, which leads us to a contradiction. Hence, our claim is proved.

    Then, (47) implies that there exists a positive constant $ C $ such that for all $ f\in W' $, (38) has a unique weak solution $ v $ and that $ \left\Vert v\right\Vert _{V}\le C\left\Vert f\right\Vert _{W'} $, see [11] for the details.

    Consider $ b\in PC\left(\Gamma\right) $. This paragraph is devoted to the following boundary value problem including a Kolmogorov equation

    $ \begin{cases} -\mu_{\alpha}\partial^{2}v+b\partial v = 0, & \text{in }\Gamma_{\alpha}\backslash\mathcal{V},\ \alpha\in\mathcal{A},\\ v|_{\Gamma_{\alpha}}\left(\nu_{i}\right) = v|_{\Gamma_{\beta}}\left(\nu_{i}\right), & \alpha,\beta\in\mathcal{A}_{i},\ i\in I,\\ \sum\limits_{\alpha\in\mathcal{A}_{i}}\gamma_{i\alpha}\mu_{\alpha}\partial_{\alpha}v\left(\nu_{i}\right) = 0, & i\in I. \end{cases}
    $
    (48)

    Definition 2.4. A weak solution of (48) is a function $ v\in V $ such that

    $ \mathscr{A}^{\star}\left(v,w\right) = 0,\quad\text{for all }w\in W, $

    where $ \mathscr{A}^{\star}:V\times W\rightarrow\mathbb{R} $ is the bilinear form defined by

    $ \mathscr{A^{\star}}\left(v,w\right): = \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\left(\mu_{\alpha}\partial v\partial w+b\partial vw\right)dx. $

    As in Remark 15, if $ v $ is a weak solution of $ (48) $, then $ v\in C^{2}\left(\Gamma\right) $.

    The uniqueness of solutions of (48) up to the addition of constants is obtained by using a maximum principle:

    Lemma 2.5. For $ b\in PC\left(\Gamma\right) $, the solutions of (48) are the constant functions on $ \Gamma $.

    Proof of Lemma 2.5. First of all, any constant function on $ \Gamma $ is a solution of (48). Now let $ v $ be a solution of (48); then $ v\in C^{2}\left(\Gamma\right) $. Assume that the maximum of $ v $ over $ \Gamma $ is achieved in $ \Gamma_{\alpha} $; by the maximum principle, it is achieved at some endpoint $ \nu_{i} $ of $ \Gamma_{\alpha} $. Without loss of generality, using Remark 1, we can assume that $ \pi_{\beta}\left(\nu_{i}\right) = 0 $ for all $ \beta\in\mathcal{A}_{i} $. We have $ \partial_{\beta}v\left(\nu_{i}\right)\ge 0 $ for all $ \beta\in\mathcal{A}_{i} $ because $ \nu_{i} $ is the maximum point of $ v $. Since all the coefficients $ \gamma_{i\beta},\mu_{\beta} $ are positive, by the Kirchhoff condition if $ \nu_{i} $ is a transition vertex, or by the Neumann boundary condition if $ \nu_{i} $ is a boundary vertex, we infer that $ \partial_{\beta}v\left(\nu_{i}\right) = 0 $ for all $ \beta\in\mathcal{A}_{i} $. This implies that $ u = \partial v_{\beta} $ solves the first order linear homogeneous differential equation $ \mu_{\beta}u' = b_{\beta}u $ on $ \left[0,\ell_{\beta}\right] $, with $ u\left(0\right) = 0 $. Therefore, $ \partial v_{\beta}\equiv 0 $ and $ v $ is constant on $ \Gamma_{\beta} $ for all $ \beta \in\mathcal{A}_{i} $. We can propagate this argument, starting from the vertices connected to $ \nu_i $. Since the network $ \Gamma $ is connected and $ v $ is continuous, we obtain that $ v $ is constant on $ \Gamma $.
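    The propagation step can be illustrated numerically on one edge (a toy sketch; the coefficient $ b $ and the value $ v(0) = 1.7 $ are arbitrary choices): since the slope $ u = \partial v_{\beta} $ solves the linear ODE $ \mu_{\beta}u' = b_{\beta}u $ coming from the first line of (48), the initial condition $ u(0) = 0 $ forces $ u\equiv 0 $, so $ v $ is constant along the edge.

```python
import numpy as np

# Toy illustration of the slope argument in Lemma 2.5 on one edge [0, 1]:
# u = dv/dy solves mu * u' = b * u with u(0) = 0, hence u stays identically 0
# and v is constant. b, mu and v(0) = 1.7 are arbitrary choices.
mu = 2.0
b = lambda y: np.sin(5.0 * y) - 0.3
y = np.linspace(0.0, 1.0, 1001)
u = np.zeros_like(y)
for j in range(len(y) - 1):              # explicit Euler for u' = (b/mu) * u
    u[j + 1] = u[j] + (y[j + 1] - y[j]) * b(y[j]) / mu * u[j]
# v is recovered by integrating its slope u (trapezoidal rule).
v = 1.7 + np.concatenate(([0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * np.diff(y))))
print(np.max(np.abs(u)), np.max(v) - np.min(v))
```

    The multiplicative updates preserve $ u = 0 $ exactly, so the reconstructed $ v $ has zero oscillation, mirroring the uniqueness-up-to-constants statement.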

    This paragraph is devoted to the dual boundary value problem of (48); it involves a Fokker-Planck equation:

    $ \begin{cases} -\mu_{\alpha}\partial^{2}m-\partial\left(bm\right) = 0, & \text{in }\Gamma_{\alpha}\backslash\mathcal{V},\ \alpha\in\mathcal{A},\\ \dfrac{m|_{\Gamma_{\alpha}}\left(\nu_{i}\right)}{\gamma_{i\alpha}} = \dfrac{m|_{\Gamma_{\beta}}\left(\nu_{i}\right)}{\gamma_{i\beta}}, & \alpha,\beta\in\mathcal{A}_{i},\ i\in I,\\ \sum\limits_{\alpha\in\mathcal{A}_{i}}\left[n_{i\alpha}b|_{\Gamma_{\alpha}}\left(\nu_{i}\right)m|_{\Gamma_{\alpha}}\left(\nu_{i}\right)+\mu_{\alpha}\partial_{\alpha}m\left(\nu_{i}\right)\right] = 0, & i\in I,
 \end{cases}
    $
    (49)

    where $ b\in PC\left(\Gamma\right) $, with

    $ m\ge0,\qquad\int_{\Gamma}m\, dx = 1.
    $
    (50)

    First of all, let $ \lambda_0 $ be a nonnegative constant; for all $ h\in V' $, we introduce the modified boundary value problem

    $ \begin{cases} \lambda_{0}m-\mu_{\alpha}\partial^{2}m-\partial\left(bm\right) = h, & \text{in }\Gamma_{\alpha}\backslash\mathcal{V},\ \alpha\in\mathcal{A},\\ \dfrac{m|_{\Gamma_{\alpha}}\left(\nu_{i}\right)}{\gamma_{i\alpha}} = \dfrac{m|_{\Gamma_{\beta}}\left(\nu_{i}\right)}{\gamma_{i\beta}}, & \alpha,\beta\in\mathcal{A}_{i},\ i\in I,\\ \sum\limits_{\alpha\in\mathcal{A}_{i}}\left[n_{i\alpha}b|_{\Gamma_{\alpha}}\left(\nu_{i}\right)m|_{\Gamma_{\alpha}}\left(\nu_{i}\right)+\mu_{\alpha}\partial_{\alpha}m\left(\nu_{i}\right)\right] = 0, & i\in I.
 \end{cases}
    $
    (51)

    Definition 2.6. For $ \lambda\in \mathbb R $, consider the bilinear form $ \mathscr{A}_{\lambda}:W\times V\rightarrow\mathbb{R} $ defined by

    $ \mathscr{A}_{\lambda}\left(m,v\right) = \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\left[\lambda mv+\left(\mu_{\alpha}\partial m+bm\right)\partial v\right]dx. $

    A weak solution of (51) is a function $ m\in W $ such that

    $ \mathscr{A}_{\lambda_{0}}(m,v) = \langle h,v\rangle _{V',V},\quad \text{for all }v\in V. $

    A weak solution of (49) is a function $ m\in W $ such that

    $ \mathscr{A}_{0}\left(m,v\right): = \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\left(\mu_{\alpha}\partial m+bm\right)\partial v\, dx = 0,\quad\text{for all }v\in V.
    $
    (52)

    Remark 16. Formally, to get (52), we multiply the first line of (49) by $ v\in V $, integrate by parts, sum over $ \alpha\in\mathcal{A} $ and use the third line of (49) to see that there is no contribution from the vertices.

    Theorem 2.7. For any $ b\in PC\left(\Gamma\right) $,

    ● (Existence) There exists a solution $ \widehat{m} \in W $ of (49)-(50) satisfying

    $ \left\Vert \widehat{m}\right\Vert _{W}\le C,\qquad0\le\widehat{m}\le C,
    $
    (53)

    where the constant $ C $ depends only on $ \left\Vert b\right\Vert _{\infty} $ and $ \left\{ \mu_{\alpha}\right\} _{\alpha\in\mathcal{A}} $. Moreover, $ \widehat{m}_{\alpha}\in C^{1}\left(\left[0,\ell_{\alpha}\right]\right) $ for all $ \alpha\in\mathcal{A} $. Hence, $ \widehat{m}\in\mathcal{W} $.

    ● (Uniqueness) $ \widehat{m} $ is the unique solution of (49)-(50).

    ● (Strictly positive solution) $ \widehat{m} $ is strictly positive.

    Proof of existence in Theorem 2.7. We divide the proof of existence into three steps:

    Step 1. Let $ \lambda_{0} $ be a large positive constant that will be chosen later. We claim that for $ \overline{m}\in L^2(\Gamma) $ and $ h: = \lambda_0 \overline{m} \in L^2(\Gamma)\subset V' $, (51) has a unique solution $ m\in W $. This allows us to define a linear operator as follows:

    $ T:L^{2}\left(\Gamma\right) \longrightarrow W,\quad\quad T(\overline m) = m, $

    where $ m $ is the solution of (51) with $ h = \lambda_0 \overline{m} $. We are going to prove that $ T $ is well-defined and continuous, i.e., for all $ \overline{m}\in L^{2}\left(\Gamma\right) $, (51) has a unique solution that depends continuously on $ \overline{m} $. For $ w\in W $, set $ \widehat{v}: = w\phi\in V $ where $ \phi $ is given by Definition 1.3. We have

    $ \mathscr{A}_{\lambda_{0}}\left(w,\hat{v}\right) = \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\left[\lambda_{0}\phi w^{2}+\left(\mu_{\alpha}\partial w+bw\right)\partial\left(w\phi\right)\right]dx = \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\left[\left(\lambda_{0}\phi+b\partial\phi\right)w^{2}+\left(\mu_{\alpha}\partial\phi+b\phi\right)w\partial w+\mu_{\alpha}\phi\left(\partial w\right)^{2}\right]dx.
    $

    It follows that when $ \lambda_{0} $ is large enough (larger than a constant that only depends on $ b,\phi $ and $ \mu_{\alpha} $), $ \mathscr{A}_{\lambda_{0}}\left(w,\widehat{v}\right)\ge\widehat{C}_{\lambda_{0}}\left\Vert w\right\Vert _{W}^{2} $ for some positive constant $ \widehat{C}_{\lambda_{0}} $. Moreover, by Remark 9, there exists a positive constant $ C_{\phi} $ such that for all $ w\in W $, we have $ \left\Vert w\phi\right\Vert _{V}\le C_{\phi}\left\Vert w\right\Vert _{W} $. This yields

    $ \inf\limits_{w\in W}\sup\limits_{v\in V}\dfrac{\mathscr{A}_{\lambda_{0}}\left(w,v\right)}{\left\Vert v\right\Vert _{V}\left\Vert w\right\Vert _{W}}\ge\dfrac{\widehat{C}_{\lambda_{0}}}{C_{\phi}}. $

    Using similar arguments, for $ \lambda_{0} $ large enough, there exist two positive constants $ C_{\lambda_{0}} $ and $ C_{\psi} $ such that

    $ \inf\limits_{v\in V}\sup\limits_{w\in W}\dfrac{\mathscr{A}_{\lambda_{0}}\left(w,v\right)}{\left\Vert w\right\Vert _{W}\left\Vert v\right\Vert _{V}}\ge\dfrac{C_{\lambda_{0}}}{C_{\psi}}. $

    From the Banach-Necas-Babuška lemma (see [13]), there exists a constant $ \overline{C} $ such that for all $ \overline{m}\in L^{2}\left(\Gamma\right) $, there exists a unique solution $ m $ of (51) with $ h = \lambda_0 \overline{m} $ and $ \left\Vert m\right\Vert _{W}\le\overline{C}\left\Vert \overline{m}\right\Vert _{L^{2}\left(\Gamma\right)} $. Hence, the map $ T $ is well-defined and continuous from $ L^{2}\left(\Gamma\right) $ to $ W $.

    Step 2. Let $ K $ be the set defined by

    $ K: = \left\{ m\in L^{2}\left(\Gamma\right):m\ge0\text{ and }\int_{\Gamma}mdx = 1\right\} . $

    We claim that $ T\left(K\right)\subset K $, i.e., that $ m = T(\overline{m}) $ satisfies $ \int_{\Gamma}m\, dx = 1 $ and $ m\ge0 $ whenever $ \overline{m}\in K $. Indeed, using $ v = 1 $ as a test-function in (51), we have $ \int_{\Gamma}mdx = \int_{\Gamma}\overline{m}dx = 1 $. Next, consider the negative part $ m^- $ of $ m $ defined by $ m^-(x) = - \mathbb{1}_{\{m(x)<0\}} m(x) $. Notice that $ m^{-}\in W $ and $ m^{-}\phi\in V $, where $ \phi $ is given by Definition 1.3. Using $ m^{-}\phi $ as a test-function in (51) yields

    $ -\sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\left[\left(\lambda_{0}\phi+b\partial\phi\right)\left(m^{-}\right)^{2}+\mu_{\alpha}\left(\partial m^{-}\right)^{2}\phi+\left(\mu_{\alpha}\partial\phi+b\phi\right)m^{-}\partial m^{-}\right]dx = \int_{\Gamma}\lambda_{0}\overline{m}m^{-}\phi\, dx.
    $

    We can see that the right hand side is non-negative. Moreover, for $ \lambda_{0} $ large enough (larger than the same constant as above, which only depends on $ b,\phi $ and $ \mu_{\alpha} $), the left hand side is non-positive. This implies that $ m^{-} = 0 $, and hence $ m\ge 0 $. Therefore, the claim is proved.

    Step 3. We claim that $ T $ has a fixed point. Let us now focus on the case when $ \overline m \in K $. Using $ m\phi $ as a test function in (51) yields

    $ \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\left[\left(\lambda_{0}\phi+b\partial\phi\right)m^{2}+\mu_{\alpha}\left(\partial m\right)^{2}\phi+\left(\mu_{\alpha}\partial\phi+b\phi\right)m\partial m\right]dx = \int_{\Gamma}\lambda_{0}\overline{m}m\phi\, dx.
    $
    (54)

    Since $ H^{1}\left(0,\ell_{\alpha}\right) $ is continuously embedded in $ L^{\infty}\left(0,\ell_{\alpha}\right) $, there exists a positive constant $ C $ (independent of $ \overline m \in K $) such that

    $ \int_{\Gamma}\overline{m}m\phi dx\le\int_{\Gamma}\overline{m}dx\left\Vert m\right\Vert _{L^{\infty}\left(\Gamma\right)}\overline{\phi} = \Vert m \Vert _{L^{\infty}\left(\Gamma\right)} \overline{\phi} \le C\left\Vert m\right\Vert _{W}. $

    Hence, from (54), for $ \lambda_{0} $ large enough, there exists a positive constant $ C_{1} $ such that $ C_{1}\left\Vert m\right\Vert _{W}^{2}\le\lambda_{0}C\left\Vert m\right\Vert _{W} $. Thus

    $ \left\Vert m\right\Vert _{W}\le\dfrac{\lambda_{0}C}{C_{1}}.
    $
    (55)

    Therefore, $ T\left(K\right) $ is bounded in $ W $. Since the bounded subsets of $ W $ are relatively compact in $ L^{2}\left(\Gamma\right) $, $ \overline{T\left(K\right)} $ is compact in $ L^{2}\left(\Gamma\right) $. Moreover, we can see that $ K $ is closed and convex in $ L^{2}\left(\Gamma\right) $. By the Schauder fixed point theorem, see [18, Corollary 11.2], $ T $ has a fixed point $ \widehat{m}\in K $ which is also a solution of (49) and $ \left\Vert \widehat{m}\right\Vert _{W}\le \lambda_{0}C/C_{1} $.

    Finally, from the differential equation in (49), for all $ \alpha\in\mathcal{A} $, $ \left(\widehat{m}_{\alpha}'+b_{\alpha}\widehat{m}_{\alpha}\right)' = 0 $ on $ (0,\ell_{\alpha}) $. Hence, there exists a constant $ C_{\alpha} $ such that

    $ \widehat{m}_{\alpha}'+b_{\alpha}\widehat{m}_{\alpha} = C_{\alpha},\quad\text{for all }x\in\left(0,\ell_{\alpha}\right).
    $
    (56)

    It follows that $ \widehat{m}'_{\alpha}\in C( [0,\ell_{\alpha}]) $, for all $ \alpha\in\mathcal{A} $. Hence $ \widehat{m}_{\alpha}\in C^{1}([0,\ell_{\alpha}]) $ for all $ \alpha\in\mathcal{A} $. Thus, $ \widehat{m}\in \mathcal{W} $.
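    On a single edge, (56) can be integrated in closed form: with $ C_{\alpha} = 0 $ one gets $ \widehat{m}_{\alpha}\left(y\right) = \widehat{m}_{\alpha}\left(0\right)\exp\left(-\int_{0}^{y}b_{\alpha}\left(s\right)ds\right) $, which is positive. The sketch below (an illustration only; the coefficient $ b $ is an arbitrary choice) checks this formula numerically against the ODE and the normalization (50):

```python
import numpy as np

# Closed-form solution of (56) with C_alpha = 0 on a single edge [0, 1]:
# m' + b*m = 0 integrates to m(y) = m(0) * exp(-B(y)), B(y) = int_0^y b(s) ds.
# The coefficient b is an arbitrary smooth choice for this illustration.
def b(y):
    return np.cos(3.0 * y) + 0.5

y = np.linspace(0.0, 1.0, 2001)
h = np.diff(y)
# Antiderivative B by the trapezoidal rule, then the explicit solution.
B = np.concatenate(([0.0], np.cumsum(0.5 * (b(y[1:]) + b(y[:-1])) * h)))
m = np.exp(-B)
m /= np.sum(0.5 * (m[1:] + m[:-1]) * h)      # normalization (50): total mass 1
mass = np.sum(0.5 * (m[1:] + m[:-1]) * h)
residual = np.gradient(m, y, edge_order=2) + b(y) * m
print(m.min(), mass, np.max(np.abs(residual)))
```

    The density stays strictly positive, integrates to one, and satisfies the first-order equation up to discretization error, in line with the positivity statement of Theorem 2.7.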

    Remark 17. Let $ m\in W $ be a solution of (49). If $ b,\partial b\in PC\left(\Gamma\right) $, standard arguments yield that $ m_{\alpha}\in C^{2} ([0,\ell_{\alpha}]) $ for all $ \alpha\in\mathcal{A} $. Moreover, by Theorem 2.7, there exists a constant $ C $ which depends only on $ \left\Vert b\right\Vert _{\infty},\left\{ \left\Vert \partial b_{\alpha}\right\Vert _{\infty}\right\} _{\alpha\in\mathcal{A}} $ and $ \mu_{\alpha} $ such that $ \left\Vert m_{\alpha}\right\Vert _{C^{2}(0,\ell_{\alpha})}\le C $ for all $ \alpha\in\mathcal{A} $.

    Proof of the positivity in Theorem 2.7. From (50), $ \widehat{m} $ is non-negative on $ \Gamma $. Assume by contradiction that there exists $ x_{0}\in\Gamma_{\alpha} $ for some $ \alpha\in\mathcal{A} $ such that $ \widehat{m}|_{\Gamma_{\alpha}}\left(x_{0}\right) = 0 $. Therefore, the minimum of $ \widehat{m} $ over $ \Gamma $ is achieved at $ x_{0} \in \Gamma_{\alpha} $. If $ x_{0}\in \Gamma_{\alpha}\backslash \mathcal{V} $, then $ \partial \widehat{m}(x_{0}) = 0 $. In (56), we thus have $ C_{\alpha} = 0 $, and hence $ \widehat{m}_{\alpha} $ satisfies

    $ \widehat{m}_{\alpha}'+b_{\alpha}\widehat{m}_{\alpha} = 0,\quad \text{on }\left[0,\ell_{\alpha}\right], $

    with $ \widehat{m}_{\alpha}\left(\pi_{\alpha} ^{-1}(x_{0})\right) = 0 $. It follows that $ \widehat m_{\alpha}\equiv 0 $ and $ \widehat{m}|_{{\Gamma_{\alpha}}}(\nu_{i}) = \widehat{m}|_{{\Gamma_{\alpha}}}(\nu_{j}) = 0 $ if $ \Gamma_\alpha = [\nu_{i},\nu_{j}] $.

    Therefore, it is enough to consider $ x_{0} \in \mathcal{V} $.

    Now, from Remark 1, we may assume without loss of generality that $ x_{0} = \nu_{i} $ and $ \pi_{\beta}(\nu_{i}) = 0 $ for all $ \beta \in \mathcal{A}_i $. We have the following two cases.

    Case 1. If $ x_{0} = \nu_{i} $ is a transition vertex then, since $ \widehat{m} $ belongs to $ W $, we get

    $ \widehat{m}|_{\Gamma_{\beta}}\left(\nu_{i}\right) = \dfrac{\gamma_{i\beta}}{\gamma_{i\alpha}}\widehat{m}|_{\Gamma_{\alpha}}\left(\nu_{i}\right) = 0,\quad\text{for all }\beta\in\mathcal{A}_{i}.
    $
    (57)

    This yields that $ \nu_{i} $ is also a minimum point of $ \widehat{m}|_{\Gamma_{\beta}} $ for all $ \beta\in\mathcal{A}_{i} $. Thus $ \partial_{\beta}\widehat{m}\left(\nu_{i}\right)\le0 $ for all $ \beta\in\mathcal{A}_{i} $. From the transmission condition in (49) which has a classical meaning from the regularity of $ \widehat{m} $, $ \partial_{\beta}\widehat{m}\left(\nu_{i}\right) = 0 $, since all the coefficients $ \mu_{\beta} $ are positive. From (56), for all $ \beta\in\mathcal{A}_{i} $, we have

    $ C_{\beta} = \widehat{m}'_{\beta}(0)+b_{\beta}(0)\widehat{m}_{\beta}(0) = 0. $

    Therefore, $ \widehat{m}'_{\beta}(y)+b_{\beta}(y)\widehat{m}_{\beta}(y) = 0 $, for all $ y\in [0,\ell_{\beta}] $ with $ \widehat{m}_\beta(0) = 0 $. This implies that $ \widehat{m}_{\beta}\equiv 0 $ for all $ \beta \in \mathcal{A}_i $. We can propagate the arguments from the vertices connected to $ \nu_i $. Since $ \Gamma $ is connected, we obtain that $ \widehat{m}\equiv 0 $ on $ \Gamma $.

    Case 2. If $ x_{0} = \nu_{i} $ is a boundary vertex, then the Robin condition in (49) implies that $ \partial_{\alpha}\widehat{m}\left(\nu_{i}\right) = 0 $ since $ \mu_{\alpha} $ is positive. From (56), we have $ C_{\alpha} = 0 $. Therefore, $ \widehat{m}'_{\alpha}(y)+b_{\alpha}(y)\widehat{m}_{\alpha}(y) = 0 $, for all $ y\in [0,\ell_{\alpha}] $ with $ \widehat{m}_\alpha(0) = 0 $. This implies that $ \widehat{m}\left(\nu_{j}\right) = 0 $ where $ \nu_{j} $ is the other endpoint of $ \Gamma_{\alpha} $. We are back to Case 1, so $ \widehat{m}\equiv 0 $ on $ \Gamma $.

    Finally, we have found that $ \widehat{m}\equiv 0 $ on $ \Gamma $, in contradiction with $ \int_{\Gamma}\widehat{m}dx = 1 $.

    Now we prove uniqueness for (49)-(50).

    Proof of uniqueness in Theorem 2.7. The proof of uniqueness is similar to the argument in [7,Proposition 13]. As in the proof of Lemma 2.3, we can prove that for $ \lambda_{0} $ large enough, there exists a constant $ C $ such that for any $ f\in V' $, there exists a unique $ w\in W $ which satisfies

    $ \mathscr{A}_{\lambda_{0}}\left(w,v\right) = \left\langle f,v\right\rangle _{V',V},\quad\text{for all }v\in V,
    $
    (58)

    and $ \left\Vert w\right\Vert _{W}\le C\left\Vert f\right\Vert _{V'} $. This allows us to define the continuous linear operator

    $ S_{\lambda_{0}}:L^{2}\left(\Gamma\right)\rightarrow W,\qquad f\longmapsto w,
    $

    where $ w $ is a solution of (58). Then we define $ R_{\lambda_{0}} = \mathcal{J}\circ S_{\lambda_{0}} $ where $ \mathcal{J} $ is the compact injection from $ W $ into $ L^{2}\left(\Gamma\right) $. Obviously, $ R_{\lambda_{0}} $ is a compact operator from $ L^{2}\left(\Gamma\right) $ into $ L^{2}\left(\Gamma\right) $. Moreover, $ m\in W $ is a solution of (49) if and only if $ m\in\ker\left(Id-\lambda_{0}R_{\lambda_{0}}\right) $. By the Fredholm alternative, see [18], $ \dim\ker\left(Id-\lambda_{0}R_{\lambda_{0}}\right) = \dim\ker\left(Id-\lambda_{0}R_{\lambda_{0}}^{\star}\right) $.

    In order to characterize $ R_{\lambda_{0}}^{\star} $, we now consider the following boundary value problem for $ g\in L^2(\Gamma)\subset W' $:

    $ \begin{cases} \lambda_{0}v-\mu_{\alpha}\partial^{2}v+b\partial v = g, & \text{in }\Gamma_{\alpha}\backslash\mathcal{V},\ \alpha\in\mathcal{A},\\ v|_{\Gamma_{\alpha}}\left(\nu_{i}\right) = v|_{\Gamma_{\beta}}\left(\nu_{i}\right), & \alpha,\beta\in\mathcal{A}_{i},\ i\in I,\\ \sum\limits_{\alpha\in\mathcal{A}_{i}}\gamma_{i\alpha}\mu_{\alpha}\partial_{\alpha}v\left(\nu_{i}\right) = 0, & i\in I. \end{cases}
    $
    (59)

    A weak solution of (59) is a function $ v\in V $ such that

    $ \mathscr{T}_{\lambda_{0}}\left(v,w\right): = \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}(\lambda_{0}vw+\mu_{\alpha}\partial v\partial w+bw\partial v)dx = \int_{\Gamma}gwdx,\quad\text{for all }w\in W. $

    Using arguments similar to those in the proof of existence in Theorem 2.7, we see that for $ \lambda_0 $ large enough and all $ g\in L^{2}\left(\Gamma\right) $, there exists a unique solution $ v\in V $ of (59). Moreover, there exists a constant $ C $ such that $ \left\Vert v\right\Vert _{V}\le C\left\Vert g\right\Vert _{L^{2}\left(\Gamma\right)} $ for all $ g\in L^{2}\left(\Gamma\right) $. This allows us to define a continuous operator

    $ T_{\lambda_{0}}:L^{2}\left(\Gamma\right)\rightarrow V,\qquad g\longmapsto v.
    $

    Then we define $ \tilde{R}_{\lambda_{0}} = \mathcal{I}\circ T_{\lambda_{0}} $ where $ \mathcal{I} $ is the injection from $ V $ into $ L^{2}\left(\Gamma\right) $. Since $ \mathcal{I} $ is compact, $ \tilde{R}_{\lambda_{0}} $ is a compact operator from $ L^{2}\left(\Gamma\right) $ into $ L^{2}\left(\Gamma\right) $. For any $ g\in L^2(\Gamma) $, set $ v = T_{\lambda_{0}}g $. Noticing that $ \mathscr{T}_{\lambda_{0}}(v,w) = \mathscr{A}_{\lambda_{0}}(w,v) $ for all $ v\in V,w\in W $, we obtain that

    $ \left(g,R_{\lambda_{0}}f\right)_{L^{2}\left(\Gamma\right)} = \mathscr{T}_{\lambda_{0}}\left(v,S_{\lambda_{0}}f\right) = \mathscr{A}_{\lambda_{0}}\left(S_{\lambda_{0}}f,v\right) = \left(f,v\right)_{L^{2}\left(\Gamma\right)} = (f,\tilde{R}_{\lambda_{0}}g)_{L^{2}\left(\Gamma\right)}. $

    Thus $ R_{\lambda_{0}}^{\star} = \tilde{R}_{\lambda_{0}} $. But $ \ker\left(Id-\lambda_{0}\tilde R_{\lambda_{0}}\right) $ is the set of solutions of (48), which, from Lemma 2.5, consists of constant functions on $ \Gamma $.

    This implies that $ \dim \ker\left(Id-\lambda_{0}R_{\lambda_{0}}^{\star}\right) = 1 $ and then that $ \dim\ker\left(Id-\lambda_{0}R_{\lambda_{0}}\right) = \dim\ker\left(Id-\lambda_{0}R_{\lambda_{0}}^{\star}\right) = 1 $.

    Finally, since the solutions $ m $ of (49) are in $ \ker\left(Id-\lambda_{0}R_{\lambda_{0}}\right) $ and satisfy the normalization condition $ \int_{\Gamma}mdx = 1 $, we obtain the desired uniqueness property in Theorem 2.7.

    This section is devoted to the following boundary value problem including a Hamilton-Jacobi equation:

    $ \begin{cases} -\mu_{\alpha}\partial^{2}v+H\left(x,\partial v\right)+\lambda v = 0, & \text{in }\Gamma_{\alpha}\backslash\mathcal{V},\ \alpha\in\mathcal{A},\\ v|_{\Gamma_{\alpha}}\left(\nu_{i}\right) = v|_{\Gamma_{\beta}}\left(\nu_{i}\right), & \alpha,\beta\in\mathcal{A}_{i},\ i\in I,\\ \sum\limits_{\alpha\in\mathcal{A}_{i}}\gamma_{i\alpha}\mu_{\alpha}\partial_{\alpha}v\left(\nu_{i}\right) = 0, & i\in I, \end{cases}
    $
    (60)

    where $ \lambda $ is a positive constant and the Hamiltonian $ H:\Gamma\times\mathbb{R}\rightarrow\mathbb{R} $ is defined in Section 1, except that, in (60) and the whole Section 3.1 below, the Hamiltonian contains the coupling term, i.e., $ H\left(x,\partial v\right) $ in (60) plays the role of $ H\left(x,\partial v\right)-F\left(m\left(x\right)\right) $ in (24).

    Definition 3.1  ● A classical solution of (60) is a function $ v\in C^{2}\left(\Gamma\right) $ which satisfies (60) pointwise.

    ● A weak solution of (60) is a function $ v\in V $ such that

    $ \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\left(\mu_{\alpha}\partial v\partial w+H\left(x,\partial v\right)w+\lambda vw\right)dx = 0\quad\text{for all }w\in W. $

    Proposition 1. Assume that

    $ H_{\alpha}\in C\left(\left[0,\ell_{\alpha}\right]\times\mathbb{R}\right),
    $
    (61)
    $ \left|H\left(x,p\right)\right|\le C_{2}\left(1+\left|p\right|^{2}\right)\quad\text{for all }x\in\Gamma,\ p\in\mathbb{R},
    $
    (62)

    where $ C_{2} $ is a positive constant. There exists a classical solution $ v $ of (60). Moreover, if $ H_{\alpha} $ is locally Lipschitz with respect to both variables for all $ \alpha\in\mathcal{A} $, then the solution $ v $ belongs to $ C^{2,1}\left(\Gamma\right) $.

    Remark 18. Assume (61) and that $ v\in H^2(\Gamma)\subset V $ is a weak solution of (60). From the compact embedding of $ H^{2}\left(0,\ell_{\alpha}\right) $ into $ C^{1,\sigma}([0,\ell_{\alpha}]) $ for all $ \sigma\in (0,1/2) $, we get $ v\in C^{1,\sigma}(\Gamma) $. Therefore, from the differential equation in (60) $ \mu_{\alpha}\partial ^2 v_{\alpha}(\cdot) = H_\alpha(\cdot,\partial v_{\alpha}(\cdot))+\lambda v_{\alpha}(\cdot)\in C([0,\ell_{\alpha}]) $. It follows that $ v $ is a classical solution of (60).

    Remark 19. Assume now that $ H $ is locally Lipschitz continuous and that $ v\in H^2(\Gamma)\subset V $ is a weak solution of (60). From Remark 18, $ v \in C^{1,\sigma}(\Gamma) $ for $ \sigma\in (0,1/2) $ and the function $ -\lambda v_{\alpha}-H_{\alpha}\left(\cdot,\partial v_{\alpha}\right) $ belongs to $ C^{0,\sigma}([0,\ell_\alpha]) $. Then, from the first line of (60), $ v \in C^{2,\sigma}(\Gamma) $. This implies that $ \partial v_\alpha\in {\rm{Lip}}[0,\ell_\alpha] $ and using the differential equation again, we see that $ v\in C^{2,1}(\Gamma) $.

    Let us start with the case when $ H $ is a bounded Hamiltonian.

    Lemma 3.2. Assume (61) and for some $ C_{H}>0 $,

    $ \left|H\left(x,p\right)\right|\le C_{H},\quad\text{for all }\left(x,p\right)\in\Gamma\times\mathbb{R}.
    $
    (63)

    There exists a classical solution $ v $ of (60). Moreover, if $ H_{\alpha} $ is locally Lipschitz in $ [0,\ell_\alpha]\times \mathbb R $ for all $ \alpha\in\mathcal{A} $ then the solution $ v $ belongs to $ C^{2,1}\left(\Gamma\right) $.

    Proof of Lemma 3.2. For any $ u\in V $, from Lemma 2.3, the following boundary value problem:

    $ \begin{cases} -\mu_{\alpha}\partial^{2}v+\lambda v = -H\left(x,\partial u\right), & \text{if }x\in\Gamma_{\alpha}\backslash\mathcal{V},\ \alpha\in\mathcal{A},\\ v|_{\Gamma_{\alpha}}\left(\nu_{i}\right) = v|_{\Gamma_{\beta}}\left(\nu_{i}\right), & \alpha,\beta\in\mathcal{A}_{i},\ i\in I,\\ \sum\limits_{\alpha\in\mathcal{A}_{i}}\gamma_{i\alpha}\mu_{\alpha}\partial_{\alpha}v\left(\nu_{i}\right) = 0, & i\in I, \end{cases}
    $
    (64)

    has a unique weak solution $ v\in V $. This allows us to define the map $ T: V \longrightarrow V $ by $ T(u): = v $. Moreover, from Lemma 2.3, there exists a constant $ C $ such that

    $ \left\Vert v\right\Vert _{V}\le C\left\Vert H\left(\cdot,\partial u\right)\right\Vert _{L^{2}\left(\Gamma\right)}\le CC_{H}\left|\Gamma\right|^{1/2},
    $
    (65)

    where $ |\Gamma| = \sum_{\alpha\in\mathcal{A}}\ell_{\alpha} $. Therefore, from the differential equation in (64),

    $ \underline{\mu}\left\Vert \partial^{2}v\right\Vert _{L^{2}\left(\Gamma\right)}\le\lambda\left\Vert v\right\Vert _{L^{2}\left(\Gamma\right)}+\left\Vert H\left(\cdot,\partial u\right)\right\Vert _{L^{2}\left(\Gamma\right)}\le\lambda\left\Vert v\right\Vert _{V}+C_{H}\left|\Gamma\right|^{1/2}\le\left(\lambda C+1\right)C_{H}\left|\Gamma\right|^{1/2},
    $
    (66)

    where $ \underline{\mu}: = \min_{\alpha\in\mathcal{A}}\mu_{\alpha} $. From (65) and (66), $ T\left(V\right) $ is a bounded subset of $ H^{2}\left(\Gamma\right) $, see Definition 1.1. From the compact embedding of $ H^{2}\left(\Gamma\right) $ into $ V $, we deduce that $ \overline{T\left(V\right)} $ is a compact subset of $ V $.

    Next, we claim that $ T $ is continuous from $ V $ to $ V $. Assuming that

    $ \begin{cases} u_{n}\rightarrow u, & \text{in }V,\\ v_{n} = T\left(u_{n}\right), & \text{for all }n,\\ v = T\left(u\right), \end{cases}
    $
    (67)

    we need to prove that $ v_{n}\rightarrow v $ in $ V $. Since $ \left\{ v_{n}\right\} $ is uniformly bounded in $ H^{2}\left(\Gamma\right) $, up to the extraction of a subsequence, $ v_{n}\rightarrow\widehat{v} $ in $ C^{1,\sigma}\left(\Gamma\right) $ for some $ \sigma\in (0,1/2) $. From (67), we have that $ \partial u_{n}\rightarrow\partial u $ in $ L^{2}\left(\Gamma_{\alpha}\right) $ for all $ \alpha\in\mathcal{A} $. This yields that, up to another extraction of a subsequence, $ \partial u_{n}\rightarrow\partial u $ almost everywhere in $ \Gamma_{\alpha} $. Thus $ H\left(x,\partial u_{n}\right)\rightarrow H\left(x,\partial u\right) $ in $ L^{2}\left(\Gamma_{\alpha}\right) $ by the Lebesgue dominated convergence theorem. Hence, $ \widehat{v} $ is a weak solution of (64). Since the latter is unique, $ \widehat{v} = v $ and we can conclude that the whole sequence $ v_{n} $ converges to $ v $. The claim is proved.

    From the Schauder fixed point theorem, see [18, Corollary 11.2], $ T $ admits a fixed point, which is a weak solution of (60). Moreover, recalling that $ v\in H^2(\Gamma) $, we obtain that $ v $ is a classical solution of (60) from Remark 18.

    Assume now that $ H $ is locally Lipschitz. Since $ v_{\alpha}\in H^{2}\left(0,\ell_{\alpha}\right) $ for all $ \alpha\in\mathcal{A} $, we may use Remark 19 and obtain that $ v\in C^{2,1}\left(\Gamma\right) $.

    Lemma 3.3. Assume (61). If $ v,u\in C^{2}\left(\Gamma\right) $ satisfy

    $ \begin{cases} -\mu_{\alpha}\partial^{2}v+H\left(x,\partial v\right)+\lambda v\ge-\mu_{\alpha}\partial^{2}u+H\left(x,\partial u\right)+\lambda u, & \text{if }x\in\Gamma_{\alpha}\setminus\mathcal{V},\ \alpha\in\mathcal{A},\\ \sum_{\alpha\in\mathcal{A}_{i}}\gamma_{i\alpha}\mu_{\alpha}\partial_{\alpha}v\left(\nu_{i}\right)\ge\sum_{\alpha\in\mathcal{A}_{i}}\gamma_{i\alpha}\mu_{\alpha}\partial_{\alpha}u\left(\nu_{i}\right), & \text{if }\nu_{i}\in\mathcal{V}, \end{cases} $
    (68)

    then $ v\ge u $.

    Proof of Lemma 3.3. The proof is reminiscent of an argument in [8]. Suppose by contradiction that $ \delta: = \max_\Gamma \left\{ u-v\right\} >0 $. Let $ x_{0}\in\Gamma_{\alpha} $ be a maximum point of $ u-v $. It suffices to consider the case when $ x_0 \in \mathcal{V} $, since if $ x_0 \in \Gamma \backslash \mathcal{V} $, then $ u\left(x_{0}\right)>v\left(x_{0}\right) $, $ \partial u\left(x_{0}\right) = \partial v\left(x_{0}\right) $, $ \partial^{2}u\left(x_{0}\right)\le\partial^{2}v\left(x_{0}\right) $, and we obtain a contradiction with the first line of (68).

    Now consider the case when $ x_{0} = \nu_{i}\in \mathcal{V} $; from Remark 1, we can assume without restriction that $ \pi_{\alpha}\left(0\right) = \nu_{i} $. Since $ u-v $ achieves its maximum over $ \Gamma $ at $ \nu_{i} $, we obtain that $ \partial_{\beta}u\left(\nu_{i}\right) \ge\partial_{\beta}v\left(\nu_{i}\right) $, for all $ \beta\in\mathcal{A}_{i} $. From Kirchhoff conditions in (68), this implies that $ \partial_{\beta}u\left(\nu_{i}\right) = \partial_{\beta}v\left(\nu_{i}\right) $, for all $ \beta\in\mathcal{A}_{i} $. It follows that $ \partial v_{\alpha}(0) = \partial u_{\alpha}(0) $. Using the first line of (68), we get that

    $ -\mu_{\alpha}\left[\partial^{2}v_{\alpha}(0)-\partial^{2}u_{\alpha}(0)\right]\ge\underset{ = 0}{\underbrace{H_{\alpha}\left(0,\partial u_{\alpha}(0)\right)-H_{\alpha}\left(0,\partial v_{\alpha}(0)\right)}}+\lambda\left(u_{\alpha}(0)-v_{\alpha}(0)\right) > 0. $

    Therefore, $ u_{\alpha}-v_{\alpha} $ is locally strictly convex in $ [0,\ell_{\alpha}] $ near $ 0 $ and its first order derivative vanishes at $ 0 $. This contradicts the fact that $ \nu_{i} $ is the maximum point of $ u-v $.
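    To spell out the final contradiction (a routine verification added here for the reader's convenience): set $ e: = u_{\alpha}-v_{\alpha} $, so that $ e'(0) = 0 $ and, by continuity, $ e''>0 $ on $ [0,t_{0}] $ for some small $ t_{0}>0 $; a second-order Taylor expansion then gives

    $ e\left(t\right) = e\left(0\right)+t\,e'\left(0\right)+\dfrac{t^{2}}{2}\,e''\left(\theta_{t}t\right) = e\left(0\right)+\dfrac{t^{2}}{2}\,e''\left(\theta_{t}t\right)>e\left(0\right),\qquad\theta_{t}\in\left(0,1\right),\ t\in\left(0,t_{0}\right], $

    which contradicts the maximality of $ e\left(0\right) = \left(u-v\right)\left(\nu_{i}\right) $.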

    We now turn to Proposition 1.

    Proof of Proposition 1. We adapt the classical proof of Boccardo, Murat and Puel in [6]. First of all, we truncate the Hamiltonian as follows:

    $ H_{n}\left(x,p\right) = \begin{cases} H\left(x,p\right), & \text{if }\left|p\right|\le n,\\ H\left(x,n\dfrac{p}{\left|p\right|}\right), & \text{if }\left|p\right|>n. \end{cases} $

    By Lemma 3.2, for all $ n\in\mathbb{N} $, since $ H_{n}\left(x,p\right) $ is continuous and bounded by $ C_{2}\left(1+n^{2}\right) $, there exists a classical solution $ v_{n}\in C^{2}\left(\Gamma\right) $ for the following boundary value problem

    $ \begin{cases} -\mu_{\alpha}\partial^{2}v+H_{n}\left(x,\partial v\right)+\lambda v = 0, & x\in\Gamma_{\alpha}\setminus\mathcal{V},\ \alpha\in\mathcal{A},\\ v|_{\Gamma_{\alpha}}\left(\nu_{i}\right) = v|_{\Gamma_{\beta}}\left(\nu_{i}\right), & \text{for all }\alpha,\beta\in\mathcal{A}_{i},\ i\in I,\\ \sum_{\alpha\in\mathcal{A}_{i}}\gamma_{i\alpha}\mu_{\alpha}\partial_{\alpha}v\left(\nu_{i}\right) = 0, & i\in I. \end{cases} $
    (69)

    We wish to pass to the limit as $ n $ tends to $ +\infty $; we first need to estimate $ v_{n} $ uniformly in $ n $, successively in $ L^{\infty}\left(\Gamma\right) $, $ H^{1}\left(\Gamma\right) $ and $ H^{2}\left(\Gamma\right) $.

    Estimate in $ L^{\infty}\left(\Gamma\right) $. Since $ \left|H_{n}\left(x,p\right)\right|\le c\left(1+\left|p\right|^{2}\right) $ for all $ x,p $, the constant functions $ \varphi = -c/\lambda $ and $ \overline{\varphi} = c/\lambda $ are respectively a sub- and a super-solution of (69). Therefore, from Lemma 3.3, we obtain $ |\lambda v_n|\le c $.
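    Indeed, this is a direct check, spelled out here for the reader's convenience: constants trivially satisfy the continuity and Kirchhoff conditions, and since $ \left|H_{n}\left(x,0\right)\right|\le c $,

    $ -\mu_{\alpha}\partial^{2}\overline{\varphi}+H_{n}\left(x,\partial\overline{\varphi}\right)+\lambda\overline{\varphi} = H_{n}\left(x,0\right)+c\ge0,\qquad -\mu_{\alpha}\partial^{2}\varphi+H_{n}\left(x,\partial\varphi\right)+\lambda\varphi = H_{n}\left(x,0\right)-c\le0. $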

    Estimate in $ V $. For a positive constant $ K $ to be chosen later, we introduce $ w_{n}: = e^{Kv_{n}^{2}}v_{n}\psi \in W $, where $ \psi $ is given in Definition 1.3. Using $ w_{n} $ as a test function in (69) leads to

    $ \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\left(\mu_{\alpha}\partial v_{n}\partial w_{n}+\lambda v_{n}w_{n}\right)dx = -\int_{\Gamma}H_{n}\left(x,\partial v_{n}\right)w_{n}dx. $

    Since $ \left|H_{n}\left(x,p\right)\right|\le c\left(1+p^{2}\right) $, we have

    $ \begin{aligned} &\sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}e^{Kv_{n}^{2}}\left[\left(\mu_{\alpha}\psi\right)\left(\partial v_{n}\right)^{2}+\left(2K\mu_{\alpha}\psi\right)v_{n}^{2}\left(\partial v_{n}\right)^{2}+\left(\mu_{\alpha}\partial\psi\right)v_{n}\partial v_{n}+\lambda\psi v_{n}^{2}\right]dx\\ &\quad\le\int_{\Gamma}e^{Kv_{n}^{2}}\left|H_{n}\left(x,\partial v_{n}\right)\right|\left|v_{n}\right|\psi\,dx\\ &\quad\le\int_{\Gamma}c\,e^{Kv_{n}^{2}}\psi\left|v_{n}\right|dx+\int_{\Gamma}c\,\psi e^{Kv_{n}^{2}}\left|v_{n}\right|\left(\partial v_{n}\right)^{2}dx\\ &\quad\le\int_{\Gamma}e^{Kv_{n}^{2}}\left(\lambda\psi v_{n}^{2}+\psi\dfrac{c^{2}}{4\lambda}\right)dx+\sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}e^{Kv_{n}^{2}}\left[\dfrac{\mu_{\alpha}}{2}\psi\left(\partial v_{n}\right)^{2}+\dfrac{c^{2}}{2\mu_{\alpha}}\psi\left(\partial v_{n}\right)^{2}v_{n}^{2}\right]dx, \end{aligned} $

    where we have used Young inequalities. Since $ \lambda>0 $ and $ \psi>0 $, we deduce that

    $ \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}e^{Kv_{n}^{2}}\left[\left(\dfrac{\mu_{\alpha}}{2}\psi\right)\left(\partial v_{n}\right)^{2}+2\psi\left(\mu_{\alpha}K-\dfrac{c^{2}}{4\mu_{\alpha}}\right)v_{n}^{2}\left(\partial v_{n}\right)^{2}+\left(\mu_{\alpha}\partial\psi\right)v_{n}\partial v_{n}\right]dx\le\dfrac{c^{2}}{4\lambda}\int_{\Gamma}e^{Kv_{n}^{2}}\psi\,dx. $
    (70)

    Next, choosing $ K> (1+ c^2/ 4 \underline{\mu}) /\underline{\mu} $ yields that

    $ \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}e^{Kv_{n}^{2}}\left[\dfrac{\mu_{\alpha}}{2}\psi\left(\partial v_{n}\right)^{2}+2\psi v_{n}^{2}\left(\partial v_{n}\right)^{2}+\left(\mu_{\alpha}\partial\psi\right)v_{n}\partial v_{n}\right]dx\le C $

    for a positive constant $ C $ independent of $ n $, because $ v_{n} $ is bounded by $ c/\lambda $. Since $ \psi $ is bounded from below by a positive number and $ \partial \psi $ is piecewise constant on $ \Gamma $, we infer that $ \sum_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}e^{Kv_{n}^{2}}v_{n}^{2}\left(\partial v_{n}\right)^{2}dx \le \widetilde C $, where $ \widetilde C $ is a positive constant independent of $ n $. Using this information and (70) again, we obtain that $ \int_{\Gamma}\left(\partial v_{n}\right)^{2}dx $ is bounded uniformly in $ n $. Hence there exists a constant $ \overline{C} $ such that $ \left\Vert v_{n}\right\Vert _{V}\le\overline{C} $ for all $ n $.
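    For completeness, the remaining cross term is controlled by Young's inequality:

    $ \left|\left(\mu_{\alpha}\partial\psi\right)v_{n}\partial v_{n}\right|\le\dfrac{\mu_{\alpha}}{4}\psi\left(\partial v_{n}\right)^{2}+\dfrac{\mu_{\alpha}\left(\partial\psi\right)^{2}}{\psi}v_{n}^{2}, $

    where the first term is absorbed by the left-hand side and the second is bounded, since $ \left|v_{n}\right|\le c/\lambda $ and $ \psi,\partial\psi $ are bounded with $ \psi\ge\underline{\psi}>0 $.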

    Estimate in $ H^{2}\left(\Gamma\right) $. From the differential equation in (69) and (62), we have

    $ \underline{\mu}\left|\partial^{2}v_{n}\right|\le c+c\left|\partial v_{n}\right|^{2}+\lambda\left|v_{n}\right|,\quad\text{for all }{\alpha\in\mathcal{A}}. $

    Thus $ \partial^{2}v_{n} $ is uniformly bounded in $ L^{1}\left(\Gamma\right) $. This and the previous estimate on $ \|\partial v_n\|_{L^2 (\Gamma)} $ yield that $ \partial v_{n} $ is uniformly bounded in $ L^{\infty}\left(\Gamma\right) $, from the continuous embedding of $ W^{1,1}\left(0,\ell_{\alpha}\right) $ into $ C\left(\left[0,\ell_{\alpha}\right]\right) $. Therefore, from (69), we get that $ \partial^{2}v_{n} $ is uniformly bounded in $ L^{\infty}\left(\Gamma\right) $. This implies in particular that $ v_{n} $ is uniformly bounded in $ W^{2,\infty}\left(\Gamma\right) $.

    Hence, for any $ \sigma \in (0,1) $, up to the extraction of a subsequence, there exists $ v\in V $ such that $ v_{n}\rightarrow v $ in $ C^{1,\sigma}\left(\Gamma\right) $. This yields that $ H_{n}\left(x,\partial v_{n}\right)\rightarrow H\left(x,\partial v\right) $ for all $ x\in\Gamma $. By Lebesgue's Dominated Convergence Theorem, we obtain that $ v $ is a weak solution of (60), and since $ v\in C^{1,\sigma}(\Gamma) $, by Remark 18, $ v $ is a classical solution of (60).

    Assume now that $ H $ is locally Lipschitz. We may use Remark 19 and obtain that $ v\in C^{2,1}\left(\Gamma\right) $.

    The proof is complete.

    For $ f\in PC\left(\Gamma\right) $, we wish to prove the existence of $ \left(v,\rho\right)\in C^{2}\left(\Gamma\right)\times\mathbb{R} $ such that

    $ \begin{cases} -\mu_{\alpha}\partial^{2}v+H\left(x,\partial v\right)+\rho = f\left(x\right), & \text{in }\Gamma_{\alpha}\setminus\mathcal{V},\ \alpha\in\mathcal{A},\\ v|_{\Gamma_{\alpha}}\left(\nu_{i}\right) = v|_{\Gamma_{\beta}}\left(\nu_{i}\right), & \alpha,\beta\in\mathcal{A}_{i},\ i\in I,\\ \sum_{\alpha\in\mathcal{A}_{i}}\gamma_{i\alpha}\mu_{\alpha}\partial_{\alpha}v\left(\nu_{i}\right) = 0, & i\in I, \end{cases} $
    (71)

    with the normalization condition

    $ \int_{\Gamma}v\,dx = 0. $
    (72)

    Theorem 3.4. Assume (25)-(27). There exists a unique couple $ \left(v,\rho\right)\in C^{2}\left(\Gamma\right)\times\mathbb{R} $ satisfying (71)-(72), with $ \left|\rho\right|\le\max_{x\in\Gamma}\left|H\left(x,0\right)-f\left(x\right)\right| $. There exists a constant $ \overline{C} $ which only depends upon $ \left\Vert f\right\Vert _{L^{\infty}\left(\Gamma\right)},\mu_{\alpha} $ and the constants in (27) such that

    $ \left\Vert v\right\Vert _{C^{2}\left(\Gamma\right)}\le\overline{C}. $
    (73)

    Moreover, if, for some $ \sigma\in\left(0,1\right) $, $ f_{\alpha}\in C^{0,\sigma}([0,\ell_{\alpha}]) $ for all $ \alpha\in\mathcal{A} $, then $ \left(v,\rho\right)\in C^{2,\sigma}\left(\Gamma\right)\times\mathbb{R} $; there exists a constant $ \overline{C} $ which only depends upon $ \left\Vert f_{\alpha}\right\Vert _{C^{0,\sigma}\left([0,\ell_{\alpha}]\right)},\mu_{\alpha} $ and the constants in (27) such that

    $ \left\Vert v\right\Vert _{C^{2,\sigma}\left(\Gamma\right)}\le\overline{C}. $
    (74)

    Proof of existence in Theorem 3.4. By Proposition 1, for any $ \lambda>0 $, the following boundary value problem

    $ \begin{cases} -\mu_{\alpha}\partial^{2}v+H\left(x,\partial v\right)+\lambda v = f, & \text{in }\Gamma_{\alpha}\setminus\mathcal{V},\ \alpha\in\mathcal{A},\\ v|_{\Gamma_{\alpha}}\left(\nu_{i}\right) = v|_{\Gamma_{\beta}}\left(\nu_{i}\right), & \alpha,\beta\in\mathcal{A}_{i},\ i\in I,\\ \sum_{\alpha\in\mathcal{A}_{i}}\gamma_{i\alpha}\mu_{\alpha}\partial_{\alpha}v\left(\nu_{i}\right) = 0, & i\in I, \end{cases} $
    (75)

    has a unique solution $ v_{\lambda}\in C^{2}\left(\Gamma\right) $. Set $ C: = \max_{\Gamma}\left|f\left(\cdot\right)-H\left(\cdot,0\right)\right| $. The constant functions $ \varphi: = - C/\lambda $ and $ \overline{\varphi}: = C/\lambda $ are respectively a sub- and a super-solution of (75). By Lemma 3.3,

    $ -C\le\lambda v_{\lambda}\left(x\right)\le C,\quad\text{for all }x\in\Gamma. $
    (76)

    Next, set $ u_{\lambda}: = v_{\lambda}-\min_{\Gamma}v_{\lambda} $. We see that $ u_{\lambda} $ is the unique classical solution of

    $ \begin{cases} -\mu_{\alpha}\partial^{2}u_{\lambda}+H\left(x,\partial u_{\lambda}\right)+\lambda u_{\lambda}+\lambda\min_{\Gamma}v_{\lambda} = f, & \text{in }\Gamma_{\alpha}\setminus\mathcal{V},\ \alpha\in\mathcal{A},\\ u_{\lambda}|_{\Gamma_{\alpha}}\left(\nu_{i}\right) = u_{\lambda}|_{\Gamma_{\beta}}\left(\nu_{i}\right), & \alpha,\beta\in\mathcal{A}_{i},\ i\in I,\\ \sum_{\alpha\in\mathcal{A}_{i}}\gamma_{i\alpha}\mu_{\alpha}\partial_{\alpha}u_{\lambda}\left(\nu_{i}\right) = 0, & i\in I. \end{cases} $
    (77)

    Before passing to the limit as $ \lambda $ tends to $ 0 $, we need to estimate $ u_{\lambda} $ in $ C^{2}\left(\Gamma\right) $ uniformly with respect to $ \lambda $. We do this in two steps:

    Step 1. Estimate of $ \|\partial u_{\lambda}\|_{L^{q}\left(\Gamma\right)} $. Using $ \psi $ as a test-function in (77), see Definition 1.3, and recalling that $ \lambda u_{\lambda}+\lambda\min_{\Gamma}v_{\lambda} = \lambda v_{\lambda} $, we see that

    $ \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\mu_{\alpha}\partial u_{\lambda}\partial\psi dx+\int_{\Gamma}\left(H\left(x,\partial u_{\lambda}\right)+\lambda v_{\lambda}\right)\psi dx = \int_{\Gamma}f\psi dx. $

    From (27) and (76),

    $ \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\mu_{\alpha}\partial u_{\lambda}\partial\psi dx+\sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}C_0\left|\partial u_{\lambda}\right|^{q}\psi dx\le\int_{\Gamma}\left(f+C+C_1\right)\psi dx. $

    On the other hand, since $ q>1 $, $ \psi\ge\underline{\psi}>0 $ and $ \partial \psi $ is bounded, there exists a large enough positive constant $ C' $ such that

    $ \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\mu_{\alpha}\partial u_{\lambda}\partial\psi dx+\dfrac{1}{2}\sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}C_0\left|\partial u_{\lambda}\right|^{q}\psi dx+C' > 0,\quad\text{for all }{\lambda > 0}. $

    Subtracting, we get $ \dfrac{C_0}{2}\underline{\psi}\int_{\Gamma}\left|\partial u_{\lambda}\right|^{q}dx\le\int_{\Gamma}\left(f+C+C_1\right)\psi dx+C' $. Hence, for all $ \lambda>0 $,

    $ \left\Vert \partial u_{\lambda}\right\Vert _{L^{q}\left(\Gamma\right)}\le\widetilde{C}, $
    (78)

    where $ \widetilde {C}: = \left[\left(2\int_{\Gamma}\left(|f|+C+C_1\right)\psi dx+2C'\right)/(C_0 \underline{\psi})\right]^{1/q} $.

    Step 2. Estimate of $ \|u_{\lambda}\|_{C^{2}\left(\Gamma\right)} $. Since $ u_{\lambda} = v_{\lambda}-\min_{\Gamma}v_{\lambda} $, there exists $ \alpha\in \mathcal A $ and $ x_{\lambda}\in\Gamma_{\alpha} $ such that $ u_{\lambda}\left(x_{\lambda}\right) = 0 $. For all $ \lambda>0 $ and $ x\in\Gamma_{\alpha} $, we have

    $ \left|u_{\lambda}\left(x\right)\right| = \left|u_{\lambda}\left(x\right)-u_{\lambda}\left(x_{\lambda}\right)\right|\le\int_{\Gamma}\left|\partial u_{\lambda}\right|dx\le\left\Vert \partial u_{\lambda}\right\Vert _{L^{q}\left(\Gamma\right)}\left|\Gamma\right|^{\left(q-1\right)/q}, $ by the Hölder inequality.

    From (78) and the latter inequality, we deduce that $ \left\Vert u_{\lambda}|_{\Gamma_{\alpha}}\right\Vert _{L^\infty\left(\Gamma_{\alpha}\right)}\le \widetilde{C}\left|\Gamma\right|^{\left(q-1\right)/q} $. Let $ \nu_{i} $ be a transition vertex which belongs to $ \partial\Gamma_{\alpha} $. For all $ \beta\in\mathcal{A}_{i} $, $ y\in\Gamma_{\beta} $,

    $ \left|u_{\lambda}\left(y\right)\right|\le\left|u_{\lambda}\left(y\right)-u_{\lambda}\left(\nu_{i}\right)\right|+\left|u_{\lambda}\left(\nu_{i}\right)\right|\le2\widetilde{C}\left|\Gamma\right|^{\left(q-1\right)/q}. $

    Since the network is connected and the number of edges is finite, repeating the argument as many times as necessary, we obtain that there exists $ M\in\mathbb{N} $ such that

    $ \left\Vert u_{\lambda}\right\Vert _{L^{\infty}\left(\Gamma\right)}\le M\widetilde{C}\left|\Gamma\right|^{\left(q-1\right)/q}. $

    This bound is uniform with respect to $ \lambda\in (0,1] $. Next, from (77) and (29), we get

    $ \underline{\mu}\left|\partial^{2}u_{\lambda}\right|\le\left|H\left(x,\partial u_{\lambda}\right)\right|+\left|\lambda v_{\lambda}\right|+\left|f\right|\le C_{q}\left(1+\left|\partial u_{\lambda}\right|^{q}\right)+C+\left\Vert f\right\Vert _{L^{\infty}\left(\Gamma\right)}. $

    Hence, from (78), $ \partial^{2}u_{\lambda} $ is bounded in $ L^{1}\left(\Gamma\right) $ uniformly with respect to $ \lambda\in (0,1] $. From the continuous embedding of $ W^{1,1}\left(0,\ell_{\alpha}\right) $ in $ C([0,\ell_{\alpha}]) $, we infer that $ \partial u_{\lambda}|_{\Gamma_\alpha} $ is bounded in $ C(\Gamma_\alpha) $ uniformly with respect to $ \lambda\in (0,1] $. From the equation (77) and (76), this implies that $ u_{\lambda} $ is bounded in $ C^{2}\left(\Gamma\right) $ uniformly with respect to $ \lambda\in (0,1] $.

    After the extraction of a subsequence, we may assume that when $ \lambda\rightarrow0^{+} $, the sequence $ u_{\lambda} $ converges to some function $ v\in C^{1,1}\left(\Gamma\right) $ and that $ \lambda\min v_{\lambda} $ converges to some constant $ \rho $. Notice that $ v $ still satisfies the Kirchhoff conditions since $ \partial u_{\lambda}|_{\Gamma_{\alpha}}\left(\nu_{i}\right)\rightarrow\partial v|_{\Gamma_{\alpha}}\left(\nu_{i}\right) $ as $ \lambda\rightarrow0^{+} $. Passing to the limit in (77), we get that the couple $ \left(v,\rho\right) $ satisfies (71) in the weak sense, then in the classical sense by using an argument similar to Remark 18. Adding a constant to $ v $, we also get (72).

    Furthermore, if for some $ \sigma\in (0,1) $, $ f|_{\Gamma_{\alpha}}\in C^{0,\sigma}\left(\Gamma_{\alpha}\right) $ for all $ \alpha\in\mathcal{A} $, a bootstrap argument using the Lipschitz continuity of $ H $ on the bounded subsets of $ \Gamma\times \mathbb R $ shows that $ u_{\lambda} $ is bounded in $ C^{2,\sigma}\left(\Gamma\right) $ uniformly with respect to $ \lambda\in (0,1] $. After a further extraction of a subsequence if necessary, we obtain (74).

    Proof of uniqueness in Theorem 3.4. Assume that there exist two solutions $ \left(v,\rho\right) $ and $ \left(\tilde{v},\tilde{\rho}\right) $ of (71)-(72). First of all, we claim that $ \rho = \tilde{\rho} $. By symmetry, it suffices to prove that $ \rho \ge\tilde{\rho} $. Let $ x_{0} $ be a maximum point of $ e: = \tilde{v}-v $. Using similar arguments as in the proof of Lemma 3.3, with $ \lambda v $ and $ \lambda u $ respectively replaced by $ \rho $ and $ \tilde{\rho} $, we get $ \rho\ge\tilde{\rho} $ and the claim is proved.

    We now prove the uniqueness of $ v $. Since $ H_{\alpha} $ belongs to $ C^{1}\left(\Gamma_{\alpha}\times\mathbb{R}\right) $ for all $ \alpha\in\mathcal{A} $, $ e $ is a solution of $ \mu_{\alpha}\partial^{2}e_{\alpha}-\left[\int_{0}^{1} \partial_p H_{\alpha}\left(y,\theta\partial v_{\alpha}+\left(1-\theta\right)\partial \tilde{v}_{\alpha}\right)d\theta\right]\partial e_{\alpha} = 0,\quad\text{in }(0,\ell_{\alpha}) $, with the same transmission and boundary conditions as in (71). By Lemma 2.5, $ e $ is a constant function on $ \Gamma $. Moreover, from (72), we know that $ \int_{\Gamma}e\,dx = 0 $. This yields that $ e = 0 $ on $ \Gamma $. Hence, (71)-(72) has a unique solution.

    Remark 20. Since there exists a unique solution of (71)-(72), we conclude that the whole sequence $ \left(u_{\lambda},\lambda v_{\lambda}\right) $ in the proof of Theorem 3.4 converges to $ \left(v,\rho\right) $ as $ \lambda\rightarrow0 $.

    We first prove Theorem 1.6 when $ F $ is bounded.

    Theorem 4.1. Assume (25)-(28), (30) and that $ F $ is bounded. There exists a solution $ \left(v,m,\rho\right)\in C^{2}\left(\Gamma\right)\times\mathcal{W}\times\mathbb{R} $ to the mean field games system (24). If $ F $ is locally Lipschitz continuous, then $ v\in C^{2,1}(\Gamma) $. If furthermore $ F $ is strictly increasing, then the solution is unique.

    Proof of existence in Theorem 4.1. We adapt the proof of Camilli and Marchi in [7,Theorem 1]. For $ \sigma\in (0,1/2) $ let us introduce the space

    $ \mathcal{M}_{\sigma} = \left\{ m:\ m_{\alpha}\in C^{0,\sigma}\left(\left[0,\ell_{\alpha}\right]\right)\text{ for all }\alpha\in\mathcal{A}\text{, and }\dfrac{m|_{\Gamma_{\alpha}}\left(\nu_{i}\right)}{\gamma_{i\alpha}} = \dfrac{m|_{\Gamma_{\beta}}\left(\nu_{i}\right)}{\gamma_{i\beta}}\text{ for all }i\in I\text{ and }\alpha,\beta\in\mathcal{A}_{i}\right\} , $

    which, endowed with the norm

    $ { \left\Vert m\right\Vert _{\mathcal{M}_{\sigma}} = \left\Vert m\right\Vert _{L^{\infty}\left(\Gamma\right)}+\max\limits_{\alpha\in\mathcal{A}}\sup\limits_{y,z\in\left[0,\ell_{\alpha}\right],y\ne z}\dfrac{\left|m_{\alpha}\left(y\right)-m_{\alpha}\left(z\right)\right|}{\left|y-z\right|^{\sigma}}}, $

    is a Banach space. Now consider the set

    $ \mathcal{K} = \left\{ m\in\mathcal{M}_{\sigma}:m\ge0\text{ and }\int_{\Gamma}mdx = 1\right\} $

    and observe that $ \mathcal{K} $ is a closed and convex subset of $ \mathcal{M}_{\sigma} $. We define a map $ T:\mathcal{K}\rightarrow\mathcal{K} $ as follows: given $ m\in\mathcal{K} $, set $ f = F\left(m\right) $. By Theorem 3.4, (71)-(72) has a unique solution $ \left(v,\rho\right)\in C^{2}\left(\Gamma\right)\times\mathbb{R} $. Next, for $ v $ given, we solve (49)-(50) with $ b\left(\cdot\right) = \partial_{p}H\left(\cdot,\partial v\left(\cdot\right)\right)\in PC(\Gamma) $. By Theorem 2.7, there exists a unique solution $ \overline{m}\in\mathcal{K}\cap W $ of (49)-(50). We set $ T\left(m\right) = \overline{m} $; we claim that $ T $ is continuous and has a precompact image. We proceed in several steps:

    $ T $ is continuous. Let $ m_{n},m\in\mathcal{K} $ be such that $ \left\Vert m_{n}-m\right\Vert _{\mathcal{M}_{\sigma}}\rightarrow0 $ as $ n\rightarrow+\infty $; set $ \overline{m}_{n} = T\left(m_{n}\right),\overline{m} = T\left(m\right) $. We need to prove that $ \overline{m}_{n}\rightarrow\overline{m} $ in $ \mathcal{M}_{\sigma} $. Let $ \left(v_{n},\rho_{n}\right),\left(v,\rho\right) $ be the solutions of (71)-(72) corresponding respectively to $ f = F\left(m_{n}\right) $ and $ f = F\left(m\right) $. Using estimate (73), we see that up to the extraction of a subsequence, we may assume that $ \left(v_{n},\rho_{n}\right)\rightarrow\left(\overline{v},\overline{\rho}\right) $ in $ C^{1}\left(\Gamma\right)\times\mathbb{R} $. Since $ F\left(m_{n}\right)|_{\Gamma_\alpha}\rightarrow F\left(m\right)|_{\Gamma_\alpha} $ in $ C\left(\Gamma_\alpha\right) $, $ H_{\alpha}\left(y,(\partial v_{n})_{\alpha}\right)\rightarrow H_{\alpha}\left(y,\partial\overline{v}_{\alpha}\right) $ in $ C\left([0,\ell_{\alpha}]\right) $, and since it is possible to pass to the limit in the transmission and boundary conditions thanks to the $ C^{1} $-convergence, we obtain that $ \left(\overline{v},\overline{\rho}\right) $ is a weak (and strong by Remark 18) solution of (71)-(72). By uniqueness, $ (\overline{v},\overline{\rho}) = (v,\rho) $ and the whole sequence $ \left(v_{n},\rho_{n}\right) $ converges.

    Next, $ \overline{m}_{n} = T\left(m_{n}\right),\overline{m} = T\left(m\right) $ are respectively the solutions of (49)-(50) corresponding to $ b = \partial_{p}H\left(x,\partial v_{n}\right) $ and $ b = \partial_{p}H\left(x,\partial v\right) $. From the estimate (53), since $ \partial_{p}H\left(x,\partial v_{n}\right) $ is uniformly bounded in $ L^{\infty}\left(\Gamma\right) $, we see that $ \overline{m}_{n} $ is uniformly bounded in $ W $. Therefore, up to the extraction of a subsequence, $ \overline{m}_{n}\rightharpoonup\widehat{m} $ in $ W $ and $ \overline{m}_{n}\rightarrow\widehat{m} $ in $ \mathcal{M}_\sigma $, because $ W $ is compactly embedded in $ \mathcal{M}_\sigma $ for $ \sigma\in (0,1/2) $. It is easy to pass to the limit and find that $ \widehat{m} $ is a solution of (49)-(50) with $ b = \partial_{p}H\left(x,\partial v\right) $. From Theorem 2.7, we obtain that $ \overline{m} = \widehat{m} $, and hence the whole sequence $ \overline{m}_{n} $ converges to $ \overline{m} $.

    The image of $ T $ is precompact. Since $ F\in C^{0}\left(\mathbb{R}^{+};\mathbb{R}\right) $ is bounded, $ F\left(m\right) $ is bounded in $ L^{\infty}\left(\Gamma\right) $ uniformly with respect to $ m\in \mathcal{K} $. From Theorem 3.4, there exists a constant $ \overline{C} $ such that for all $ m\in\mathcal{K} $, the unique solution $ v $ of (71)-(72) with $ f = F(m) $ satisfies $ \left\Vert v\right\Vert _{C^{2}\left(\Gamma\right)}\le\overline{C} $. From Theorem 2.7, we obtain that $ \overline{m} = T\left(m\right) $ is bounded in $ W $ by a constant independent of $ m $. Since $ W $ is compactly embedded in $ \mathcal{M}_\sigma $ for $ \sigma\in (0,1/2) $, we deduce that $ T $ has a precompact image.

    End of the proof. We can apply Schauder fixed point theorem (see [18,Corollary 11.2]) to conclude that the map $ T $ admits a fixed point $ m $. By Theorem 2.7, we get $ m\in \mathcal{W} $. Hence, there exists a solution $ (v,m,\rho)\in C^2(\Gamma)\times \mathcal{W}\times \mathbb{R} $ to the mean field games system (24). If $ F $ is locally Lipschitz continuous, then $ v\in C^{2, 1}(\Gamma) $ from the final part of Theorem 3.4.

    Proof of uniqueness in Theorem 4.1. We assume that $ F $ is strictly increasing and that there exist two solutions $ \left(v_{1},m_{1},\rho_{1}\right) $ and $ \left(v_{2},m_{2},\rho_{2}\right) $ of (24). We set $ \overline{v} = v_{1}-v_{2},\overline{m} = m_{1}-m_{2} $ and $ \overline{\rho} = \rho_{1}-\rho_{2} $ and write the equations for $ \overline{v},\overline{m} $ and $ \overline{\rho} $

    $ \begin{cases} -\mu_{\alpha}\partial^{2}\overline{v}+H\left(x,\partial v_{1}\right)-H\left(x,\partial v_{2}\right)+\overline{\rho}-\left(F\left(m_{1}\right)-F\left(m_{2}\right)\right) = 0, & \text{in }\Gamma_{\alpha}\setminus\mathcal{V},\\ -\mu_{\alpha}\partial^{2}\overline{m}-\partial\left(m_{1}\partial_{p}H\left(x,\partial v_{1}\right)\right)+\partial\left(m_{2}\partial_{p}H\left(x,\partial v_{2}\right)\right) = 0, & \text{in }\Gamma_{\alpha}\setminus\mathcal{V},\\ \overline{v}|_{\Gamma_{\alpha}}\left(\nu_{i}\right) = \overline{v}|_{\Gamma_{\beta}}\left(\nu_{i}\right),\quad\dfrac{\overline{m}|_{\Gamma_{\alpha}}\left(\nu_{i}\right)}{\gamma_{i\alpha}} = \dfrac{\overline{m}|_{\Gamma_{\beta}}\left(\nu_{i}\right)}{\gamma_{i\beta}}, & \alpha,\beta\in\mathcal{A}_{i},\ i\in I,\\ \sum_{\alpha\in\mathcal{A}_{i}}\gamma_{i\alpha}\mu_{\alpha}\partial_{\alpha}\overline{v}\left(\nu_{i}\right) = 0, & i\in I,\\ \sum_{\alpha\in\mathcal{A}_{i}}n_{i\alpha}\left[m_{1}|_{\Gamma_{\alpha}}\left(\nu_{i}\right)\partial_{p}H\left(\nu_{i},\partial v_{1}|_{\Gamma_{\alpha}}\left(\nu_{i}\right)\right)-m_{2}|_{\Gamma_{\alpha}}\left(\nu_{i}\right)\partial_{p}H\left(\nu_{i},\partial v_{2}|_{\Gamma_{\alpha}}\left(\nu_{i}\right)\right)\right]+\sum_{\alpha\in\mathcal{A}_{i}}\mu_{\alpha}\partial_{\alpha}\overline{m}\left(\nu_{i}\right) = 0, & i\in I,\\ \int_{\Gamma}\overline{v}\,dx = 0,\quad\int_{\Gamma}\overline{m}\,dx = 0. \end{cases} $
    (79)

    Multiplying the equation for $ \overline{v} $ by $ \overline{m} $ and integrating over $ \Gamma_{\alpha} $, we get

    $ \int_{\Gamma_{\alpha}}\mu_{\alpha}\partial\overline{v}\,\partial\overline{m}+\left[H\left(x,\partial v_{1}\right)-H\left(x,\partial v_{2}\right)+\overline{\rho}-\left(F\left(m_{1}\right)-F\left(m_{2}\right)\right)\right]\overline{m}\,dx-\left[\mu_{\alpha}\overline{m}\,\partial\overline{v}\right]_{0}^{\ell_{\alpha}} = 0. $
    (80)

    Multiplying the equation for $ \overline{m} $ by $ \overline{v} $ and integrating over $ \Gamma_{\alpha} $, we get

    $ \int_{\Gamma_{\alpha}}\mu_{\alpha}\partial\overline{v}\,\partial\overline{m}+\left[m_{1}\partial_{p}H\left(x,\partial v_{1}\right)-m_{2}\partial_{p}H\left(x,\partial v_{2}\right)\right]\partial\overline{v}\,dx-\left[\overline{v}|_{\Gamma_{\alpha}}\left(\mu_{\alpha}\partial\overline{m}|_{\Gamma_{\alpha}}+m_{1}|_{\Gamma_{\alpha}}\partial_{p}H\left(x,\partial v_{1}|_{\Gamma_{\alpha}}\right)-m_{2}|_{\Gamma_{\alpha}}\partial_{p}H\left(x,\partial v_{2}|_{\Gamma_{\alpha}}\right)\right)\right]_{0}^{\ell_{\alpha}} = 0. $
    (81)

    Subtracting (80) from (81), summing over $ \alpha\in\mathcal{A} $, assembling the terms corresponding to a same vertex $ \nu_{i} $ and taking into account the transmission and normalization conditions for $ \overline{v} $ and $ \overline{m} $, we obtain

    $ 0 = \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\left(m_{1}-m_{2}\right)\left[F\left(m_{1}\right)-F\left(m_{2}\right)\right]dx+\sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}m_{1}\left[H\left(x,\partial v_{2}\right)-H\left(x,\partial v_{1}\right)+\partial_{p}H\left(x,\partial v_{1}\right)\partial\overline{v}\right]dx+\sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}m_{2}\left[H\left(x,\partial v_{1}\right)-H\left(x,\partial v_{2}\right)-\partial_{p}H\left(x,\partial v_{2}\right)\partial\overline{v}\right]dx. $

    Since $ F $ is strictly increasing, the first sum is non-negative; by the convexity of $ H $ and the positivity of $ m_{1},m_{2} $, the last two sums are also non-negative. Since the three sums add up to zero, each of them vanishes, and the strict monotonicity of $ F $ then yields $ m_{1} = m_{2} $. From Theorem 3.4, we finally obtain $ v_{1} = v_{2} $ and $ \rho_{1} = \rho_{2} $.
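    The sign of the two sums involving $ H $ follows from the convexity inequality $ H\left(x,q\right)\ge H\left(x,p\right)+\partial_{p}H\left(x,p\right)\left(q-p\right) $; for instance, taking $ p = \partial v_{1} $, $ q = \partial v_{2} $ and recalling $ \partial\overline{v} = \partial v_{1}-\partial v_{2} $,

    $ H\left(x,\partial v_{2}\right)-H\left(x,\partial v_{1}\right)+\partial_{p}H\left(x,\partial v_{1}\right)\partial\overline{v} = H\left(x,\partial v_{2}\right)-H\left(x,\partial v_{1}\right)-\partial_{p}H\left(x,\partial v_{1}\right)\left(\partial v_{2}-\partial v_{1}\right)\ge0, $

    and similarly for the term involving $ m_{2} $.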

    Proof of Theorem 1.6 for a general coupling $ F $. We only need to modify the proof of existence.

    We now truncate the coupling function as follows:

    $ F_{n}\left(r\right) = \begin{cases} F\left(r\right), & \text{if }\left|r\right|\le n,\\ F\left(n\dfrac{r}{\left|r\right|}\right), & \text{if }\left|r\right|>n. \end{cases} $

    Then $ F_{n} $ is continuous, bounded below by $ -M $ as in (31) and bounded above by some constant $ C_n $. By Theorem 4.1, for all $ n\in\mathbb{N} $, there exists a unique solution $ \left(v_{n},m_{n},\rho_{n}\right)\in C^{2}\left(\Gamma\right)\times\mathcal{W}\times\mathbb{R} $ of the mean field game system (24) where $ F $ is replaced by $ F_{n} $. We wish to pass to the limit as $ n\to +\infty $. We proceed in several steps:

    Step 1. $ \rho_{n} $ is bounded from below. Multiplying the HJB equation in (24) by $ m_{n} $ and the Fokker-Planck equation in (24) by $ v_{n} $, using integration by parts and the transmission conditions, we obtain that

    $ \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\mu_{\alpha}\partial v_{n}\,\partial m_{n}\,dx+\int_{\Gamma}H\left(x,\partial v_{n}\right)m_{n}\,dx+\rho_{n} = \int_{\Gamma}F_{n}\left(m_{n}\right)m_{n}\,dx, $
    (82)

    and

    $ \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\mu_{\alpha}\partial v_{n}\,\partial m_{n}\,dx+\int_{\Gamma}\partial_{p}H\left(x,\partial v_{n}\right)m_{n}\,\partial v_{n}\,dx = 0. $
    (83)

    Subtracting the two equations, we obtain

    $ \rho_{n} = \int_{\Gamma}F_{n}\left(m_{n}\right)m_{n}\,dx+\int_{\Gamma}\left[\partial_{p}H\left(x,\partial v_{n}\right)\partial v_{n}-H\left(x,\partial v_{n}\right)\right]m_{n}\,dx. $
    (84)

    In what follows, the constant $ C $ may vary from line to line but remains independent of $ n $. From (26), we see that $ \partial_{p}H\left(x,\partial v_{n}\right)\partial v_{n}-H\left(x,\partial v_{n}\right)\ge -H(x,0)\ge -C $. Therefore
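    The latter inequality is the standard consequence of the convexity of $ H\left(x,\cdot\right) $ assumed in (26): evaluating the supporting-line inequality at $ p = 0 $,

    $ H\left(x,0\right)\ge H\left(x,\partial v_{n}\right)-\partial_{p}H\left(x,\partial v_{n}\right)\partial v_{n},\quad\text{hence}\quad\partial_{p}H\left(x,\partial v_{n}\right)\partial v_{n}-H\left(x,\partial v_{n}\right)\ge-H\left(x,0\right)\ge-C. $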

    $ \rho_{n}\ge\int_{\Gamma}F_{n}\left(m_{n}\right)m_{n}\,dx-C\int_{\Gamma}m_{n}\,dx = \int_{\Gamma}F_{n}\left(m_{n}\right)m_{n}\,dx-C. $
    (85)

    Hence, since $ F_n+M\ge0 $ and $ \int_{\Gamma}m_n dx = 1 $, we get that $ \rho_{n} $ is bounded from below by $ -M-C $ independently of $ n $.

    Step 2. $ \rho_{n} $ and $ \int_{\Gamma}F_{n}\left(m_{n}\right)dx $ are uniformly bounded. By Theorem 2.7, there exists a positive solution $ w\in W $ of (49)-(50) with $ b = 0 $. It yields

    $ \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\mu_{\alpha}\partial w\,\partial u\,dx = 0\quad\text{for all }u\in V,\qquad\text{and}\qquad\int_{\Gamma}w\,dx = 1. $

    Multiplying the HJB equation of (24) by $ w $, using integration by parts and the Kirchhoff condition, we get

    $ \underset{ = 0}{\underbrace{\sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\mu_{\alpha}\partial v_{n}\partial wdx}}+\int_{\Gamma}H\left(x,\partial v_{n}\right)wdx+\rho_{n}\underset{ = 1}{\underbrace{\int_{\Gamma}wdx}} = \int_{\Gamma}F_{n}\left(m_{n}\right)wdx. $

    This implies, using (27), (53) and $ F_n+M\ge0 $,

    $ \begin{aligned} \rho_{n} & = \int_{\Gamma}F_{n}\left(m_{n}\right)w\,dx-\int_{\Gamma}H\left(x,\partial v_{n}\right)w\,dx\\ & \le\left\Vert w\right\Vert _{L^{\infty}\left(\Gamma\right)}\int_{\Gamma}\left(F_{n}\left(m_{n}\right)+M\right)dx-M-\int_{\Gamma}\left(C_{0}\left|\partial v_{n}\right|^{q}-C_{1}\right)w\,dx\\ & \le C\int_{\Gamma}F_{n}\left(m_{n}\right)dx+C-\int_{\Gamma}C_{0}\left|\partial v_{n}\right|^{q}w\,dx. \end{aligned} $
    (86)

    Thus, by (85), we have

    $ -M-C\le\int_{\Gamma}F_{n}\left(m_{n}\right)m_{n}\,dx-C\le\rho_{n}\le C\int_{\Gamma}F_{n}\left(m_{n}\right)dx+C. $
    (87)

    Let $ K>0 $ be a constant to be chosen later. We have

    $ \int_{\Gamma}F_{n}\left(m_{n}\right)dx\le\dfrac{1}{K}\int_{\left\{ m_{n}\ge K\right\} }\left[F_{n}\left(m_{n}\right)+M\right]m_{n}\,dx+\sup\limits_{0\le r\le K}F\left(r\right)\int_{\left\{ m_{n}\le K\right\} }dx\le\dfrac{1}{K}\int_{\Gamma}F_{n}\left(m_{n}\right)m_{n}\,dx+\dfrac{M}{K}+C_{K}, $
    (88)

    where $ C_K $ is independent of $ n $. Choosing $ K = 2C $, where $ C $ is the constant in (87), and combining (88) with (87), we get $ \int_{\Gamma}F_n(m_n)m_n\,dx\le C $. Using (88) again, we obtain $ \int_{\Gamma}F_n(m_n)dx\le C $. Hence, from (87), we conclude that $ |\rho_n|+ |\int_{\Gamma}F_n(m_n)dx|\le C $.
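    Explicitly (writing $ C_{2C} $ for the constant $ C_{K} $ of (88) with $ K = 2C $), the absorption argument reads

    $ \int_{\Gamma}F_{n}\left(m_{n}\right)m_{n}\,dx\le\rho_{n}+C\le C\int_{\Gamma}F_{n}\left(m_{n}\right)dx+2C\le\dfrac{1}{2}\int_{\Gamma}F_{n}\left(m_{n}\right)m_{n}\,dx+\dfrac{M}{2}+C\,C_{2C}+2C, $

    so that $ \int_{\Gamma}F_{n}\left(m_{n}\right)m_{n}\,dx\le M+2C\,C_{2C}+4C $.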

    Step 3. Prove that $ F_{n}\left(m_{n}\right) $ is uniformly integrable and that $ v_{n} $ and $ m_{n} $ are uniformly bounded in $ C^{1}\left(\Gamma\right) $ and $ W $, respectively. Let $ E\subset\Gamma $ be measurable with $ |E| = \eta $. By (88) with $ \Gamma $ replaced by $ E $, we have

    $ \int_{E}F_{n}\left(m_{n}\right)dx\le\dfrac{1}{K}\int_{E\cap\left\{ m_{n}\ge K\right\} }F_{n}\left(m_{n}\right)m_{n}\,dx+\dfrac{M}{K}+\sup\limits_{0\le r\le K}F\left(r\right)\int_{E\cap\left\{ m_{n}\le K\right\} }dx\le\dfrac{C+M}{K}+C_{K}\eta, $

    since $ \int_{E} F_n(m_n) m_n dx\le C $ and $ \sup_{0\le r \le K}F_{n}(r)\le \sup_{0\le r \le K}F(r): = C_K $. Therefore, for all $ \varepsilon>0 $, we may choose $ K $ such that $ (C+M)/K\le\varepsilon/2 $ and then $ \eta $ such that $ C_K \eta\le\varepsilon/2 $ and get

    $ \int_{E}F_{n}\left(m_{n}\right)dx\le \varepsilon,\quad\text{for all }{E}\text{ which satisfies }{\left|E\right|\le\eta}, $

    which proves the uniform integrability of $ \left\{F_n (m_n) \right\}_n $.

    Next, since $ \rho_{n} $ and $ \int_{\Gamma}F_{n}\left(m_{n}\right)dx $ are uniformly bounded, we infer from (86) that $ \partial v_{n} $ is uniformly bounded in $ L^{q}\left(\Gamma\right) $. Since, by the normalization condition $ \int_{\Gamma}v_ndx = 0 $, there exists $ \overline{x}_n $ such that $ v_n(\overline{x}_n) = 0 $, we infer from the latter bound that $ v_n $ is uniformly bounded in $ L^{\infty}(\Gamma) $.

    Using the HJB equation in (24) and Remark 7, we get

    $ \mu_{\alpha}|\partial^2 v_n|\le |H(x,\partial v_n)| +|F_n(m_n)|+|\rho_n| \le C_q (|\partial v_n|^q+1)+|F_n(m_n)|+|\rho_n|. $

    We obtain that $ \partial^{2}v_{n} $ is uniformly bounded in $ L^{1}\left(\Gamma\right) $, which implies that $ v_n $ is uniformly bounded in $ C^1(\Gamma) $. Therefore the sequence of functions $ C_q (|\partial v_n|^q+1)+|F_n(m_n)|+|\rho_n| $ is uniformly integrable, and so is $ \partial^{2}v_{n} $. This implies that $ \partial v_{n} $ is equicontinuous. Hence, $ \left\{ v_{n}\right\} $ is relatively compact in $ C^{1}\left(\Gamma\right) $ by Arzelà-Ascoli's theorem. Finally, from the Fokker-Planck equation and Theorem 2.7, since $ \partial_{p}H\left(x,\partial v_{n}\right) $ is uniformly bounded in $ L^{\infty}\left(\Gamma\right) $, we obtain that $ m_{n} $ is uniformly bounded in $ W $.
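    The equicontinuity claim can be quantified: for $ x,y\in\Gamma_{\alpha} $,

    $ \left|\partial v_{n}\left(y\right)-\partial v_{n}\left(x\right)\right| = \left|\int_{x}^{y}\partial^{2}v_{n}\,ds\right|\le\int_{\left[x,y\right]}\left|\partial^{2}v_{n}\right|ds, $

    and the right-hand side tends to $ 0 $ as $ \left|y-x\right|\rightarrow0 $, uniformly in $ n $, by the uniform integrability of $ \left\{ \partial^{2}v_{n}\right\} $.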

    Step 4. Passage to the limit.

    From Steps 1 and 2, since $ \left\{ \rho_{n}\right\} $ is uniformly bounded, there exists $ \rho\in\mathbb{R} $ such that $ \rho_{n}\rightarrow\rho $ up to the extraction of a subsequence. From Step 3, there exists $ m\in W $ such that $ m_{n}\rightharpoonup m $ in $ W $ and $ m_{n}\to m $ almost everywhere, up to the extraction of a subsequence. Also from Step 3, since $ F_{n}\left(m_{n}\right) $ is uniformly integrable, the Vitali convergence theorem gives $ \lim_{n\rightarrow\infty}\int_{\Gamma}F_{n}\left(m_{n}\right)\tilde{w}dx = \int_{\Gamma}F\left(m\right)\tilde{w}dx $ for all $ \tilde{w}\in W $. From Step 3 again, up to the extraction of a subsequence, there exists $ v\in C^{1}\left(\Gamma\right) $ such that $ v_{n}\rightarrow v $ in $ C^{1}\left(\Gamma\right) $. Hence, $ \left(v,\rho,m\right) $ satisfies the weak form of the MFG system:

    $ \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\mu_{\alpha}\partial v\partial\tilde{w}dx+\int_{\Gamma} \left(H\left(x,\partial v\right)+\rho\right)\tilde{w}dx = \int_{\Gamma}F\left(m\right)\tilde{w}dx,\quad\text{for all }\tilde{w}\in W, $

    and

    $ \sum\limits_{\alpha\in\mathcal{A}}\int_{\Gamma_{\alpha}}\mu_{\alpha}\partial m\partial\tilde{v}dx+\int_{\Gamma}\partial_{p}H\left(x,\partial v\right)m\partial\tilde{v}dx = 0,\quad\text{for all }\tilde{v}\in V. $

    Finally, we prove the regularity of the solution of (24). Since $ m\in W $, $ m|_{\Gamma_\alpha}\in C^{0,\sigma} $ for some $ \sigma\in (0,1/2) $ and all $ \alpha\in \mathcal A $. By Theorem 3.4, $ v\in C^2(\Gamma) $ ($ v\in C^{2,\sigma}(\Gamma) $ if $ F $ is locally Lipschitz continuous). Then, by Theorem 2.7, we see that $ m\in \mathcal{W} $. If $ F $ is locally Lipschitz continuous, this implies that $ v\in C^{2,1}(\Gamma) $. We also obtain that $ v $ and $ m $ satisfy the Kirchhoff and transmission conditions in (24). The proof is complete.



    [1] Y. Achdou, Finite difference methods for mean field games, in Hamilton-Jacobi Equations: Approximations, Numerical Analysis and Applications, Lecture Notes in Mathematics, 2074, Springer, Heidelberg, 2013, 1–47.
    [2] Hamilton-Jacobi equations constrained on networks. NoDEA Nonlinear Differential Equations Appl. (2013) 20: 413–445.
    [3] Mean field games: numerical methods. SIAM J. Numer. Anal. (2010) 48: 1136–1162.
    [4] Hamilton-Jacobi equations for optimal control on junctions and networks. ESAIM Control Optim. Calc. Var. (2015) 21: 876–899.
    [5] Convergence of a finite difference scheme to weak solutions of the system of partial differential equations arising in mean field games. SIAM J. Numer. Anal. (2016) 54: 161–186.
    [6] L. Boccardo, F. Murat and J.-P. Puel, Existence de solutions faibles pour des équations elliptiques quasi-linéaires à croissance quadratique, in Nonlinear Partial Differential Equations and Their Applications, Res. Notes in Math., 84, Pitman, Boston, Mass.-London, 1983, 19–73.
    [7] Stationary mean field games systems defined on networks. SIAM J. Control Optim. (2016) 54: 1085–1103.
    [8] The vanishing viscosity limit for Hamilton-Jacobi equations on networks. J. Differential Equations (2013) 254: 4122–4143.
    [9] P. Cardaliaguet, Notes on mean field games, Preprint, 2011.
    [10] R. Carmona and F. Delarue, Probabilistic Theory of Mean Field Games with Applications. I-II, Springer, 2017.
    [11] M.-K. Dao, Ph.D. thesis, 2018.
    [12] Vertex control of flows in networks. Netw. Heterog. Media (2008) 3: 709–722.
    [13] A. Ern and J.-L. Guermond, Theory and Practice of Finite Elements, Applied Mathematical Sciences, 159, Springer-Verlag, New York, 2004.
    [14] Homogenization of second order discrete model with local perturbation and application to traffic flow. Discrete Contin. Dyn. Syst. (2017) 37: 1437–1487.
    [15] Diffusion processes on graphs: Stochastic differential equations, large deviation principle. Probab. Theory Related Fields (2000) 116: 181–220.
    [16] Diffusion processes on graphs and the averaging principle. Ann. Probab. (1993) 21: 2215–2245.
    [17] M. Garavello and B. Piccoli, Traffic Flow on Networks, AIMS Series on Applied Mathematics, 1, American Institute of Mathematical Sciences, Springfield, MO, 2006.
    [18] D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, Reprint of the 1998 edition, Classics in Mathematics, Springer-Verlag, Berlin, 2001.
    [19] On the existence of classical solutions for stationary extended mean field games. Nonlinear Anal. (2014) 99: 49–79.
    [20] Local regularity for mean-field games in the whole space. Minimax Theory Appl. (2016) 1: 65–82.
    [21] Time-dependent mean-field games in the subquadratic case. Comm. Partial Differential Equations (2015) 40: 40–76.
    [22] O. Guéant, J.-M. Lasry and P.-L. Lions, Mean field games and applications, in Paris-Princeton Lectures on Mathematical Finance 2010, Lecture Notes in Mathematics, 2003, Springer, Berlin, 2011, 205–266.
    [23] An invariance principle in large population stochastic dynamic games. J. Syst. Sci. Complex. (2007) 20: 162–172.
    [24] Large-population cost-coupled LQG problems with nonuniform agents: Individual-mass behavior and decentralized ε-Nash equilibria. IEEE Trans. Automat. Control (2007) 52: 1560–1571.
    [25] Large population stochastic dynamic games: Closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. Commun. Inf. Syst. (2006) 6: 221–251.
    [26] A Hamilton-Jacobi approach to junction problems and application to traffic flows. ESAIM Control Optim. Calc. Var. (2013) 19: 129–166.
    [27] C. Imbert and R. Monneau, Flux-limited solutions for quasi-convex Hamilton-Jacobi equations on networks, Ann. Sci. Éc. Norm. Supér., 50 (2017), 357–448.
    [28] Jeux à champ moyen. I. Le cas stationnaire. C. R. Math. Acad. Sci. Paris (2006) 343: 619–625.
    [29] Jeux à champ moyen. II. Horizon fini et contrôle optimal. C. R. Math. Acad. Sci. Paris (2006) 343: 679–684.
    [30] Mean field games. Jpn. J. Math. (2007) 2: 229–260.
    [31] Viscosity solutions for junctions: Well posedness and stability. Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl. (2016) 27: 535–545.
    [32] Well-posedness for multi-dimensional junction problems with Kirchoff-type conditions. Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl. (2017) 28: 807–816.
    [33] Weak solutions to Fokker-Planck equations and mean field games. Arch. Rational Mech. Anal. (2015) 216: 1–62.
    [34] On the weak theory for mean field games systems. Boll. U.M.I. (2017) 10: 411–439.
    [35] Classical solvability of linear parabolic equations on networks. J. Differential Equations (1988) 72: 316–337.
  • This article has been cited by:

    1. Yves Achdou, Manh-Khang Dao, Olivier Ley, Nicoletta Tchou, Finite horizon mean field games on networks, 2020, 59, 0944-2669, 10.1007/s00526-020-01816-3
    2. Fabio Camilli, Claudio Marchi, A note on Kazdan–Warner equation on networks, 2022, 15, 1864-8258, 693, 10.1515/acv-2020-0046
    3. Pierre Cardaliaguet, Alessio Porretta, 2020, Chapter 1, 978-3-030-59836-5, 1, 10.1007/978-3-030-59837-2_1
    4. Antonio Siconolfi, Alfonso Sorrentino, Aubry–Mather theory on graphs, 2023, 36, 0951-7715, 5819, 10.1088/1361-6544/acf6ef
    5. Fabio Camilli, Claudio Marchi, A continuous dependence estimate for viscous Hamilton–Jacobi equations on networks with applications, 2024, 63, 0944-2669, 10.1007/s00526-023-02619-y
    6. Jules Berry, Fabio Camilli, Stationary Mean Field Games on networks with sticky transition conditions, 2025, 31, 1292-8119, 18, 10.1051/cocv/2025007
  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)