We introduce games associated with second-order partial differential equations given by arbitrary products of eigenvalues of the Hessian. We prove that, as a parameter that controls the step length goes to zero, the value functions of the games converge uniformly to a viscosity solution of the partial differential equation. The classical Monge-Ampère equation is an important example under consideration.
Citation: Pablo Blanc, Fernando Charro, Juan J. Manfredi, Julio D. Rossi. Games associated with products of eigenvalues of the Hessian[J]. Mathematics in Engineering, 2023, 5(3): 1-26. doi: 10.3934/mine.2023066
Dedicated to our friend Giuseppe Rosario Mingione, great mathematician and Master of Regularity, on the occasion of his 50th birthday.
In this article, we introduce a family of games related to second-order partial differential equations (PDEs) given by arbitrary products of eigenvalues of the Hessian. More precisely, given $ \Omega\subset\mathbb{R}^N, $ and $ k $ indices $ i_1, \ldots, i_k\in\{1, \ldots, N\} $ (which could be repeated), we consider PDEs of the form
$ \begin{equation} \left\{ \begin{array}{ll} P_{i_1, ..., i_k} (D^2 u) : = \prod\limits_{j = 1}^k \lambda_{i_j} (D^2u) = f , \qquad & \mbox{ in } \Omega, \\[5pt] u = g , \qquad & \mbox{ on } \partial \Omega. \end{array} \right. \end{equation} $ | (1.1) |
Here, $ \lambda_1\leq \cdots \leq \lambda_N $ denote the eigenvalues of $ D^2 u $ and the right-hand side $ f $ is a continuous, non-negative function. Notice that operators given by $ P_{i_1, ..., i_k} $ are degenerate and do not preserve order in the space of symmetric matrices, the usual condition for ellipticity. Moreover, we want to emphasize that the eigenvalues $ \lambda_{i_1}, \dots, \lambda_{i_k} $ in (1.1) could be repeated. For instance, one could take $ i_1 = i_2 = 1 $ and $ i_3 = 3 $ to produce the equation $ \lambda_1^2 \lambda_3 = f $. Similarly, one could also take $ k = N $ and $ i_j = j $ and consider the product of all the eigenvalues, which corresponds to the classical Monge-Ampère equation, $ \mbox{det}(D^2u) = \prod_{i = 1}^N \lambda_i = f $. We refer the reader to [11,13] for general references on the Monge-Ampère equation.
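To fix ideas with a concrete example of our own: for the quadratic $ u(x_1, x_2) = \tfrac12\left(x_1^2 + 4 x_2^2\right) $ in $ \mathbb{R}^2 $ the Hessian is constant, $ D^2u = \operatorname{diag}(1, 4) $, so $ \lambda_1 = 1 \leq \lambda_2 = 4 $ and

$ P_{1}(D^2u) = \lambda_1 = 1, \qquad P_{2, 2}(D^2u) = \lambda_2^2 = 16, \qquad P_{1, 2}(D^2u) = \lambda_1\lambda_2 = \det(D^2u) = 4, $

so this $ u $ solves the equation in (1.1) for the corresponding choices of indices and constant right-hand sides.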
Our main goal is to design a game whose value functions approximate viscosity solutions to (1.1) as a parameter that controls the step size of the game goes to zero. The connection between games and PDEs has developed significantly over the last decade; see [10,16,19,20,21,24,25] (and we refer also to [12] in connection with mean-field games). We also refer to the recent books [6,15] and the references therein for general references on this program. The relation between games and PDEs has proven fruitful in obtaining qualitative results, see [4], and regularity estimates; see [1,18,23,27]. A game-theoretical interpretation is available in the case of Hamilton-Jacobi equations in the presence of gradient constraints (both in the convex and non-convex settings), see [28]. The case $ k = 1 $ for the smallest eigenvalue $ \lambda_1 $ gives rise to the Dirichlet problem for the convex envelope studied in [22], and it is also related to the truncated Laplacians considered in [2].
Our starting point is the case of only one eigenvalue and $ f = 0 $, which was recently tackled in [5] and has connections with convexity theory. Let us describe the two-player, zero-sum game called "a random walk for $ \lambda_j $" introduced there. Given a domain $ \Omega \subset \mathbb{R}^N, $ $ \varepsilon > 0 $, and a final payoff function $ g :\mathbb{R}^N \setminus \Omega \to \mathbb{R} $, the game is played as follows. The game starts with a token at an initial position $ x_0 \in \Omega $. Player Ⅰ (who wants to minimize the expected payoff) chooses a subspace $ S $ of dimension $ j $, and then Player Ⅱ (who wants to maximize the expected payoff) chooses a unitary vector $ v\in S $. Then, the token moves to $ x\pm \varepsilon v $ with equal probabilities. The game continues until the token leaves the domain at a point $ x_\tau $, and the first player gets $ -g(x_\tau) $ while the second receives $ g(x_\tau) $ (we can think that Player Ⅰ pays the amount $ g(x_\tau) $ to Player Ⅱ). This game has a value function $ u^\varepsilon $ (see below for the precise definition) defined in $ \Omega $, which depends on the step size $ \varepsilon $. One of the main results in [5] is showing that, under an appropriate condition on $ \partial \Omega $, these value functions converge uniformly in $ \overline{\Omega} $ to a continuous limit $ u $ characterized as the unique viscosity solution to
$ \begin{equation} \left\{ \begin{array}{ll} P_j(D^2u) = \lambda_j (D^2u) = 0, \qquad & \mbox{ in } \Omega, \\[5pt] u = g , \qquad & \mbox{ on } \partial \Omega. \end{array} \right. \end{equation} $ | (1.2) |
The right-hand side $ f $ is obtained by considering a running payoff, that is, a nonnegative function $ f:\Omega \to \mathbb{R} $ such that $ \frac12\varepsilon^2 f(x_n) $ represents an amount paid by Player Ⅱ to Player Ⅰ when the token reaches $ x_n $. Then, the game value approximates viscosity solutions to
$ \begin{equation} \left\{ \begin{array}{ll} P_j(D^2u) = \lambda_j (D^2u) = f, \qquad & \mbox{ in } \Omega, \\[5pt] u = g , \qquad & \mbox{ on } \partial \Omega. \end{array} \right. \end{equation} $ | (1.3) |
Now, we introduce a new game that allows us to obtain a product of eigenvalues. Assume a list of eigenvalues $ (\lambda_{i_j})_{j = 1, \dots, k} $ is given. We consider the set
$ \begin{equation} I_\varepsilon^k = \Big\{(\alpha_{j})_{j = 1, \dots, k}\in\mathbb{R}^k: \prod\limits_{j = 1}^k \alpha_{j} = 1 \quad \text{ and }\quad 0 < \alpha_{j} < \phi^2(\varepsilon) \Big\}, \end{equation} $ | (1.4) |
where $ \phi(\varepsilon) $ is a positive function such that
$ \begin{equation} \lim\limits_{\varepsilon\to 0} \phi(\varepsilon) = \infty \qquad {{\rm{and}}} \qquad \lim\limits_{\varepsilon\to0} \varepsilon\, \phi(\varepsilon) = 0 \end{equation} $ | (1.5) |
(for example one can take $ \phi(\varepsilon) = \varepsilon^{-1/2} $). At every round, Player Ⅰ chooses $ k $ positive real numbers $ (\alpha_{j})_{j = 1, \dots, k}\in I_\varepsilon^k $, after which an index $ j\in\{1, \dots, k\} $ is selected uniformly at random. Then, the players play a round of the game "a random walk for $ \lambda_{i_j} $" described above. Specifically, Player Ⅰ chooses a subspace $ S $ of dimension $ i_j $, Player Ⅱ chooses a unitary vector $ v\in S $, and the token is moved to $ x\pm \varepsilon \sqrt{\alpha_{j}} v $ with equal probabilities. The running payoff at every round is given by $ \frac12\varepsilon^2 [f(x_n)]^{\frac{1}{k}} $ (Player Ⅱ pays this amount to Player Ⅰ) and the final payoff is given by $ g $ (if $ x_\tau $ denotes the first position outside $ \Omega $, the game ends and Player Ⅰ pays $ g(x_\tau) $ to Player Ⅱ).
Notice that in this game we adjust the length of the token jump according to the corresponding $ \alpha_{j} $, and Player Ⅰ may choose to enlarge the game steps associated with $ \lambda_{i} $ at the expense of shortening others (since the product of the $ \alpha_{j} $ must equal one). Also notice that the restriction $ \phi (\varepsilon) > \sqrt{\alpha_{j}} > 0 $ implies that the maximum step size is bounded as $ |x - (x\pm \varepsilon \sqrt{\alpha_{j}} v)| < \varepsilon\phi(\varepsilon)\to0 $ as $ \varepsilon \to 0 $.
When both players fix their strategies, $ S_I $ for the first player (a choice of $ (\alpha_{j})_{j = 1, \dots, k} $ and $ i_j- $dimensional subspaces $ S $ at every step of the game), and $ S_{II} $ for the second player (a choice of a unitary vector $ v $ in each possible subspace at every step of the game), then we can compute the expected outcome (the amount that Player Ⅱ receives) as
$ \mathbb{E}_{S_I, S_{II}}^{x_0} \left[g (x_\tau)-\frac12\varepsilon^2\sum\limits_{n = 0}^{\tau-1}[f(x_n)]^{\frac{1}{k}}\right]. $ |
Then, the value for Player Ⅰ of the game starting at any given $ x_0 \in \Omega $ is defined as
$ u^\varepsilon_ {{\rm{I}}}(x_0) = \inf\limits_{S_ {{\rm{I}}}}\sup\limits_{S_{ {{\rm{II}}}}}\, \mathbb{E}_{S_{ {{\rm{I}}}}, S_ {{\rm{II}}}}^{x_0} \left[g (x_\tau)-\frac12\varepsilon^2\sum\limits_{n = 0}^{\tau-1}[f(x_n)]^{\frac{1}{k}}\right], $ |
while the value for Player Ⅱ is
$ u^\varepsilon_ {{\rm{II}}}(x_0) = \sup\limits_{S_{ {{\rm{II}}}}}\inf\limits_{S_ {{\rm{I}}}}\, \mathbb{E}_{S_{ {{\rm{I}}}}, S_ {{\rm{II}}}}^{x_0} \left[g (x_\tau)-\frac12\varepsilon^2\sum\limits_{n = 0}^{\tau-1}[f(x_n)]^{\frac{1}{k}}\right]. $ |
When these two values coincide we say that the game has a value.
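To make the rules concrete, here is a minimal Python sketch of our own that simulates trajectories of the game and averages the resulting payoffs for one fixed, deliberately simple (and by no means optimal) pair of strategies: Player Ⅰ always takes $ \alpha_j = 1 $ and $ S = \operatorname{span}(e_1, \dots, e_{i_j}) $, and Player Ⅱ always answers with $ e_{i_j} $. The concrete $ \Omega $, $ f $ and $ g $ below are placeholders chosen only for illustration; with fixed strategies the average is just the expected outcome for that particular pair, not the game value.

```python
import numpy as np

def play_game(x0, indices, eps, f, g, inside, rng):
    """Simulate one trajectory of the game described above.

    indices : list (i_1, ..., i_k) of eigenvalue indices (1-based).
    inside  : predicate telling whether a point lies in Omega.
    Player I plays alpha_j = 1 and S = span(e_1, ..., e_{i_j});
    Player II answers with e_{i_j}, a unit vector in S (illustrative choices).
    Returns the total payoff g(x_tau) - (eps^2/2) * sum_n f(x_n)^(1/k).
    """
    k = len(indices)
    x = np.array(x0, dtype=float)
    running = 0.0
    while inside(x):
        running += f(x) ** (1.0 / k)
        j = rng.integers(k)               # index chosen uniformly at random
        alpha = 1.0                       # Player I: all alpha_j equal to 1
        v = np.zeros_like(x)
        v[indices[j] - 1] = 1.0           # Player II: the unit vector e_{i_j} in S
        sign = 1.0 if rng.random() < 0.5 else -1.0
        x = x + sign * eps * np.sqrt(alpha) * v
    return g(x) - 0.5 * eps ** 2 * running

# Example: Omega = unit ball in R^2, k = 2 with (i_1, i_2) = (1, 2), f = 1, g(x) = |x|^2.
rng = np.random.default_rng(0)
inside = lambda x: np.linalg.norm(x) < 1.0
payoffs = [play_game([0.3, 0.1], [1, 2], 0.05, lambda x: 1.0,
                     lambda x: float(x @ x), inside, rng) for _ in range(2000)]
print("Monte Carlo average payoff for these fixed strategies:", np.mean(payoffs))
```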
In our first result, we state that this game has a value and the value verifies an equation in $ \Omega $, called a Dynamic Programming Principle (DPP) in the literature.
Theorem 1. The game has a value
$ u^\varepsilon : = u_{I}^\varepsilon = u_{II}^\varepsilon, $ |
which is characterized as the unique solution to
$ \begin{equation} \left\{ \begin{array}{l} u^\varepsilon (x) = \inf\limits_{\alpha_{j}\in I_\varepsilon^k} \frac{1}{k}\sum\limits_{j = 1}^k \inf\limits_{ {\text{dim}}(S) = i_j} \sup\limits_{v\in S, |v| = 1} \left\{ \frac{1}{2} u^\varepsilon (x + \varepsilon \sqrt{\alpha_{j}} v) + \frac{1}{2} u^\varepsilon (x - \varepsilon \sqrt{\alpha_{j}} v) \right\} \\[5pt] \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \quad -\frac12\varepsilon^2 [f(x)]^{\frac{1}{k}} \qquad x \in \Omega, \\[10pt] u^\varepsilon (x) = g(x) \qquad x \not\in \Omega. \end{array} \right. \end{equation} $ | (1.6) |
Let us show intuitively why this holds. At each step, Player Ⅰ chooses $ (\alpha_{j})_{j = 1, \dots, k} $, and then $ j\in\{1, \dots, k\} $ is selected with probability $ \frac{1}{k} $. Player Ⅰ chooses a subspace $ S $ of dimension $ i_j $ and then Player Ⅱ chooses one unitary vector, $ v $, in the subspace $ S $. The token is then moved with probability $ \frac{1}{2} $ to $ x + \varepsilon \sqrt{\alpha_{j}} v $ or $ x - \varepsilon \sqrt{\alpha_{j}} v $. Finally, the expected payoff at $ x $ is given by $ -\frac12\varepsilon^2 [f(x)]^{\frac{1}{k}} $ (the running payoff) plus the expected payoff for the rest of the game. Then, the equation in (1.6) follows by considering all the possibilities (recall that Player Ⅰ seeks to minimize the expected payoff and Player Ⅱ to maximize it).
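As a quick sanity check of our own, the following sketch iterates the DPP to a fixed point in the simplest possible setting $ N = k = 1 $, where $ I_\varepsilon^1 = \{1\} $, the only admissible subspace is $ \mathbb{R} $ itself and (1.6) reduces to $ u^\varepsilon(x) = \tfrac12 u^\varepsilon(x+\varepsilon)+\tfrac12 u^\varepsilon(x-\varepsilon)-\tfrac12\varepsilon^2 f(x) $, a discretization of $ u'' = f $. The domain, data, grid and tolerance are arbitrary illustrative choices.

```python
import numpy as np

# Fixed-point iteration for the DPP in the case N = k = 1 on Omega = (0, 1):
#     u(x) = 0.5*u(x + eps) + 0.5*u(x - eps) - 0.5*eps**2 * f(x),
# with u = g outside Omega.  Here f = 2 and g = 0, so the limit PDE u'' = f
# has the explicit solution u(x) = x**2 - x, used below to check the output.
eps = 0.05
M = int(round(1.0 / eps))
x = np.linspace(-eps, 1.0 + eps, M + 3)       # grid of spacing eps, one layer outside Omega
interior = (x > 0.0) & (x < 1.0)

u = np.zeros_like(x)                           # g = 0 outside Omega is already in place
f = 2.0 * np.ones_like(x)
for _ in range(100000):
    unew = u.copy()
    unew[1:-1] = 0.5 * (u[2:] + u[:-2]) - 0.5 * eps**2 * f[1:-1]
    unew[~interior] = 0.0                      # keep the boundary values u = g
    if np.max(np.abs(unew - u)) < 1e-12:
        break
    u = unew

print("max |u_eps - (x^2 - x)| on the grid:",
      np.max(np.abs(u[interior] - (x**2 - x)[interior])))
```

In this special case the discrete value function coincides with $ x^2 - x $ at the grid nodes, so the printed error only reflects the iteration tolerance.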
Our next goal is to look for the limit as $ \varepsilon \to 0 $.
Theorem 2. Assume that $ \Omega $ is strictly convex. Let $ u^\varepsilon $ be the values of the game. We have
$ \begin{equation*} u^\varepsilon \to u \qquad {{as}}\qquad \varepsilon \to 0 \end{equation*} $ |
uniformly in $ \overline{\Omega} $, where $ u $ is the unique viscosity solution to (1.1).
We devote Section 3 to proving the theorems. The uniqueness statement in Theorem 2 follows from [8]; see Remark 5 below. We need the convexity of $ \partial \Omega $ to prove that the sequence converges by means of an Arzelà-Ascoli type lemma. In fact, for strictly convex domains, we show that for every point $ y\in\partial \Omega $, a game that starts near $ y $ ends nearby with high probability, regardless of the players' strategies. This allows us to obtain a sort of asymptotic equicontinuity near the boundary, which leads to uniform convergence in the whole $ \overline{\Omega} $. Note that, in general, the value functions $ u^\varepsilon $ are discontinuous in $ \Omega $ since we take discrete steps.
Observe that the result implies the existence of a solution to the PDE. The strict convexity of $ \Omega $ is needed if a solution to the equation $ \lambda_1 = 0 $ is to exist for every continuous boundary data; see [5]. Since our general setting includes this case, we require the strict convexity. However, in some cases, such as $ \lambda_2 = 0 $, this hypothesis may be relaxed; see [5].
Let us see intuitively why $ u $ is a solution to Eq (1.1). By subtracting $ u^\varepsilon (x) $ and dividing by $ \varepsilon^2 $ on both sides we get the term:
$ \frac{u^\varepsilon (x + \varepsilon \sqrt{\alpha_{j}} v) -2u^\varepsilon (x)+ u^\varepsilon (x - \varepsilon \sqrt{\alpha_{j}} v)} {\varepsilon^2} $ |
which in the limit approximates the second derivative of $ u $ in the direction of $ v $ multiplied by $ \alpha_{j} $. Hence, by the Courant-Fischer min-max principle, the expression
$ \inf\limits_{ \text{dim}(S) = i_j} \sup\limits_{v\in S, |v| = 1} \frac{u^\varepsilon (x + \varepsilon \sqrt{\alpha_{j}} v) -2u^\varepsilon (x)+ u^\varepsilon (x - \varepsilon \sqrt{\alpha_{j}} v)} {\varepsilon^2} $ |
approximates the $ i_j $-th eigenvalue of $ D^2 u (x) $, multiplied by $ \alpha_{j} $. Taking into account the running payoff, we obtain that, formally, $ u $ is a solution to
$ [f(x)]^{\frac{1}{k}} = \inf\limits_{\alpha_{j} > 0, \prod\limits_{j = 1}^k \alpha_{j} = 1} \frac{1}{k}\sum\limits_{j = 1}^k \alpha_{j} \lambda_{i_j}. $ |
Then, the result follows from the identity
$ \left(\beta_1 \beta_2 \dots \beta_k\right)^{\frac{1}{k}} = \inf\limits_{\alpha_{j} > 0, \prod\limits_{j = 1}^k \alpha_{j} = 1} \frac{1}{k}\sum\limits_{j = 1}^k \alpha_{j} \beta_{j} \qquad \text{whenever}\ \beta_1, \beta_2, \ldots, \beta_k \geq 0, $ |
which is proved in detail in Lemma 7 below.
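For $ k = 2 $ this identity is just the fact that a geometric mean is an optimized arithmetic mean: for $ \beta_1, \beta_2 > 0 $,

$ \inf\limits_{\alpha > 0} \frac{1}{2}\Big( \alpha\, \beta_1 + \frac{1}{\alpha}\, \beta_2 \Big) = \sqrt{\beta_1 \beta_2}, $

with the infimum attained at $ \alpha = \sqrt{\beta_2/\beta_1} $; taking $ \beta_i = \lambda_i(D^2u) $, this is how the two-dimensional Monge-Ampère operator $ \lambda_1\lambda_2 $ emerges from the averaged game steps.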
The Monge-Ampère equation. Notice that when all the eigenvalues are involved, we have a two-player game that approximates solutions to the Monge-Ampère equation $ \mbox{det} (D^2u) = f $. However, in the special case of the Monge-Ampère equation, we can also design a one-player game (a control problem) to approximate the solutions. This game is based on a recent asymptotic mean value formula that characterizes viscosity solutions to the Monge-Ampère equation. In fact, in [3], it is proved that $ u $ is a viscosity solution to the Monge-Ampère equation
$ \det (D^2 u (x)) = f(x) $ |
if and only if
$ u (x) = \inf\limits_{ V\in \mathbb{O}} \ \inf\limits_{\alpha_{i}\in I_\varepsilon^N} \left\{ \frac{1}{N} \sum\limits_{i = 1}^N \left[ \frac12 u (x + \varepsilon \sqrt{\alpha_i} v_i) + \frac12 u (x - \varepsilon \sqrt{\alpha_i} v_i) \right] \right\} -\frac{\varepsilon^2}{2} [f(x)]^{\frac{1}{N}} + o(\varepsilon^2) $ |
as $ \varepsilon \to 0 $, holds in the viscosity sense. Here we denoted by $ \mathbb{O} $ the set of all orthonormal bases $ V = \{v_1, \ldots, v_N\} $ of $ \mathbb{R}^N $, and $ I_\varepsilon^N $ is given by (1.4).
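As a quick consistency check of our own, take a convex quadratic $ u(x) = \tfrac12\langle Ax, x\rangle $ with $ A $ symmetric and positive definite. Then $ \tfrac12 u (x + \varepsilon \sqrt{\alpha_i} v_i) + \tfrac12 u (x - \varepsilon \sqrt{\alpha_i} v_i) = u(x) + \tfrac{\varepsilon^2}{2}\alpha_i \langle A v_i, v_i\rangle $ exactly, so the right-hand side of the formula above equals, for $ \varepsilon $ small,

$ u(x) + \frac{\varepsilon^2}{2}\left( \big(\det A\big)^{\frac{1}{N}} - [f(x)]^{\frac{1}{N}}\right) + o(\varepsilon^2), \qquad \text{since} \qquad \inf\limits_{ V\in \mathbb{O}} \ \inf\limits_{\alpha_{i}\in I_\varepsilon^N} \frac{1}{N} \sum\limits_{i = 1}^N \alpha_i \langle A v_i, v_i \rangle = \big(\det A\big)^{\frac{1}{N}} $

for $ \varepsilon $ small: the double infimum is attained at an orthonormal eigenbasis of $ A $ with $ \alpha_i = (\det A)^{1/N}/\lambda_i(A) $, combining Lemma 7 below with Hadamard's inequality $ \prod_{i} \langle A v_i, v_i\rangle \geq \det A $. Thus, for this $ u $, the asymptotic identity holds precisely when $ \det(D^2u) = f(x) $, as expected.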
Now, let us describe a one-player game (control problem). At each play, the player (controller), who aims to minimize the expected total payoff, chooses an orthonormal basis $ V = (v_1, ..., v_N) $ and coefficients $ (\alpha_{i})_{i = 1, \dots, N} \in I_\varepsilon^N $. Then, the token moves to one of the $ 2N $ points $ x \pm \varepsilon \sqrt{\alpha_i} v_i $, each with probability $ 1/(2N) $. In addition, there is a running payoff given by $ -\frac{\varepsilon^2}{2} [f(x)]^{\frac{1}{N}} $ at every play and a final payoff $ g(x) $ (as before, the game ends when the token leaves $ \Omega $). Then, the value of the game starting at any $ x_0 \in \Omega $ is given by
$ u^\varepsilon (x_0) = \inf\limits_{S_ {{\rm{I}}}} \mathbb{E}_{S_{ {{\rm{I}}}}}^{x_0} \left[g (x_\tau)-\frac12\varepsilon^2\sum\limits_{i = 0}^{\tau-1}[f(x_i)]^{\frac{1}{N}}\right]. $ |
Here, we take the infimum over all the possible player strategies.
For this game we have the following result.
Theorem 3. The game value $ u^\varepsilon $ is the unique solution to
$ \begin{equation*} \label{eq.dpp.MA} \left\{ \begin{array}{l} u^\varepsilon (x) = \inf\limits_{ V\in \mathbb{O}} \ \inf\limits_{\alpha_{i}\in I_\varepsilon^N} \left\{ \frac{1}{N} \sum\limits_{i = 1}^N \frac12 u^\varepsilon (x + \varepsilon \sqrt{\alpha_i} v_i) + \frac12 u^\varepsilon (x - \varepsilon \sqrt{\alpha_i} v_i) \right\} -\frac12\varepsilon^2 [f(x)]^{\frac{1}{N}} \\[5pt] \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \quad \qquad x \in \Omega, \\[10pt] u^\varepsilon (x) = g(x) \qquad x \not\in \Omega. \end{array} \right. \end{equation*} $ |
Moreover, if we assume that $ \Omega $ is strictly convex, then
$ \begin{equation*} u^\varepsilon \to u, \qquad {{as}} \;\varepsilon \to 0, \end{equation*} $ |
uniformly in $ \overline{\Omega} $, where $ u $ is the unique viscosity solution to
$ \begin{equation} \left\{ \begin{array}{ll} \det (D^2u) = \prod\limits_{i = 1}^N \lambda_i (D^2u) = f, \qquad & {{in}} \Omega, \\[5pt] u = g , \qquad & {{on}} \partial \Omega. \end{array} \right. \end{equation} $ | (1.7) |
Notice that here the strict convexity of the domain is a natural assumption for the solvability of (1.7); see [7,26].
The paper is organized as follows: in Section 2, we collect some preliminary results and include the definition of viscosity solutions. In Section 3, we prove our main results concerning the two-player game, Theorems 1 and 2. In Section 4, we include some details for the control problem for Monge-Ampère. Finally, in Section 5, we present a variant of the game for Monge-Ampère that involves an integral average in the corresponding DPP.
We begin by stating the definition of a viscosity solution to (1.1). Recall that $ f:\Omega\to\mathbb{R} $ is a non-negative function. Following [8] (see also [14]), we have that the equation is associated with the cone
$ F = \left\{ M \, : \, \prod\limits_{j = 1}^k \lambda_{i_j} (M) \geq f\text{ and } \lambda_{i_j} (M) \geq 0\, \mbox{ for every } j = 1, \dots, k\right\}. $ |
Here we do not state the definition of viscosity solutions with an explicit reference to the cone, but we prefer to present it using the usual notation; see [9].
We say that $ P $ is a paraboloid if for every $ x, x_0\in\mathbb{R}^N $ we have
$ P(x) = P(x_0)+\langle x-x_0, \nabla P(x_0) \rangle + \frac{1}{2} \langle D^2 P (x_0)(x-x_0), x-x_0\rangle. $ |
Definition 4. A continuous function $ u $ verifies
$ \prod\limits_{j = 1}^k \lambda_{i_j} (D^2 u) = f $ |
in $ \Omega $ in the viscosity sense if
$ 1) $ for every paraboloid $ \phi $ that touches $ u $ from below at $ x_0\in\Omega $ ($ \phi(x_0) = u(x_0) $ and $ \phi\leq u $), and with eigenvalues of the Hessian that verify $ \lambda_{i_j} (D^2\phi (x_0)) \geq 0 $, we have
$ \prod\limits_{j = 1}^k \lambda_{i_j} (D^2\phi(x_0)) \leq f(x_0). $ |
$ 2) $ for every paraboloid $ \psi $ that touches $ u $ from above at $ x_0\in\Omega $ ($ \psi(x_0) = u(x_0) $ and $ \psi\geq u $), we have $ \lambda_{i_j} (D^2\psi(x_0))\geq 0 $ for $ j = 1, \dots, k $ and
$ \prod\limits_{j = 1}^k \lambda_{i_j} (D^2\psi(x_0)) \geq f(x_0) . $ |
Remark 5. The validity of the comparison principle for our equation follows from Theorem 4.9 in [8]. In fact, the map $ \Theta: \Omega \to S^{N\times N} $ given by
$ \Theta(x) = \left\{ M \, : \, \prod\limits_{j = 1}^k \lambda_{i_j} (M) \geq f(x)\text{ and } \lambda_{i_j} (M) \geq 0\, \mbox{ for every } j = 1, \dots, k\right\} $ |
is uniformly upper semicontinuous (since $ f $ is continuous) and is elliptic (since the operator is monotone on nonnegative matrices and $ f\geq 0 $).
Also observe that the concept of $ \Theta $-subharmonic/superharmonic is equivalent to the definition of subsolution/supersolution that we have given. In fact, if we consider
$ \Phi = \left\{M:\lambda_{i_j}(M)\geq 0\, \mbox{ for every } j = 1, \dots, k\right\} $ |
Then, we have
$ \Theta(x) = \{M\in\Phi: F(M, x)\geq 0\} $ |
where $ F $ is the operator that we are considering here, that is,
$ F(M, x) = \prod\limits_{j = 1}^k \lambda_{i_j}(M)- f(x). $ |
Therefore, the equivalence between being $ \Theta $-subharmonic/superharmonic and being subsolution/supersolution follows from Proposition 2.11 in [8].
To obtain a convergent subsequence $ u^\varepsilon \to u $ we will require $ \Omega $ to be strictly convex. Here this means that for every pair of distinct points $ x, y\in\overline\Omega $ we have $ tx+(1-t)y\in\Omega $ for all $ 0 < t < 1 $. In particular, in Lemma 15 we will use a geometric condition on $ \Omega $ equivalent to the strict convexity. We prove the equivalence between these two notions of strict convexity in the following lemma, which is a variation of the classical supporting hyperplane theorem.
Lemma 6. Given an open non-empty bounded set $ \Omega\subset\mathbb{R}^N $, the following statements are equivalent:
$ 1) $ $ \Omega $ is strictly convex (i.e., for every pair of distinct points $ x, y\in\overline\Omega $, $ tx+(1-t)y\in\Omega $ for all $ 0 < t < 1 $).
$ 2) $ Given $ y\in\partial\Omega $ there exists $ w\in\mathbb{R}^N $ of norm 1 such that $ \langle w, x-y\rangle > 0 $ for every $ x\in\overline\Omega\setminus\{y\} $.
$ 3) $ Given $ y\in\partial\Omega $ there exists $ w\in\mathbb{R}^N $ of norm 1 such that for every $ \delta > 0 $ there exists $ \theta > 0 $ such that
$ \begin{equation*} \{x\in\Omega: \langle w, x-y\rangle < \theta\}\subset B_\delta(y). \end{equation*} $ |
Moreover, the vector $ w $ in statements (2) and (3) is the same one.
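For instance, if $ \Omega = B_1(0) $ and $ y\in\partial B_1(0) $, statement (2) holds with $ w = -y $: for every $ x\in\overline{B_1(0)}\setminus\{y\} $ we have $ \langle -y, x-y\rangle = 1-\langle x, y\rangle > 0 $ by the Cauchy-Schwarz inequality (equality $ \langle x, y\rangle = 1 $ forces $ x = y $).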
Proof. (1) $ \implies $ (2): We consider $ y_k\in\mathbb{R}^N \setminus\overline\Omega $ such that $ y_k\to y $ and $ z_k $ the projection of $ y_k $ onto $ \overline\Omega $ (which exists since $ \Omega $ is convex). We define the vectors
$ w_k = \frac{z_k-y_k}{\|z_k-y_k\|}. $ |
Up to a subsequence, we may assume that $ w_k\to w\in\mathbb{R}^N $. We have that
$ \langle z_k-y_k, x-z_k \rangle \geq 0 $ |
for every $ x\in\overline\Omega $. Hence, for every $ x\in\overline\Omega $, we have
$ \langle w_k, x \rangle \geq \langle w_k, z_k \rangle = \langle w_k, z_k-y_k\rangle+ \langle w_k, y_k \rangle > \langle w_k, y_k \rangle. $ |
Passing to the limit we get
$ \langle w, x-y\rangle\geq 0 $ |
for every $ x\in\overline\Omega $.
It remains to prove that $ \langle w, x-y\rangle\neq 0 $ when $ x\neq y $. Suppose, for the sake of contradiction, that $ \langle w, x-y\rangle = 0 $ for some $ x\neq y $. By the strict convexity of $ \Omega $, we have $ \frac{x+y}{2}\in\Omega $. Hence, $ \frac{x+y}{2}-\varepsilon w\in\Omega $ for $ \varepsilon $ small enough and
$ \Big\langle w, \Big(\frac{x+y}{2}-\varepsilon w\Big)-y\Big\rangle = \langle w, \frac{x-y}{2}-\varepsilon w\rangle = -\varepsilon < 0, $ |
a contradiction.
(2) $ \implies $ (3): Given $ \delta $, we consider $ h:\overline\Omega\setminus B_\delta(y)\to \mathbb{R} $ given by
$ h(x) = \langle w, x-y\rangle. $ |
Since $ h $ is continuous and is defined on a compact set, it attains its minimum. We consider $ \theta = \min_{\overline\Omega\setminus B_\delta(y)} h $, which is positive since $ \langle w, x-y\rangle > 0 $ for every $ x\in\overline\Omega\setminus\{y\} $. We have that $ \langle w, x-y\rangle\geq \theta $ for every $ x\in \overline\Omega\setminus B_\delta(y) $, and the result follows.
(3) $ \implies $ (2): Given $ x\in\overline\Omega\setminus\{y\} $, we consider $ \delta = \frac{ \operatorname{dist}(x, y)}{2} > 0 $, the corresponding $ \theta > 0 $ given by statement (3), and $ x_k\in\Omega\setminus B_\delta(y) $ such that $ x_k\to x $ (this is possible since $ \operatorname{dist}(x, y) = 2\delta $). Since $ x_k\in\Omega $ and $ x_k\not\in B_\delta(y) $, we have $ \langle w, x_k-y\rangle\geq \theta $. Hence $ \langle w, x-y\rangle\geq\theta > 0 $.
(2) $ \implies $ (1): We consider the set
$ C = \bigcap\limits_{y\in\partial\Omega}\{x\in\mathbb{R}^N:\langle w_y, x-y\rangle\geq 0\} $ |
where $ w_y $ stands for the $ w $ given by statement (2) for each $ y\in \partial \Omega $. Since $ C $ is the intersection of convex sets, it is also convex. We want to prove that $ \overline\Omega $ is convex by proving that it is equal to $ C $. It is clear that $ \overline\Omega\subset C $, let us show that if $ z\not\in\overline\Omega $ then $ z\not\in C $. We fix $ x_0\in\Omega $. Given $ z\not\in\overline\Omega $, there exists $ t\in(0, 1) $ such that $ y = tz+(1-t)x_0\in\partial\Omega $. We know that $ \langle w_y, x_0-y\rangle > 0 $ since $ x_0\in\Omega $. Hence $ \langle w_y, z-y\rangle < 0 $ and $ z\not\in C $.
It remains to prove that the convexity is strict. Given $ x, y\in\overline\Omega $ with $ x\neq y $, we know that
$ tx+(1-t)y\in\overline\Omega $ |
for all $ 0 < t < 1 $. We want to prove that $ tx+(1-t)y\in\Omega $, that is, $ tx+(1-t)y\not\in\partial\Omega $. Suppose, arguing again by contradiction, that $ z = tx+(1-t)y\in\partial\Omega $ for some $ 0 < t < 1 $. Then, $ \langle w_z, x-z\rangle > 0 $ and $ \langle w_z, y-z\rangle > 0 $, and this implies that $ 0 < \langle w_z, (tx+(1-t)y)-z\rangle = 0 $, which is a contradiction.
The idea that allows us to obtain the product of the eigenvalues relies on the following formula.
Lemma 7. Given $ \beta_1, \beta_2, \dots, \beta_k\geq 0 $, we have
$ \left(\beta_1 \beta_2 \dots \beta_k\right)^{\frac{1}{k}} = \inf\limits_{\alpha_{j} > 0, \prod\limits_{j = 1}^k \alpha_{j} = 1} \frac{1}{k}\sum\limits_{j = 1}^k \alpha_{j} \beta_{j}. $ |
Proof. The inequality
$ \left(\beta_1 \beta_2 \dots \beta_k\right)^{\frac{1}{k}} \leq \inf\limits_{\alpha_{j} > 0, \prod\limits_{j = 1}^k \alpha_{j} = 1} \frac{1}{k}\sum\limits_{j = 1}^k \alpha_{j} \beta_{j}. $ |
follows from the arithmetic-geometric mean inequality.
In the case that all the $ \beta_i $ are strictly positive, the equality is attained for
$ \alpha_i = \frac{\left(\beta_1 \beta_2 \dots \beta_k\right)^{\frac{1}{k}}}{\beta_i}, $ |
since these weights satisfy $ \prod_{i = 1}^k \alpha_i = 1 $ and give $ \frac{1}{k}\sum_{i = 1}^k \alpha_i \beta_i = \left(\beta_1 \beta_2 \dots \beta_k\right)^{\frac{1}{k}} $.
In the case that $ \beta_i = 0 $ for some $ i $ (and $ k > 1 $; for $ k = 1 $ the identity is trivial), we have to show that the infimum is zero. For that, we consider $ \alpha_i = n^{k-1} $ and $ \alpha_j = \frac{1}{n} $ for $ j\neq i $, which gives
$ \lim\limits_{n\to\infty} \frac{1}{k}\sum\limits_{j = 1}^k \alpha_{j} \beta_{j} = 0, $ |
as desired.
Let us recall that in our setting we have the extra restriction $ \alpha_i < \phi^2(\varepsilon) $. To handle this issue we require the following two lemmas.
Lemma 8. Given $ \beta_1, \beta_2, \dots, \beta_k\geq 0 $, it holds
$ \left(\beta_1 \beta_2 \dots \beta_k\right)^{\frac{1}{k}} = \lim\limits_{\varepsilon\to 0}\inf\limits_{\alpha_j\in I_\varepsilon^k} \frac{1}{k}\sum\limits_{j = 1}^k \alpha_{j} \beta_{j}, $ |
where $ I_\varepsilon^k $ is given by (1.4).
Proof. If $ \beta_1, \beta_2, \dots, \beta_k > 0 $, for $ \varepsilon $ small enough such that
$ \phi^2(\varepsilon) > \frac{\left(\beta_1 \beta_2 \dots \beta_k\right)^{\frac{1}{k}}}{\beta_i} $ |
for all $ i $ (so that the minimizing weights from Lemma 7 belong to $ I_\varepsilon^k $), we have
$ \inf\limits_{\alpha_j\in I_\varepsilon^k} \frac{1}{k}\sum\limits_{j = 1}^k \alpha_{j} \beta_{j} = \left(\beta_1 \beta_2 \dots \beta_k\right)^{\frac{1}{k}}, $ |
and the result follows.
Now we consider the case where $ \beta_i = 0 $ for some $ i = 1, \dots, k $ and $ k > 1 $ (if $ k = 1 $ the result is obvious). We take
$ \alpha_i = \frac{\phi(\varepsilon)}{2} \text{ and } \alpha_j = \left(\frac{2}{\phi(\varepsilon)}\right)^{\frac{1}{k-1}}. $ |
for $ j\neq i $. We have
$ \inf\limits_{\alpha_j\in I_\varepsilon^k} \frac{1}{k}\sum\limits_{j = 1}^k \alpha_{j} \beta_{j} \leq \frac{1}{k}\sum\limits_{j\neq i} \left(\frac{2}{\phi(\varepsilon)}\right)^{\frac{1}{k-1}} \beta_{j}. $ |
Since, by Lemma 7, the infimum is bounded below by $ \left(\beta_1 \beta_2 \dots \beta_k\right)^{\frac{1}{k}} = 0 $, by taking the limit as $ \varepsilon \to 0 $ we conclude the proof.
Lemma 9. Given $ \beta_1, \beta_2, \dots, \beta_k\in \mathbb{R} $ with $ \beta_i < 0 $ for some $ i = 1, \dots, k $, it holds
$ \lim\limits_{\varepsilon\to 0}\inf\limits_{\alpha_j\in I_\varepsilon^k} \frac{1}{k}\sum\limits_{j = 1}^k \alpha_{j} \beta_{j} < 0. $ |
Proof. If $ k = 1 $ we have
$ \lim\limits_{\varepsilon\to 0}\inf\limits_{\alpha_j\in I_\varepsilon^k} \frac{1}{k}\sum\limits_{j = 1}^k \alpha_{j} \beta_{j} = \beta_{1} < 0. $ |
If $ k > 1 $, we take
$ \alpha_i = \frac{\phi(\varepsilon)}{2} \text{ and } \alpha_j = \left(\frac{2}{\phi(\varepsilon)}\right)^{\frac{1}{k-1}} $ |
for $ j\neq i $, and in the limit we get $ -\infty $, since $ \alpha_i\beta_i\to-\infty $ while the remaining terms tend to zero.
In this section, we describe in detail the two-player zero-sum game presented in the introduction. Let $ \Omega \subset\mathbb{R}^N $ be a bounded open set and fix $ \varepsilon > 0 $. The values $ k $ and $ (i_j)_{j = 1, \dots, k} $ are given along with a positive function $ \phi(\varepsilon) $ such that
$ \lim\limits_{\varepsilon\to 0} \phi(\varepsilon) = \infty \qquad {\rm{and}} \qquad \lim\limits_{\varepsilon\to0} \varepsilon\, \phi(\varepsilon) = 0. $ |
A token is placed at $ x_0\in\Omega $ and the game begins with Player Ⅰ choosing $ (\alpha_{j})_{j = 1, \dots, k}\in I_\varepsilon^k $ where
$ I_\varepsilon^k = \Big\{(\alpha_{j})_{j = 1, \dots, k}\in\mathbb{R}^k: \prod\limits_{j = 1}^k \alpha_{j} = 1 \quad \text{ and }\quad 0 < \alpha_{j} < \phi^2(\varepsilon) \Big\}. $ |
Then, $ j\in\{1, \dots, k\} $ is selected uniformly at random. Given the value of $ j $, Player Ⅰ chooses a subspace $ S $ of dimension $ i_j $ and then Player Ⅱ chooses one unitary vector $ v\in S $. Then, the token is moved to $ x\pm \varepsilon \sqrt{\alpha_{j}} v $ with equal probabilities. After the first round, the game continues from $ x_1 $ according to the same rules.
This procedure yields a possibly infinite sequence of game states $ x_0, x_1, \ldots $ where every $ x_n $ is a random variable. The game ends when the token leaves $ \Omega $. At this point the token will be in the boundary strip of width $ \varepsilon\phi(\varepsilon) $ given by
$ \begin{split}\Gamma_{\varepsilon\phi(\varepsilon)} = \Big\{x\in {\mathbb{R}}^N \setminus \Omega \, :\, \operatorname{dist}(x, \partial \Omega )\leq \varepsilon\phi(\varepsilon)\Big\}. \end{split} $ |
We denote by $ x_\tau \in \Gamma_{\varepsilon\phi(\varepsilon)} $ the first point in the sequence of game states that lies in $ \Gamma_{\varepsilon\phi(\varepsilon)} $. In other words, $ \tau $ is the first time we hit $ \Gamma_{\varepsilon\phi(\varepsilon)} $ ($ \tau $ is a stopping time for this game). The payoff is determined by two given functions: $ g:\mathbb{R}^N\setminus \Omega \to \mathbb{R} $, the final payoff function, and $ f:\Omega \to \mathbb{R} $, the running payoff function. We require $ g $ to be continuous and $ f $ to be uniformly continuous, and both to be bounded. When the game ends, the total payoff is given by
$ g (x_\tau) -\frac12 \varepsilon^2 \sum\limits_{l = 0}^{\tau -1} [f (x_l)]^{\frac{1}{k}}. $ |
Player Ⅱ earns this amount and Player Ⅰ loses it (Player Ⅰ earns $ -g (x_\tau) +\frac12 \varepsilon^2 \sum_{l = 0}^{\tau -1} [f (x_l)]^{\frac{1}{k}} $).
A strategy $ S_ {\rm{I}} $ for Player Ⅰ is a function defined on the partial histories that gives the values of $ \alpha_{j} $ at every step of the game
$ S_ {\rm{I}}{\left(x_0, x_1, \ldots, x_n\right)} = (\alpha_{j})_{j = 1, \dots, k}\in I_\varepsilon^k $ |
and that for a given value of $ j $ returns a $ i_j- $dimensional subspace $ S $
$ S_ {\rm{I}}{\left(x_0, x_1, \ldots, x_n, j\right)} = S. $ |
We call $ S_ {\rm{I}} $ both functions to avoid overloading the notation. A strategy $ S_ {\rm{II}} $ for Player Ⅱ is a function defined on the partial histories that gives a unitary vector in a prescribed subspace $ S $ at every step of the game
$ S_ {\rm{II}}{\left(x_0, x_1, \ldots, x_n, S, \alpha_{j}\right)} = v\in S. $ |
When the two players fix their strategies $ S_I $ and $ S_{II} $ we can compute the expected outcome as follows: Given the sequence $ x_0, \ldots, x_n $ with $ x_j\in\Omega $ the next game position is distributed according to the probability
$ \pi_{S_ {\rm{I}}, S_ {\rm{II}}}(x_0, \ldots, x_n, {A}) = \frac{1}{2k} \sum\limits_{j = 1}^k \Big( \delta_{x_n+\varepsilon \sqrt{\alpha_{j}} v_j}(A)+ \delta_{x_n-\varepsilon \sqrt{\alpha_{j}} v_j}(A) \Big), $ |
where
$ (\alpha_{j})_{j = 1, \dots, k} = S_ {\rm{I}}{\left(x_0, x_1, \ldots, x_n\right)} $ |
and
$ v_j = S_ {\rm{II}}(x_0, \ldots, x_n, S_ {\rm{I}}(x_0, \ldots, x_n, j), \alpha_{j}). $ |
By using Kolmogorov's extension theorem and the one-step transition probabilities, we can build a probability measure $ \mathbb{P}^{x_0}_{S_ {\rm{I}}, S_ {\rm{II}}} $ on the space $ H^\infty $ of game histories. We denote by $ \mathbb{E}_{S_{ {\rm{I}}}, S_ {\rm{II}}}^{x_0} $ the corresponding expectation. Then, when starting from $ x_0 $ and using the strategies $ S_ {\rm{I}}, S_ {\rm{II}} $, the expected payoff is
$ \begin{equation} \mathbb{E}_{S_{ {\rm{I}}}, S_ {\rm{II}}}^{x_0}\left[g (x_\tau) -\frac12 \varepsilon^2 \sum\limits_{l = 0}^{\tau -1} [f (x_l)]^{\frac{1}{k}}\right]. \end{equation} $ | (3.1) |
The value of the game for Player I is given by
$ u^\varepsilon_ {\rm{I}}(x_0) = \inf\limits_{S_ {\rm{I}}}\sup\limits_{S_{ {\rm{II}}}}\, \mathbb{E}_{S_{ {\rm{I}}}, S_ {\rm{II}}}^{x_0}\left[g (x_\tau) -\frac12 \varepsilon^2 \sum\limits_{l = 0}^{\tau -1} [f (x_l)]^{\frac{1}{k}}\right], $ |
while the value of the game for Player II is given by
$ u^\varepsilon_ {\rm{II}}(x_0) = \sup\limits_{S_{ {\rm{II}}}}\inf\limits_{S_ {\rm{I}}}\, \mathbb{E}_{S_{ {\rm{I}}}, S_ {\rm{II}}}^{x_0}\left[g (x_\tau) -\frac12 \varepsilon^2 \sum\limits_{l = 0}^{\tau -1} [ f (x_l)]^{\frac{1}{k}}\right]. $ |
Intuitively, the values $ u_ {\rm{I}}^\varepsilon(x_0) $ and $ u_ {\rm{II}}^\varepsilon(x_0) $ are the best expected outcomes each player can guarantee when the game starts at $ x_0 $. If $ u^\varepsilon_ {\rm{I}} = u^\varepsilon_ {\rm{II}} $, we say that the game has a value and we denote it by $ u^\varepsilon: = u^\varepsilon_ {\rm{I}} = u^\varepsilon_ {\rm{II}} $.
Let us observe that the game ends almost surely, and therefore the expectation (3.1) is well defined. Let us be more precise at this point. Consider the square of the distance from the token to $ x_0 $: at every step, this quantity increases by at least $ \varepsilon^2 $ with probability $ \frac{1}{2k} $ (a value of $ j $ such that $ \alpha_{j}\geq 1 $ is selected with probability at least $ \frac{1}{k} $, and given a direction $ v $, for at least one of the two points $ x_n\pm\varepsilon \sqrt{\alpha_{j}}v $ the squared distance to $ x_0 $ is at least $ \varepsilon^2 $ larger than that of $ x_n $). As the distance to $ x_0 $ is bounded (since we assumed that $ \Omega $ is bounded), with a positive probability the game ends after a finite number of steps. Iterating this argument we get that the game ends almost surely. See Lemma 13 for details concerning this argument.
To see that the game has a value, we first observe that we have existence of $ u^\varepsilon $, a function that satisfies the DPP. The existence of such a function can be seen by Perron's method. In fact, the operator given by the right-hand side of the DPP, that is,
$ u \mapsto \inf\limits_{\alpha_{j}\in I_\varepsilon^k} \frac{1}{k}\sum\limits_{j = 1}^k \inf\limits_{ \text{dim}(S) = i_j} \sup\limits_{v\in S, |v| = 1} \left\{ \frac{1}{2} u (x + \varepsilon \sqrt{\alpha_{j}} v) + \frac{1}{2} u (x - \varepsilon \sqrt{\alpha_{j}} v) \right\} -\frac12\varepsilon^2 [f(x)]^{\frac{1}{k}} , $ |
is in the hypotheses of the main result of [17].
Now, concerning the value functions of our game, we know that $ u^\varepsilon_ {\rm{I}}\geq u^\varepsilon_ {\rm{II}} $ (this is immediate from their definition). Hence, to obtain the desired result, it is enough to show that $ u^\varepsilon\geq u^\varepsilon_ {\rm{I}} $ and $ u^\varepsilon_ {\rm{II}} \geq u^\varepsilon $.
Given $ \eta > 0 $, we can consider the strategy $ S_ {\rm{II}}^0 $ for Player Ⅱ that at every step almost maximizes $ u^\varepsilon (x_n + \varepsilon \sqrt{\alpha_{j}} v) + u^\varepsilon (x_n - \varepsilon \sqrt{\alpha_{j}} v) $, that is,
$ S_ {\rm{II}}^0{\left(x_0, x_1, \ldots, x_n, S, \alpha_{j}\right)} = w\in S $ |
such that
$ \begin{split} \frac{1}{2} u^\varepsilon (x_n + \varepsilon \sqrt{\alpha_{j}} w) + \frac{1}{2} u^\varepsilon (x_n - \varepsilon \sqrt{\alpha_{j}} w) \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\\ \geq \sup\limits_{ v\in S, |v| = 1} \left\{ \frac{1}{2} u^\varepsilon (x_n + \varepsilon \sqrt{\alpha_{j}} v) + \frac{1}{2} u^\varepsilon (x_n - \varepsilon \sqrt{\alpha_{j}} v) \right\} -\eta 2^{-(n+1)} . \end{split} $ |
With this choice, for any strategy $ S_ {\rm{I}} $ of Player Ⅰ we have
$ \begin{split} &\mathbb{E}_{S_ {\rm{I}}, S^0_ {\rm{II}}}^{x_0}\Big[u^\varepsilon(x_{n+1})-\frac12\varepsilon^2\sum\limits_{l = 0}^{n}[f(x_l)]^{\frac{1}{k}}-\eta 2^{-(n+1)}\,\Big|\, x_0, \ldots, x_n\Big] \\ &\geq \inf\limits_{\alpha_{j}\in I_\varepsilon^k} \frac{1}{k}\sum\limits_{j = 1}^k \inf\limits_{ \text{dim}(S) = i_j} \sup\limits_{v\in S, |v| = 1} \left\{ \frac{1}{2} u^\varepsilon (x_n + \varepsilon \sqrt{\alpha_{j}} v) + \frac{1}{2} u^\varepsilon (x_n - \varepsilon \sqrt{\alpha_{j}} v) \right\} \\ &\qquad\qquad -\eta 2^{-(n+1)}-\frac12\varepsilon^2\sum\limits_{l = 0}^{n}[f(x_l)]^{\frac{1}{k}}-\eta 2^{-(n+1)} \\ &= u^\varepsilon(x_n)+\frac12\varepsilon^2[f(x_n)]^{\frac{1}{k}}-\frac12\varepsilon^2\sum\limits_{l = 0}^{n}[f(x_l)]^{\frac{1}{k}}-\eta 2^{-n} \\ &= u^\varepsilon(x_n)-\frac12\varepsilon^2\sum\limits_{l = 0}^{n-1}[f(x_l)]^{\frac{1}{k}}-\eta 2^{-n}, \end{split} $ |
where we have bounded the choice of Player Ⅰ by the infimum and then used the DPP (1.6). Then,
$ M_n = u^\varepsilon(x_n)-\frac12 \varepsilon^2\sum\limits_{l = 0}^{n-1}[f(x_l)]^{\frac{1}{k}}-\eta2^{-n} $ |
is a submartingale.
Now, we have
$ \begin{equation*} \begin{split} u^\varepsilon_ {\rm{II}}(x_0) & = \sup\limits_{S_ {\rm{II}}}\inf\limits_{S_{ {\rm{I}}}}\, \mathbb{E}_{S_{ {\rm{I}}}, S_ {\rm{II}}}^{x_0}\left[g(x_\tau)-\frac12\varepsilon^2\sum\limits_{n = 0}^{\tau-1}[f(x_n)]^{\frac{1}{k}}\right]\\ &\geq\inf\limits_{S_{ {\rm{I}}}}\, \mathbb{E}_{S_{ {\rm{I}}}, S^0_ {\rm{II}}}^{x_0}\left[g(x_\tau)-\frac12\varepsilon^2\sum\limits_{n = 0}^{\tau-1}[f(x_n)]^{\frac{1}{k}}-\eta 2^{-\tau}\right]\\ &\geq \inf\limits_{S_ {\rm{I}}} \liminf\limits_{m\to\infty}\mathbb{E}_{S_{ {\rm{I}}}, S^0_ {\rm{II}}}^{x_0}[M_{\tau\wedge m}]\\ &\geq \inf\limits_{S_ {\rm{I}}}\mathbb{E}_{S_{ {\rm{I}}}, S^0_ {\rm{II}}}^{x_0}[M_0] = u^\varepsilon(x_0)-\eta, \end{split} \end{equation*} $ |
where $ \tau\wedge m = \min(\tau, m) $, and we used that $ u^\varepsilon(x_\tau) = g(x_\tau) $ together with the optional stopping theorem for the submartingale $ M_n $. Since $ \eta $ is arbitrary this proves that $ u^\varepsilon_ {\rm{II}} \geq u^\varepsilon $. An analogous strategy (one that almost minimizes the expected value at every step) can be considered for Player Ⅰ to prove that $ u^\varepsilon\geq u^\varepsilon_ {\rm{I}} $.
We have proved that any solution to the DPP coincides with the game value; since a solution exists (by the discussion above), the game value itself satisfies the DPP, and any other solution coincides with it, so uniqueness follows. This proves Theorem 1.
Remark 10. From our argument it can be deduced that
$ u^\varepsilon(x_0) = \sup\limits_{S_ {\rm{II}}}\inf\limits_{S_{ {\rm{I}}}}\, \mathbb{E}_{S_{ {\rm{I}}}, S_ {\rm{II}}}^{x_0}\left[u^\varepsilon(x_{\tilde\tau})-\frac12\varepsilon^2\sum\limits_{n = 0}^{\tilde\tau-1}[f(x_n)]^{\frac{1}{k}}\right] $ |
for any stopping time $ \tilde\tau\leq \tau $. That is, as long as the game has not ended, we can split the expected payoff into the expected running payoff already accumulated, $ -\frac12\varepsilon^2\sum_{n = 0}^{\tilde\tau-1}[f(x_n)]^{\frac{1}{k}} $, and the expected payoff for the rest of the game, $ u^\varepsilon(x_{\tilde\tau}) $.
Remark 11. We have a comparison principle for solutions to the DPP. Assume that $ f_1 \geq f_2 $ and that $ g_1 \leq g_2 $; then the corresponding solutions verify
$ u_1^\varepsilon \leq u_2^\varepsilon $ |
in $ \Omega $. In terms of the game this is quite intuitive: playing with $ g_1 $ and $ f_1 $, Player Ⅱ receives a smaller final payoff and pays a larger running payoff than playing with $ g_2 $ and $ f_2 $.
Our next aim is to pass to the limit in the values of the game. We prove that, along a subsequence,
$ u^\varepsilon \to u, \qquad \mbox{as } \varepsilon \to 0 $ |
uniformly in $ \overline{\Omega} $, and that the limit $ u $ is a viscosity solution to (1.1).
To obtain a convergent subsequence $ u^\varepsilon \to u $ we will use the following Arzelà-Ascoli type lemma. For its proof, see Lemma 4.2 in [21].
Lemma 12. Let $ \{u^\varepsilon : \overline{\Omega} \to \mathbb{R}, \ \varepsilon > 0\} $ be a set of functions such that
1) there exists $ C > 0 $ such that $ \left| {u^\varepsilon (x)} \right| < C $ for every $ \varepsilon > 0 $ and every $ x \in \overline{\Omega} $,
2) given $ \eta > 0 $ there are constants $ r_0 $ and $ \varepsilon_0 $ such that for every $ \varepsilon < \varepsilon_0 $ and any $ x, y \in \overline{\Omega} $ with $ |x - y | < r_0 $ it holds
$ |u^\varepsilon (x) - u^\varepsilon (y)| < \eta. $ |
Then, there exists a uniformly continuous function $ u: \overline{\Omega} \to \mathbb{R} $ and a subsequence still denoted by $ \{u^\varepsilon \} $ such that
$ \begin{split} u^{\varepsilon}\to u \qquad \;{ uniformly\; in}\quad\overline{\Omega}, \end{split} $ |
as $ \varepsilon\to 0 $.
Our task now is to show that the family $ u^\varepsilon $ satisfies the hypotheses of the previous lemma.
Lemma 13. There exists $ C = C(\Omega) > 0 $ such that
$ \mathbb{E}_{S_{ {\rm{I}}}, S_ {\rm{II}}}^{x_0} [\tau]\leq C \varepsilon^{-2} $ |
for every $ \varepsilon > 0 $, $ S_I $, $ S_{II} $ and $ x_0 \in \Omega $.
Proof. Here we write $ {\mathbb E} $ for $ \mathbb{E}_{S_{ {\rm{I}}}, S_ {\rm{II}}}^{x_0} $. We consider $ R > 0 $ such that $ \Omega\subset B_R(0) $ and, for $ \varepsilon $ small, also $ \Gamma_{\varepsilon\phi(\varepsilon)}\subset B_R(0) $ (this is possible since $ \varepsilon\phi(\varepsilon)\to 0 $, so every game position, including the final one, lies in $ B_R(0) $), and we set $ M_n = \|x_n\|^2 $. Given that the token lies at $ x_n $, we have that
$ \begin{split} {\mathbb E}[M_{n+1}|x_n]& = \frac{1}{2k}\sum\limits_{j = 1}^k \Big( \|x_n+\varepsilon\sqrt{\alpha_{j}}\,v_j\|^2+\|x_n-\varepsilon\sqrt{\alpha_{j}}\,v_j\|^2 \Big)\\ & = \frac{1}{k}\sum\limits_{j = 1}^k \Big( \|x_n\|^2+\varepsilon^2\alpha_{j}\|v_j\|^2 \Big)\\ & = \|x_n\|^2+\varepsilon^2\frac{1}{k}\sum\limits_{j = 1}^k \alpha_{j}\geq \|x_n\|^2+\varepsilon^2\prod\limits_{j = 1}^k \alpha_{j}\\ & = \|x_n\|^2+\varepsilon^2 = M_n+\varepsilon^2, \end{split} $ |
where $ v_j $ stands for the selected unitary vector when that value of $ j $ is chosen.
We have obtained
$ {\mathbb E}[M_{n+1}|M_n]\geq M_n+\varepsilon^2. $ |
Hence,
$ M_n-n\varepsilon^2 $ |
is a submartingale. According to the optional stopping theorem for submartingales
$ {\mathbb E} \left[M_{\tau \wedge n }-(\tau \wedge n) \varepsilon^2\right]\geq M_0. $ |
Therefore
$ {\mathbb E} [\tau \wedge n]\varepsilon^2\leq {\mathbb E}[M_{\tau \wedge n}]-M_0\leq {\mathbb E}[M_{\tau\wedge n}] \leq R^2. $ |
By taking limit in $ n $, we obtain a bound for the expected exit time,
$ {\mathbb E} [\tau]\leq R^2 \varepsilon^{-2}, $ |
as desired.
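The $ \varepsilon^{-2} $ scaling is easy to observe numerically. The following sketch of ours estimates the mean exit time from the unit ball for one simple randomized choice of admissible moves (all $ \alpha_j = 1 $ and a uniformly random unit direction at every step); the martingale argument above applies to it, and the product $ \varepsilon^2\, \mathbb{E}[\tau] $ indeed stays of order $ R^2 = 1 $.

```python
import numpy as np

def mean_exit_time(eps, R=1.0, dim=2, trials=400, rng=None):
    """Estimate E[tau] for the walk x -> x +/- eps*v, with |v| = 1 chosen uniformly
    at random at every step (all alpha_j = 1), started at the center of B_R."""
    rng = rng or np.random.default_rng(0)
    times = []
    for _ in range(trials):
        x = np.zeros(dim)
        steps = 0
        while x @ x < R * R:
            v = rng.normal(size=dim)
            v /= np.linalg.norm(v)
            sign = 1.0 if rng.random() < 0.5 else -1.0
            x = x + sign * eps * v
            steps += 1
        times.append(steps)
    return float(np.mean(times))

for eps in (0.2, 0.1, 0.05):
    t = mean_exit_time(eps)
    print(f"eps = {eps:5.2f}   E[tau] ~ {t:8.1f}   eps^2 * E[tau] ~ {eps**2 * t:.3f}")
```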
Corollary 14. There exists $ C = C(\Omega, f, g) > 0 $ such that $ \left| {u^\varepsilon (x)} \right| < C $ for every $ \varepsilon > 0 $ and every $ x \in \overline{\Omega} $.
Proof. Since $ \mathbb{E}_{S_{ {\rm{I}}}, S_ {\rm{II}}}^{x_0} [\tau]\leq C \varepsilon^{-2} $ for every pair of strategies, we have
$ \begin{split} |u^\varepsilon(x_0)| &\leq\sup\limits_{S_ {\rm{I}}, S_{ {\rm{II}}}}\, \mathbb{E}_{S_{ {\rm{I}}}, S_ {\rm{II}}}^{x_0}\left[|g(x_\tau)|+\frac12\varepsilon^2\sum\limits_{n = 0}^{\tau-1}[f(x_n)]^{\frac{1}{k}}\right]\\ & \leq \max |g| +\frac{C}{2} \max\limits_{\Omega} f^{\frac{1}{k}}. \end{split} $ |
To prove that $ u^\varepsilon $ satisfies the second hypothesis we will make the following geometric assumption on the domain. Given $ y\in\partial\Omega $ we assume that for every $ \delta > 0 $ there exists $ w\in\mathbb{R}^N $ of norm 1 and $ \theta > 0 $ such that
$ \begin{equation} \{x\in\Omega: \langle w, x-y\rangle < \theta\}\subset B_\delta(y). \end{equation} $ | (3.2) |
This condition is equivalent to $ \Omega $ being strictly convex as proved in Section 2.
Lemma 15. Given $ \eta > 0 $ there are constants $ r_0 $ and $ \varepsilon_0 $ such that for every $ \varepsilon < \varepsilon_0 $ and any $ x, y \in \overline{\Omega} $ with $ |x - y | < r_0 $ it holds
$ |u^\varepsilon (x) - u^\varepsilon (y)| < \eta. $ |
Proof. The case $ x, y \in \Gamma_{\varepsilon\phi(\varepsilon)} $ follows from the uniform continuity of $ g $ in $ \Gamma_{\varepsilon\phi(\varepsilon)} $. Since the rules of the game do not depend on the point, the case $ x, y \in \Omega $ follows from the case $ x\in \Omega $ and $ y \in \Gamma_{\varepsilon\phi(\varepsilon)} $. The argument is as follows. Suppose that we want to prove that $ u^\varepsilon (x) - u^\varepsilon (y) < \eta $, that is
$ \sup\limits_{S_ {\rm{II}}}\inf\limits_{S_{ {\rm{I}}}}\, \mathbb{E}_{S_{ {\rm{I}}}, S_ {\rm{II}}}^{x}\left[g(x_\tau)-\frac12\varepsilon^2\sum\limits_{n = 0}^{\tau-1}[f(x_n)]^{\frac{1}{k}}\right] -\sup\limits_{\tilde S_ {\rm{II}}}\inf\limits_{\tilde S_{ {\rm{I}}}}\, \mathbb{E}_{\tilde S_{ {\rm{I}}}, \tilde S_ {\rm{II}}}^{y}\left[g(y_\tau)-\frac12\varepsilon^2\sum\limits_{n = 0}^{\tau-1}[f(y_n)]^{\frac{1}{k}}\right] < \eta. $ |
Then, it is enough to show that given $ S_ {\rm{II}}^0 $ and $ \tilde S_{ {\rm{I}}}^0 $ (strategies for Player Ⅱ in the game starting at $ x $ and for Player Ⅰ in the game starting at $ y $, respectively) there exist $ \tilde S_ {\rm{II}}^0 $ and $ S_{ {\rm{I}}}^0 $ such that
$ \mathbb{E}_{S_{ {\rm{I}}}^0, S_ {\rm{II}}^0}^{x}\left[g(x_\tau)-\frac12\varepsilon^2\sum\limits_{n = 0}^{\tau-1}[f(x_n)]^{\frac{1}{k}}\right] -\mathbb{E}_{\tilde S_{ {\rm{I}}}^0, \tilde S_ {\rm{II}}^0}^{y}\left[g(y_\tau)-\frac12\varepsilon^2\sum\limits_{n = 0}^{\tau-1}[f(y_n)]^{\frac{1}{k}}\right] < \eta. $ |
We consider the strategies $ \tilde S_ {\rm{II}}^0 $ and $ S_{ {\rm{I}}}^0 $ that mimic $ S_ {\rm{II}}^0 $ and $ \tilde S_{ {\rm{I}}}^0 $, that is
$ \tilde S_ {\rm{II}}^0 (y, y_1, \dots, y_n) = S_ {\rm{II}}^0(x, y_1-y+x, \dots, y_n-y+x) $ |
and
$ S_ {\rm{I}}^0(x, x_1, \dots, x_n) = \tilde S_ {\rm{I}}^0 (y, x_1-x+y, \dots, x_n-x+y). $ |
Moreover, we couple the random steps; then, when the token in one game lies at $ x_n $, in the other game it lies at $ y_n = x_n-x+y $. We call $ {\mathbb E} $ the common expectation of the coupled processes. We proceed in this way until one of the games ends, at time $ \tilde \tau $, that is, the first time that $ x_n \in \Gamma_{\varepsilon\phi(\varepsilon)} $ or $ y_n \in \Gamma_{\varepsilon\phi(\varepsilon)} $. By Remark 10 it is enough to show that
$ \mathbb{E}\left[u^\varepsilon(x_{\tilde\tau})-u^\varepsilon(y_{\tilde\tau})-\frac12\varepsilon^2\sum\limits_{n = 0}^{\tilde\tau-1}\big([f(x_n)]^{\frac{1}{k}}-[f(y_n)]^{\frac{1}{k}}\big)\right] < \eta. $ |
At every step we have $ |x_l-y_l| = |x-y| $, and the desired estimate follows from the case $ x\in \Omega $, $ y \in \Gamma_{\varepsilon\phi(\varepsilon)} $ (or $ x, y \in \Gamma_{\varepsilon\phi(\varepsilon)} $), together with the uniform continuity of $ f $ and the bound for the exit time obtained in Lemma 13.
Now, we can concentrate on the case $ x\in \Omega $ and $ y \in \Gamma_{\varepsilon\phi(\varepsilon)} $. Due to the uniform continuity of $ g $ in $ \Gamma_{\varepsilon\phi(\varepsilon)} $, we can assume that $ y\in\partial\Omega $. In fact, if we have the bound valid for points on the boundary, we can obtain a bound for a generic point $ y \in \Gamma_{\varepsilon\phi(\varepsilon)} $ just by considering $ z\in\partial\Omega $ on the line segment between $ x $ and $ y $ and using the triangle inequality.
In this case we have
$ u^\varepsilon (y) = g(y), $ |
and we need to obtain a bound for $ u^\varepsilon (x) $. We observe that, for any possible strategy of the players (for any possible choice of the direction $ v $ at every point), the projection of $ x_n $ in the direction of a fixed vector $ w $ of norm 1,
$ \langle x_n-y, w\rangle $ |
is a martingale. We fix an arbitrary pair of strategies $ S_I $ and $ S_{II} $, and we denote by $ \mathbb{P} = \mathbb{P}^x_{S_I, S_{II}} $ the probability and by $ {\mathbb E} = {\mathbb E}^x_{S_I, S_{II}} $ the expectation corresponding to playing with these strategies. We take $ \delta > 0 $ and consider $ x_\tau $, the position of the token the first time it leaves $ \Omega $ or $ B_\delta(y) $. Hence, we have
$ \mathbb{E} \langle x_\tau-y, w \rangle \leq \langle x-y, w \rangle \leq d(x, y) < r_0. $ |
From the geometric assumption on $ \Omega $, by choosing $ w $ as in Lemma 6, we have that
$ \langle x_n-y, w\rangle \geq - \phi(\varepsilon)\varepsilon $ |
because at every step the token moves at most $ \sqrt{\alpha_i}\varepsilon\leq \phi(\varepsilon)\varepsilon $. Therefore, we obtain
$ \mathbb{P} \left( \langle x_\tau-y, w \rangle > r_0^{1/2} \right) r_0^{1/2} - \left(1-\mathbb{P} \left( \langle x_\tau-y, w \rangle > r_0^{1/2} \right) \right) \phi(\varepsilon)\varepsilon < r_0. $ |
Then, we take $ \varepsilon_0 > 0 $ such that $ \phi(\varepsilon)\varepsilon\leq r_0 $ for every $ \varepsilon < \varepsilon_0 $ and we get
$ \mathbb{P} \left( \langle x_\tau-y, w \rangle > r_0^{1/2} \right) < 2 r_0^{1/2}. $ |
We consider the corresponding $ \theta $ such that (3.2) holds, this implies
$ \{ \operatorname{dist}(x_\tau, y) > \delta\}\subset\{\langle x_\tau-y, w \rangle > \theta\} $ |
and hence, for $ r_0^{1/2} < \theta $, we can conclude that
$ \mathbb{P} ( d ( x_\tau, y) > \delta ) < 2 r_0^{1/2}. $ |
Repeating the argument in Lemma 13 for $ M_n = \|x_n-y\|^2 $ we can show that
$ {\mathbb E}[\tau]\leq \delta^2 \varepsilon^{-2}. $ |
Therefore,
$ \begin{array}{l} |u^\varepsilon (x) - g(y)| \\[10pt] \leq {\mathbb E}\big[|g (x_\tau) - g(y)|\,\big|\,d(x_\tau, y)\leq \delta\big]\,\mathbb{P} (d(x_\tau, y)\leq \delta ) + 2 \max|g| \, \mathbb{P} (d(x_\tau, y) > \delta ) +\delta^2\max\limits_\Omega f^{\frac{1}{k}} \\[10pt] \leq \sup\limits_{x\in B_\delta(y)}|g (x) - g(y)| + 4 r_0^{1/2} \max|g| +\delta^2\max\limits_\Omega f^{\frac{1}{k}} < \eta \end{array} $ |
if $ r_0 $ and $ \delta $ are small enough (here we bounded the accumulated running payoff by $ \frac12\varepsilon^2 {\mathbb E}[\tau] \max_\Omega f^{\frac{1}{k}}\leq \delta^2\max_\Omega f^{\frac{1}{k}} $).
From Corollary 14 and Lemma 15 we have that the hypotheses of the Arzela-Ascoli type lemma, Lemma 12, are satisfied. Hence we have obtained uniform convergence of $ u^\varepsilon $ along a subsequence.
Corollary 16. Let $ u^\varepsilon $ be the values of the game. Then, along a subsequence,
$ \begin{equation*} \label{eq.converge.22} u^\varepsilon \to u, \qquad {{as}} \; \varepsilon \to 0, \end{equation*} $ |
uniformly in $ \overline{\Omega} $.
Now, we prove that the uniform limit of $ u^\varepsilon $ is a viscosity solution to the limit PDE problem.
Theorem 17. The uniform limit of the game values $ u^\varepsilon $, denoted by $ u $, is a viscosity solution to
$ \begin{equation} \left\{ \begin{array}{ll} \prod\limits_{j = 1}^k \lambda_{i_j} (D^2u) = f , \qquad & {{in}} \;\Omega, \\[10pt] u = g , \qquad & {{on}}\; \partial \Omega. \end{array} \right. \end{equation} $ | (3.3) |
Proof. First, we observe that since $ u^\varepsilon = g $ on $ \partial \Omega $ it is immediate, from the uniform convergence, that $ u = g $ on $ \partial \Omega $. Also, notice that Lemma 12 gives that a uniform limit of $ u^\varepsilon $ is a continuous function.
To check that $ u $ is a viscosity supersolution to $ \prod_{j = 1}^k \lambda_{i_j} (D^2u) = f $ in $ \Omega $, let $ \varphi $ be a paraboloid that touches $ u $ from below at $ x\in\Omega $, and with eigenvalues of the Hessian that verify $ \lambda_{i_j} (D^2\varphi (x)) > 0 $. We need to check that
$ \prod\limits_{j = 1}^k \lambda_{i_j} (D^2 \varphi (x)) - f (x) \leq 0. $ |
As $ u^\varepsilon \to u $ uniformly in $ \overline{\Omega} $ we have the existence of a sequence $ x_\varepsilon $ such that $ x_\varepsilon \to x $ as $ \varepsilon \to 0 $ and
$ u^\varepsilon (z) - \varphi (z) \geq u^\varepsilon (x_\varepsilon) - \varphi (x_\varepsilon) - \varepsilon^3 $ |
(notice that $ u^\varepsilon $ is not continuous in general). As $ u^\varepsilon $ is a solution to (1.6),
$ \begin{equation*} u^\varepsilon (x) = \inf\limits_{\alpha_{j}\in I_\varepsilon^k} \frac{1}{k}\sum\limits_{j = 1}^k \inf\limits_{ \text{dim}(S) = i_j} \sup\limits_{v\in S, |v| = 1} \left\{ \frac{1}{2} u^\varepsilon (x + \varepsilon \sqrt{\alpha_{j}} v) + \frac{1}{2} u^\varepsilon (x - \varepsilon \sqrt{\alpha_{j}} v) \right\} -\frac{\varepsilon^2}{2} [f(x)]^{\frac{1}{k}} \end{equation*} $ |
we obtain
$ \begin{array}{l} 0\geq \inf\limits_{\alpha_{j}\in I_\varepsilon^k} \frac{1}{k}\sum\limits_{j = 1}^k \inf\limits_{ \text{dim}(S) = i_j} \sup\limits_{v\in S, |v| = 1} \left\{ \frac{1}{2} \varphi (x_\varepsilon + \varepsilon \sqrt{\alpha_{j}} v) + \frac{1}{2}\varphi (x_\varepsilon - \varepsilon \sqrt{\alpha_{j}} v) - \varphi (x_\varepsilon) \right\} \\[5pt] \qquad\qquad\qquad\qquad -\frac{\varepsilon^2}{2}[f(x_\varepsilon)]^{\frac{1}{k}} - \varepsilon^3 . \end{array} $ |
Now, using the second-order Taylor expansion of $ \varphi $,
$ \varphi(y) = \varphi(x)+\nabla\varphi(x)\cdot(y-x) +\frac12\langle D^2\varphi(x)(y-x), (y-x)\rangle $ |
we get
$ \begin{equation} \varphi(x_\varepsilon +\varepsilon \sqrt{\alpha_{j}} v) = \varphi(x_\varepsilon )+\varepsilon \sqrt{\alpha_{j}} \nabla\varphi(x_\varepsilon )\cdot v +\varepsilon^2 \alpha_{j} \frac12\langle D^2\varphi(x_\varepsilon )v, v\rangle \end{equation} $ | (3.4) |
and
$ \begin{equation} \varphi(x_\varepsilon - \varepsilon \sqrt{\alpha_{j}} v) = \varphi(x_\varepsilon ) - \varepsilon \sqrt{\alpha_{j}} \nabla\varphi(x_\varepsilon )\cdot v +\varepsilon^2 \alpha_{j} \frac12\langle D^2\varphi(x_\varepsilon )v, v\rangle. \end{equation} $ | (3.5) |
Therefore,
$ 0\geq \inf\limits_{\alpha_{j}\in I_\varepsilon^k} \frac{1}{k}\sum\limits_{j = 1}^k \inf\limits_{ \text{dim}(S) = i_j} \sup\limits_{v\in S, |v| = 1} \frac{\varepsilon^2}{2} \alpha_{j} \langle D^2 \varphi (x_\varepsilon)v, v \rangle -\frac12\varepsilon^2 [f(x_\varepsilon)]^{\frac{1}{k}} - \varepsilon^3 . $ |
Dividing by $ \frac{\varepsilon^2}{2} $ we get
$ [f(x_\varepsilon)]^{\frac{1}{k}} \geq \inf\limits_{\alpha_{j}\in I_\varepsilon^k} \frac{1}{k}\sum\limits_{j = 1}^k \alpha_{j} \inf\limits_{ \text{dim}(S) = i_j} \sup\limits_{v\in S, |v| = 1} \langle D^2 \varphi (x_\varepsilon)v, v \rangle - 2\varepsilon . $ |
We have
$ \inf\limits_{ \text{dim}(S) = i_j} \sup\limits_{v\in S, |v| = 1} \langle D^2 \varphi (x_\varepsilon)v, v \rangle = \lambda_{i_j} (D^2 \varphi (x_\varepsilon)) $ |
by the Courant-Fischer min-max principle. Moreover, since $ \varphi $ is a paraboloid, $ D^2 \varphi (x_\varepsilon) = D^2 \varphi (x) $, so $ \lambda_{i_j} (D^2 \varphi (x_\varepsilon)) = \lambda_{i_j} (D^2 \varphi (x)) $. Hence, we conclude that
$ [f(x_\varepsilon)]^{\frac{1}{k}} \geq \inf\limits_{\alpha_{j}\in I_\varepsilon^k} \frac{1}{k}\sum\limits_{j = 1}^k \alpha_{j} \lambda_{i_j} (D^2 \varphi (x)) +o(1) . $ |
Using Lemma 8 and the continuity of $ f $ to pass to the limit as $ \varepsilon \to 0 $ we get
$ [f(x)]^{\frac{1}{k}}\geq \left( \prod\limits_{j = 1}^k \lambda_{i_j} (D^2 \varphi (x)) \right)^{\frac1k}, $ |
that is,
$ \prod\limits_{j = 1}^k \lambda_{i_j} (D^2 \varphi (x)) \leq f (x) . $ |
Now, to check that $ u $ is a viscosity subsolution to $ \prod_{j = 1}^k \lambda_{i_j} (D^2u) = f $ in $ \Omega $, let $ \psi $ be a paraboloid that touches $ u $ from above at $ x\in\Omega $. We want to see that $ \lambda_{i_j} (D^2 \psi (x))\geq 0 $ for every $ j = 1, \dots, k $ and
$ \prod\limits_{j = 1}^k \lambda_{i_j} (D^2 \psi (x)) - f (x) \geq 0. $ |
As $ u^\varepsilon \to u $ uniformly in $ \overline{\Omega} $ we have the existence of a sequence $ x_\varepsilon $ such that $ x_\varepsilon \to x $ as $ \varepsilon \to 0 $ and
$ u^\varepsilon (z) - \psi (z) \leq u^\varepsilon (x_\varepsilon) - \psi (x_\varepsilon) + \varepsilon^3 $ |
(notice that $ u^\varepsilon $ is not continuous in general). Arguing as before we obtain
$ \begin{array}{l} 0\leq \inf\limits_{\alpha_{j}\in I_\varepsilon^k} \frac{1}{k}\sum\limits_{j = 1}^k \inf\limits_{ \text{dim}(S) = i_j} \sup\limits_{v\in S, |v| = 1} \left\{ \frac{1}{2} \psi (x_\varepsilon + \varepsilon \sqrt{\alpha_{j}} v) + \frac{1}{2}\psi (x_\varepsilon - \varepsilon \sqrt{\alpha_{j}} v) - \psi (x_\varepsilon) \right\} \\[14pt] \qquad\qquad -\frac12\varepsilon^2 [f(x_\varepsilon)]^{\frac{1}{k}} - \varepsilon^3 . \end{array} $ |
Using a Taylor expansions we arrive to
$ \begin{split} \frac12\varepsilon^2 [f(x_\varepsilon)]^{\frac{1}{k}} &\leq \inf\limits_{\alpha_{j}\in I_\varepsilon^k} \frac{1}{k}\sum\limits_{j = 1}^k \inf\limits_{ \text{dim}(S) = i_j} \sup\limits_{v\in S, |v| = 1} \frac{\varepsilon^2}{2} \alpha_{j} \langle D^2 \psi (x_\varepsilon)v, v \rangle - \varepsilon^3\\ & = \frac{\varepsilon^2}{2} \inf\limits_{\alpha_{j}\in I_\varepsilon^k} \frac{1}{k}\sum\limits_{j = 1}^k \alpha_{j} \lambda_{i_j}(D^2 \psi (x)) - \varepsilon^3 . \end{split} $ |
Since $ f\geq 0 $, Lemma 9 gives $ \lambda_{i_j}(D^2 \psi (x))\geq 0 $ for every $ j $. Dividing by $ \frac{\varepsilon^2}{2} $ and using Lemma 8 as before to pass to the limit as $ \varepsilon \to 0 $, we get
$ [f(x)]^{\frac{1}{k}}\leq \left( \prod\limits_{j = 1}^k \lambda_{i_j} (D^2 \psi (x)) \right)^{\frac1k}, $ |
that is,
$ \prod\limits_{j = 1}^k \lambda_{i_j} (D^2 \psi (x)) \geq f (x) . $ |
This concludes the proof.
Finally, since problem (1.1) has a unique solution, see Remark 5, Theorem 2 follows.
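As a quick symbolic check of the Taylor-expansion step used in both halves of the argument (a minimal sketch in dimension two; the generic paraboloid below is an illustrative choice, not taken from the paper), one can verify that for a paraboloid the symmetric second difference appearing in (3.4)–(3.5) equals the Hessian quadratic form exactly:

```python
import sympy as sp

x1, x2, v1, v2, h = sp.symbols('x1 x2 v1 v2 h')
a, b, c, d, e = sp.symbols('a b c d e')
phi = a*x1**2 + b*x1*x2 + c*x2**2 + d*x1 + e*x2          # a generic paraboloid

def at(y1, y2):
    return phi.subs({x1: y1, x2: y2})

# (1/2) phi(x + h v) + (1/2) phi(x - h v) - phi(x)
lhs = (sp.Rational(1, 2)*at(x1 + h*v1, x2 + h*v2)
       + sp.Rational(1, 2)*at(x1 - h*v1, x2 - h*v2) - phi)
# (h^2/2) <D^2 phi v, v>
v = sp.Matrix([v1, v2])
rhs = h**2 / 2 * (v.T * sp.hessian(phi, (x1, x2)) * v)[0]

print(sp.simplify(lhs - rhs))                            # prints 0
```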
In this section we describe the one-player/control problem presented in the introduction, which we use to approximate solutions to the Monge-Ampère equation. As before, let $ \Omega \subset\mathbb{R}^N $ be a bounded open set and fix $ \varepsilon > 0 $. Also take a positive function $ \phi(\varepsilon) $ such that
$ \lim\limits_{\varepsilon\to 0} \phi(\varepsilon) = \infty \qquad {\rm{and}} \qquad \lim\limits_{\varepsilon\to0} \varepsilon\, \phi(\varepsilon) = 0. $ |
A token is placed at $ x_0\in\Omega $ and Player Ⅰ (the controller) chooses coefficients $ (\alpha_{i})_{i = 1, \dots, N}\in I_\varepsilon^N $ where
$ I_\varepsilon^N = \Big\{(\alpha_{i})_{i = 1, \dots, N}\in\mathbb{R}^N: \prod\limits_{i = 1}^N \alpha_{i} = 1 \quad \text{ and }\quad 0 < \alpha_{i} < \phi^2(\varepsilon) \Big\}. $ |
The player/controller also chooses an orthonormal basis $ V = (v_1, ..., v_N) $ of $ \mathbb{R}^N $. Then, the token moves to one of the $ 2N $ points $ x_0 \pm \varepsilon \sqrt{\alpha_{i}} v_i $, each chosen with probability $ \frac{1}{2N} $. After the first round, the game continues from the new position $ x_1 $ according to the same rules.
As before, the game ends when the token leaves $ \Omega $. We denote by $ x_\tau $ the first point in the sequence of game states that lies outside $ \Omega $, so that $ \tau $ is a stopping time for this game. The payoff is determined by two given functions: $ g:\mathbb{R}^N\setminus \Omega \to \mathbb{R} $, the final payoff function, and $ f:\Omega \to \mathbb{R} $, the running payoff function. We require $ g $ to be continuous and $ f $ to be uniformly continuous, both of them bounded. When the game ends, the total payoff is given by
$ g (x_\tau) -\frac12 \varepsilon^2 \sum\limits_{l = 0}^{\tau -1} [f (x_l)]^{\frac{1}{N}}. $ |
When the player/controller fixes a strategy $ S_I $ the expected payoff is given by
$ \begin{equation} \mathbb{E}_{S_{ {\rm{I}}}}^{x_0}\left[g (x_\tau) -\frac12 \varepsilon^2 \sum\limits_{l = 0}^{\tau -1} [f (x_l)]^{\frac{1}{N}}\right]. \end{equation} $ | (4.1) |
The player/controller aims to minimize the expected payoff, hence the value of the game is defined as
$ u_ {\rm{I}}^\varepsilon (x_0) = \inf\limits_{S_ {\rm{I}}}\, \mathbb{E}_{S_{ {\rm{I}}}}^{x_0}\left[g (x_\tau) -\frac12 \varepsilon^2 \sum\limits_{l = 0}^{\tau -1} [f (x_l)]^{\frac{1}{N}}\right]. $ |
Let us observe that the game ends almost surely, and therefore the expectation (4.1) is well defined. We can proceed as before to prove that the value of the game coincides with the solution to
$ \begin{equation} \left\{ \begin{array}{l} u^\varepsilon (x) = \inf\limits_{ V\in \mathbb{O}} \ \inf\limits_{\alpha_{i}\in I_\varepsilon^N} \left\{ \frac{1}{N} \sum\limits_{i = 1}^N \frac12 u^\varepsilon (x + \varepsilon \sqrt{\alpha_i} v_i) + \frac12 u^\varepsilon (x - \varepsilon \sqrt{\alpha_i} v_i) \right\} \\[10pt] \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \quad -\frac12\varepsilon^2 [f(x)]^{\frac{1}{N}} \qquad x \in \Omega, \\[10pt] u^\varepsilon (x) = g(x) \qquad x \not\in \Omega. \end{array} \right. \end{equation} $ | (4.2) |
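To make the game concrete, here is a small Monte Carlo sketch (purely illustrative, not part of the proofs). We take $ \Omega $ the unit ball in $ \mathbb{R}^2 $, $ f\equiv 1 $ and $ g(x) = |x|^2/2 $, so that $ u(x) = |x|^2/2 $ solves $ \det(D^2u) = f $; one can check directly from (4.2) that this $ u $ also solves the DPP and that the isotropic choice $ \alpha_i = 1 $ (with any orthonormal basis) realizes the infimum, so a fixed strategy already attains the game value.

```python
import numpy as np

rng = np.random.default_rng(1)
N, eps = 2, 0.1
x0 = np.array([0.3, 0.1])

def g(x):
    return 0.5 * np.dot(x, x)              # final payoff; also the exact solution

def play_once(x0):
    x, payoff = x0.copy(), 0.0
    while np.dot(x, x) < 1.0:              # the game runs while the token is in Omega
        payoff -= 0.5 * eps**2 * 1.0       # running payoff, here [f(x)]^{1/N} = 1
        i = rng.integers(N)                # a coordinate direction, probability 1/N
        s = rng.choice([-1.0, 1.0])        # each sign with probability 1/2
        x[i] += s * eps                    # move to x +/- eps*sqrt(alpha_i)*v_i, alpha_i = 1
    return payoff + g(x)                   # add the final payoff g(x_tau)

values = [play_once(x0) for _ in range(10000)]
print(np.mean(values), g(x0))              # the two numbers agree up to Monte Carlo error
```

For general data the controller would, of course, adapt $ (\alpha_i) $ and $ V $ to the current position; the sketch only illustrates the mechanics of the game and the role of the running payoff.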
We first observe that solutions to (4.2) exist, since we can apply the main result of [17]. Now, given $ \eta > 0 $, we consider the strategy $ S_ {\rm{I}}^* $ for the player that at every game position $ x_k $ almost realizes the infimum, that is, the player chooses coefficients $ \hat{\alpha_i} $ and an orthonormal basis $ (\hat{v}_i) $ such that
$ \begin{array}{l} \inf\limits_{ V\in \mathbb{O}} \ \inf\limits_{\alpha_{i}\in I_\varepsilon^N} \left\{ \frac{1}{N} \sum\limits_{i = 1}^N \frac12 u^\varepsilon (x_k + \varepsilon \sqrt{\alpha_i} v_i) + \frac12 u^\varepsilon (x_k - \varepsilon \sqrt{\alpha_i} v_i) \right\} \\[10pt] \geq \frac{1}{N} \sum\limits_{i = 1}^N \frac12 u^\varepsilon (x_k + \varepsilon \sqrt{\hat{\alpha_i}} \hat{v}_i) + \frac12 u^\varepsilon (x_k - \varepsilon \sqrt{\hat{\alpha_i}} \hat{v}_i) - \frac{\eta}{2^{k+1}}. \end{array} $ |
Playing with this strategy we have
$ \begin{split} &\mathbb{E}_{S^*_ {\rm{I}}}^{x_0}[u^\varepsilon(x_{k+1})- \frac12 \varepsilon^2\sum\limits_{n = 0}^{k}[f(x_n)]^{\frac{1}{N}}+ \frac{\eta}{2^{k+1}}\,|\, x_0, \ldots, x_k] \\ &\leq \inf\limits_{ V\in \mathbb{O}} \ \inf\limits_{\alpha_{i}\in I_\varepsilon^N} \left\{ \frac{1}{N} \sum\limits_{i = 1}^N \frac12 u^\varepsilon (x_k + \varepsilon \sqrt{\alpha_i} v_i) + \frac12 u^\varepsilon (x_k - \varepsilon \sqrt{\alpha_i} v_i) \right\} \\ &\qquad\qquad + \frac{\eta}{2^{k+1}} -\frac12\varepsilon^2\sum\limits_{n = 0}^{k}[f(x_n)]^{\frac{1}{N}} + \frac{\eta}{2^{k+1}} \\ & = u^\varepsilon(x_k)+\frac12\varepsilon^2[f(x_k)]^{\frac{1}{N}}-\frac12\varepsilon^2\sum\limits_{n = 0}^{k}[f(x_n)]^{\frac{1}{N}} + \frac{\eta}{2^{k}} \\ & = u^\varepsilon(x_k)-\frac12\varepsilon^2\sum\limits_{n = 0}^{k-1}[f(x_n)]^{\frac{1}{N}} + \frac{\eta}{2^{k}} . \end{split} $ |
Then,
$ M_k = u^\varepsilon(x_k)-\frac12 \varepsilon^2\sum\limits_{n = 0}^{k-1}[f(x_n)]^{\frac{1}{N}} + \frac{\eta}{2^{k}} $ |
is a supermartingale.
From this fact, using the optional stopping theorem and arguing as before, we get
$ \begin{equation} \begin{split} u^\varepsilon_ {\rm{I}}(x_0) & = \inf\limits_{S_{ {\rm{I}}}}\, \mathbb{E}_{S_{ {\rm{I}}}}^{x_0}\left[g(x_\tau)-\frac12\varepsilon^2\sum\limits_{n = 0}^{\tau-1}[f(x_n)]^{\frac{1}{N}}\right]\\ &\leq \mathbb{E}_{S^*_{ {\rm{I}}}}^{x_0}\left[g(x_\tau)-\frac12\varepsilon^2\sum\limits_{n = 0}^{\tau-1}[f(x_n)]^{\frac{1}{N}}\right]\\ & \leq \mathbb{E}_{S^*_{ {\rm{I}}}}^{x_0}\left[M_\tau\right] \leq M_0 = u^\varepsilon(x_0)+\eta. \end{split} \end{equation} $ | (4.3) |
Since $ \eta $ is arbitrary we obtain that
$ u_ {\rm{I}}^\varepsilon (x) \leq u^\varepsilon (x). $ |
To obtain the reverse inequality, we fix an arbitrary strategy $ S_I $ for the player and we observe that
$ M_k = u^\varepsilon(x_k)-\frac12 \varepsilon^2\sum\limits_{n = 0}^{k-1}[f(x_n)]^{\frac{1}{N}} $ |
is a submartingale. Indeed,
$ \begin{split} &\mathbb{E}_{S_ {\rm{I}}}^{x_0}[u^\varepsilon(x_{k+1})- \frac12 \varepsilon^2\sum\limits_{n = 0}^{k}[f(x_n)]^{\frac{1}{N}}\,|\, x_0, \ldots, x_k] \\ &\geq \inf\limits_{ V\in \mathbb{O}} \ \inf\limits_{\alpha_{i}\in I_\varepsilon^N} \left\{ \frac{1}{N} \sum\limits_{i = 1}^N \frac12 u^\varepsilon (x_k + \varepsilon \sqrt{\alpha_i} v_i) + \frac12 u^\varepsilon (x_k - \varepsilon \sqrt{\alpha_i} v_i) \right\} -\frac12\varepsilon^2\sum\limits_{n = 0}^{k}[f(x_n)]^{\frac{1}{N}} \\ & = u^\varepsilon(x_k)+\frac12\varepsilon^2[f(x_k)]^{\frac{1}{N}}-\frac12\varepsilon^2\sum\limits_{n = 0}^{k}[f(x_n)]^{\frac{1}{N}} \\ & = u^\varepsilon(x_k)-\frac12\varepsilon^2\sum\limits_{n = 0}^{k-1}[f(x_n)]^{\frac{1}{N}}. \end{split} $ |
By the optional stopping theorem this yields $ \mathbb{E}_{S_{ {\rm{I}}}}^{x_0}\big[g(x_\tau)-\frac12\varepsilon^2\sum_{n = 0}^{\tau-1}[f(x_n)]^{\frac{1}{N}}\big] = \mathbb{E}_{S_{ {\rm{I}}}}^{x_0}[M_\tau]\geq M_0 = u^\varepsilon(x_0) $ for every strategy $ S_ {\rm{I}} $. Taking the infimum over all the strategies $ S_ {\rm{I}} $ we get
$ u_ {\rm{I}}^\varepsilon (x) \geq u^\varepsilon (x). $ |
We have thus proved that any solution to the DPP coincides with the game value; hence, the game value is characterized as the unique solution to the DPP.
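As a toy numerical illustration of the DPP (again illustrative only, not from the paper), consider the one-dimensional case: for $ N = 1 $ the constraint $ \prod_i\alpha_i = 1 $ forces $ \alpha = 1 $, and (4.2) reduces to $ u^\varepsilon(x) = \tfrac12 u^\varepsilon(x+\varepsilon)+\tfrac12 u^\varepsilon(x-\varepsilon) - \tfrac12\varepsilon^2 f(x) $, which can be solved by fixed-point iteration. With $ \Omega = (-1,1) $, $ f\equiv 2 $ and $ g(x) = x^2 $ the exact solution is $ u(x) = x^2 $:

```python
import numpy as np

eps = 0.05
x = np.arange(-1.0 - eps, 1.0 + 2 * eps, eps)    # grid of spacing eps, one layer outside Omega
inside = (x > -1.0) & (x < 1.0)
f, g = 2.0, x**2

u = np.where(inside, 0.0, g)                     # start from the boundary data
for _ in range(20000):                           # iterate the DPP operator until it settles
    avg = 0.5 * (np.roll(u, -1) + np.roll(u, 1))
    u = np.where(inside, avg - 0.5 * eps**2 * f, g)

print(np.max(np.abs(u[inside] - x[inside]**2)))  # distance to the exact solution: essentially zero
```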
Now our aim is to pass to the limit in the values of the game and obtain that, along a subsequence,
$ u^\varepsilon \to u, \qquad \mbox{as } \varepsilon \to 0 $ |
uniformly in $ \overline{\Omega} $.
To obtain a convergent subsequence $ u^\varepsilon \to u $ we will again use the Arzelà-Ascoli type lemma, Lemma 12. Our task now is to show that the family $ \{u^\varepsilon\} $ satisfies its hypotheses.
First, we observe that Lemma 13 still holds here. Hence, there exists a constant $ C = C(\Omega) > 0 $ such that
$ \mathbb{E}_{S_{ {\rm{I}}}}^{x_0} [\tau]\leq C \varepsilon^{-2} $ |
for every $ \varepsilon > 0 $, every strategy $ S_ {\rm{I}} $ and every $ x_0 \in \Omega $. As an immediate consequence, there exists $ C = C(\Omega, f, g) > 0 $ such that
$ \left| {u^\varepsilon (x)} \right| < C $ |
for every $ \varepsilon > 0 $ and every $ x \in \overline{\Omega} $. In fact, using that $ \mathbb{E}_{S_{ {\rm{I}}}}^{x_0} [\tau]\leq C \varepsilon^{-2} $, we have
$ \begin{split} |u^\varepsilon(x_0)| &\leq \sup\limits_{S_{ {\rm{I}}}}\, \mathbb{E}_{S_{ {\rm{I}}}}^{x_0}\left[|g(x_\tau)|+\frac12\varepsilon^2\sum\limits_{n = 0}^{\tau-1}[f(x_n)]^{\frac{1}{N}}\right]\\ & \leq \max |g|+C \max f^{\frac{1}{N}}. \end{split} $ |
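The bound $ \mathbb{E}[\tau]\leq C\varepsilon^{-2} $ is also easy to observe numerically. The following minimal sketch (illustrative only; the domain, starting point and fixed isotropic strategy are arbitrary choices) estimates $ \varepsilon^2\,\mathbb{E}[\tau] $ in the unit ball of $ \mathbb{R}^2 $ and shows that it stays bounded as $ \varepsilon $ decreases:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2
x0 = np.array([0.3, 0.1])

def exit_time(eps):
    x, steps = x0.copy(), 0
    while np.dot(x, x) < 1.0:                  # Omega = unit ball
        i = rng.integers(N)
        x[i] += rng.choice([-1.0, 1.0]) * eps  # isotropic strategy: alpha_i = 1
        steps += 1
    return steps

for eps in (0.2, 0.1, 0.05):
    taus = [exit_time(eps) for _ in range(2000)]
    print(eps, eps**2 * np.mean(taus))         # stays close to 1 - |x0|^2 for every eps
```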
Now we observe that $ u^\varepsilon $ satisfies the second hypothesis of the Arzelà-Ascoli type lemma. We want to see that, given $ \eta > 0 $, there are constants $ r_0 $ and $ \varepsilon_0 $ such that, for every $ \varepsilon < \varepsilon_0 $ and any $ x, y \in \overline{\Omega} $ with $ |x - y | < r_0 $, it holds that
$ |u^\varepsilon (x) - u^\varepsilon (y)| < \eta. $ |
The proof of this estimate is analogous to the proof of Lemma 15. In fact, the case $ x, y \in \Gamma_{\varepsilon\phi(\varepsilon)} $ follows from the uniform continuity of $ g $ in $ \Gamma_{\varepsilon\phi(\varepsilon)} $. As before, since the rules of the game do not depend on the point, the case $ x, y \in \Omega $ follows from the case $ x\in \Omega $ and $ y \in \Gamma_{\varepsilon\phi(\varepsilon)} $; see the proof of Lemma 15. For the case $ x\in \Omega $ and $ y \in \Gamma_{\varepsilon\phi(\varepsilon)} $ we can argue as before, considering the projection of $ x_k $ in the direction of a fixed unit vector $ w $, chosen as in Lemma 6,
$ \langle x_k-y, w\rangle. $ |
This projection is a martingale. Then, with the same computations used before (see the proof of Lemma 15) we obtain
$ |u^\varepsilon (x) - g(y)| < \eta $ |
if $ |x-y| $ is small enough.
From these computations we have obtained uniform convergence of $ u^\varepsilon $ along a subsequence,
$ u^\varepsilon \to u, \qquad \mbox{ as } \varepsilon \to 0, $ |
uniformly in $ \overline{\Omega} $. Finally, we observe that the uniform limit of $ u^\varepsilon $ is a viscosity solution to the limit PDE problem. This follows from the same computations as in the proof of Theorem 2. We observe that Lemmas 7–9 include the case where each eigenvalue is selected exactly once, that is, the Monge-Ampère equation. This concludes the proof of Theorem 3.
One can also study a game for Monge-Ampère in which the possible movements are not discrete.
In fact, for a small $ \varepsilon > 0 $ and a fixed matrix $ A \in S^N $, consider the following random walk in a bounded domain $ \Omega $: from $ x $, the next position is $ x+Ay $, with $ y $ chosen with uniform distribution in $ B_{\varepsilon}(0) $. If we fix a final payoff function $ g $ in $ \mathbb{R}^N \setminus \Omega $ and we set
$ v^\varepsilon (x) = \mathbb{E}^x (g(x_\tau)) $ |
with $ \tau $ the first time the random walk leaves the domain ($ \tau $ is a stopping time for this process), then it follows that $ v^\varepsilon $ verifies
$ v^\varepsilon (x) = {\rlap{-} \smallint }_{B_{\varepsilon}(0)} v^\varepsilon (x+Ay) \, dy $ |
for $ x\in \Omega $ and $ v^\varepsilon(x) = g(x) $ for $ x \in \mathbb{R}^N \setminus \Omega $. As $ \varepsilon \to 0 $, these value functions converge to a solution to
$ \left\{ \begin{array}{ll} \text{trace}(A^tD^2u A) = 0 \qquad & \mbox{ in } \Omega, \\[6pt] u = g \qquad & \mbox{ on }\partial \Omega. \end{array} \right. $ |
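A quick Monte Carlo sanity check of this convergence (illustrative only; the matrix $ A $, the domain and the quadratic data below are arbitrary choices, not taken from the paper): for $ A = \mathrm{diag}(2, 1/2) $ and $ g(x) = \tfrac12(x_1^2 - 16x_2^2) $ we have $ \text{trace}(A^tD^2g\,A) = 0 $, so $ g(x_k) $ is a martingale along the walk and $ v^\varepsilon(x_0) = \mathbb{E}^{x_0}[g(x_\tau)] $ should reproduce $ g(x_0) $:

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.diag([2.0, 0.5])                       # det A = 1
eps = 0.1
x0 = np.array([0.2, 0.1])

def g(x):
    return 0.5 * (x[0]**2 - 16.0 * x[1]**2)   # trace(A^t D^2 g A) = 4 - 16/4 = 0

def uniform_ball():
    while True:                               # rejection sampling of y uniform in B_eps(0)
        y = rng.uniform(-eps, eps, size=2)
        if np.dot(y, y) < eps**2:
            return y

def run():
    x = x0.copy()
    while np.dot(x, x) < 1.0:                 # Omega = unit ball
        x = x + A @ uniform_ball()
    return g(x)                               # final payoff g(x_tau)

print(np.mean([run() for _ in range(20000)]), g(x0))   # close, up to Monte Carlo error
```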
Now, the game/control problem for Monge-Ampère runs as follows: at each turn the player/controller chooses a matrix $ A $ in the set
$ \mathcal{A} = \Big\{ A \in S^{N \times N} : \det A = 1 \mbox{ and } 0 < A\leq \phi(\varepsilon)I \Big\}. $ |
Then the new position of the game is $ x+Ay $, with $ y \in B_\varepsilon (0) $ chosen with uniform probability. We add a running payoff $ - \frac{N}{2(N+2)}\, \left(f(x)\right)^{1/N} \varepsilon^2 $ and a final payoff $ g(x) $. In this case the DPP for the value of the game reads as
$ \begin{equation} u^\varepsilon (x) = \mathop{\mathop{\inf}_{\det A = 1}}_ {A\leq \phi(\varepsilon)I} {\rlap{-} \smallint }_{B_{\varepsilon}(0)} u^\varepsilon (x+Ay) \, dy - \frac{N}{2(N+2)}\, \left(f(x)\right)^{1/N} \varepsilon^2. \end{equation} $ | (5.1) |
This DPP is related to an asymptotic mean value formula for Monge-Ampère; see [3].
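For the reader's convenience, we sketch informally (assuming $ u $ smooth; this is only a heuristic, not part of the arguments of the paper) where the constant $ \frac{N}{2(N+2)} $ in (5.1) comes from. Using that $ {\rlap{-} \smallint }_{B_{\varepsilon}(0)} y \, dy = 0 $ and $ {\rlap{-} \smallint }_{B_{\varepsilon}(0)} y_i y_j \, dy = \frac{\varepsilon^2}{N+2}\, \delta_{ij} $, a second-order Taylor expansion gives
$ {\rlap{-} \smallint }_{B_{\varepsilon}(0)} u (x+Ay) \, dy = u(x) + \frac{\varepsilon^2}{2(N+2)}\, \text{trace}(A^tD^2u(x) A) + o(\varepsilon^2). $ |
Inserting this expansion into (5.1) and simplifying, we formally arrive at
$ \inf\limits_{\det A = 1} \text{trace}(A^tD^2u(x) A) = N\, [f(x)]^{1/N}, $ |
which, for convex $ u $, is equivalent to $ \det(D^2u)(x) = f(x) $ by the matrix arithmetic-geometric mean inequality $ \inf_{\det A = 1} \text{trace}(A^tMA) = N(\det M)^{1/N} $, valid for symmetric $ M\geq 0 $.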
With ideas similar to the ones used before (assuming that $ \Omega $ is strictly convex) one can show that
$ u^\varepsilon \to u, \qquad \mbox{ as } \varepsilon \to 0, $ |
where the limit $ u $ is characterized as the unique convex viscosity solution to
$ \left\{ \begin{array}{ll} \mbox{det} (D^2u) = f \qquad & \mbox{ in } \Omega, \\ u = g \qquad & \mbox{ on }\partial \Omega, \end{array} \right. $ |
which, for convex functions, can be written equivalently as $ \inf_{\det(A) = 1} \text{trace}(A^tD^2u\, A) = N\, f^{1/N} $ in $ \Omega $.
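The matrix inequality behind this equivalence, $ \inf_{\det A = 1} \text{trace}(A^tMA) = N(\det M)^{1/N} $ for symmetric positive definite $ M $, with minimizer $ A = (\det M)^{1/(2N)}M^{-1/2} $, is also easy to test numerically. A minimal sketch (illustrative only; the matrix $ M $ is a random positive definite choice, and the constraint $ A\leq \phi(\varepsilon)I $ is harmless once $ \phi(\varepsilon) $ is large):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 3
B = rng.standard_normal((N, N))
M = B @ B.T + np.eye(N)                      # a random symmetric positive definite matrix

# closed-form minimizer: A = (det M)^{1/(2N)} M^{-1/2}, which has determinant one
lam, Q = np.linalg.eigh(M)
M_inv_sqrt = Q @ np.diag(lam**-0.5) @ Q.T
A_opt = np.linalg.det(M)**(1 / (2 * N)) * M_inv_sqrt

def rescaled(A):                             # rescale a matrix so that |det A| = 1
    return A / abs(np.linalg.det(A))**(1 / N)

trial = min(np.trace(rescaled(A).T @ M @ rescaled(A))
            for A in rng.standard_normal((5000, N, N)))

print(N * np.linalg.det(M)**(1 / N),         # N (det M)^{1/N}
      np.trace(A_opt.T @ M @ A_opt),         # matches the value above (up to rounding)
      trial)                                 # random |det| = 1 matrices do no better
```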
Remark 18. Observe that for $ u^\varepsilon $ to be a solution to (5.1) it must be measurable, so that the integral on the right-hand side is well defined. Therefore, in the construction of a solution to the DPP by Perron's method (as in [17]) one has to take this measurability issue into account. The set of subsolutions should be restricted to bounded measurable functions, and we have to check that if $ u $ and $ f $ are bounded measurable functions, then $ T(x) $, given by
$ T(x) = \mathop{\mathop{\inf}\limits_{\begin{array}{c}{\det A = 1}\\ {A\leq \phi(\varepsilon)I} \end{array}}} {\rlap{-} \smallint }_{B_{\varepsilon}(0)} u (x+Ay) \, dy - \frac{N}{2(N+2)}\, \left(f(x)\right)^{1/N} \varepsilon^2 $ |
is also measurable. This holds because the only problematic term is the uncountable infimum, and we have
$ \begin{equation} \mathop{\mathop \inf\limits_{\begin{array}{c}{\det A = 1}\\ {A\leq \phi(\varepsilon)I}\end{array}}} {\rlap{-} \smallint }_{B_{\varepsilon}(0)} u (x+Ay) \, dy = \inf\limits_{ \begin{array}{c} A\in \mathbb{Q}^{N\times N} \\[-4pt] \det A = 1\\[-4pt] A\leq \phi(\varepsilon)I \end{array} } {\rlap{-} \smallint }_{B_{\varepsilon}(0)} u (x+Ay) \, dy. \end{equation} $ | (5.2) |
The right-hand side is a countable infimum, and the equality (5.2) follows since the map $ A\mapsto {\rlap{-} \smallint }_{B_{\varepsilon}(0)} u (x+Ay) \, dy $ is continuous, as a consequence of the absolute continuity of the mapping $ E\mapsto \int_E u(y) \, dy $.
P. Blanc partially supported by the Academy of Finland project no. 298641 and PICT-2018-03183 (Argentina). F. Charro partially supported by a Wayne State University University Research Grant, and grants MTM2017-84214-C2-1-P (Spain) and PID2019-110712GB-I100 (Spain), funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe." J. J. Manfredi supported by Simons Collaboration Grants for Mathematicians Award 962828. J. D. Rossi partially supported by CONICET grant PIP GI No 11220150100036CO (Argentina), PICT-2018-03183 (Argentina) and UBACyT grant 20020160100155BA (Argentina).
The authors declare no conflict of interest.
[1] A. Arroyo, P. Blanc, M. Parviainen, Hölder regularity for stochastic processes with bounded and measurable increments, Ann. Inst. H. Poincare Anal. Non Lineaire, in press. https://doi.org/10.4171/aihpc/41
[2] I. Birindelli, G. Galise, H. Ishi, Existence through convexity for the truncated Laplacians, Math. Ann., 379 (2021), 909–950. https://doi.org/10.1007/s00208-019-01953-x
[3] P. Blanc, F. Charro, J. D. Rossi, J. J. Manfredi, A nonlinear mean value property for the Monge-Ampère operator, J. Convex Anal., 28 (2021), 353–386.
[4] P. Blanc, C. Esteve, J. D. Rossi, The evolution problem associated with eigenvalues of the Hessian, J. London Math. Soc., 102 (2020), 1293–1317. https://doi.org/10.1112/jlms.12363
[5] P. Blanc, J. D. Rossi, Games for eigenvalues of the Hessian and concave/convex envelopes, J. Math. Pure. Appl., 127 (2019), 192–215. https://doi.org/10.1016/j.matpur.2018.08.007
[6] P. Blanc, J. D. Rossi, Game theory and partial differential equations, Berlin, Boston: De Gruyter, 2019. https://doi.org/10.1515/9783110621792
[7] L. Caffarelli, L. Nirenberg, J. Spruck, The Dirichlet problem for nonlinear second order elliptic equations, Ⅰ: Monge-Ampère equations, Commun. Pure Appl. Math., 37 (1984), 369–402. https://doi.org/10.1002/cpa.3160370306
[8] M. Cirant, K. R. Payne, On viscosity solutions to the Dirichlet problem for elliptic branches of inhomogeneous fully nonlinear equations, Publ. Mat., 61 (2017), 529–575. https://doi.org/10.5565/PUBLMAT6121708
[9] M. G. Crandall, H. Ishii, P. L. Lions, User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc., 27 (1992), 1–67. https://doi.org/10.1090/S0273-0979-1992-00266-5
[10] F. Del Teso, J. J. Manfredi, M. Parviainen, Convergence of dynamic programming principles for the $p$-Laplacian, Adv. Calc. Var., 15 (2022), 191–212. https://doi.org/10.1515/acv-2019-0043
[11] A. Figalli, The Monge-Ampère equation and its applications, Zürich: European Mathematical Society, 2017. https://doi.org/10.4171/170
[12] D. A. Gomes, J. Saude, Mean field games models – A brief survey, Dyn. Games Appl., 4 (2014), 110–154. https://doi.org/10.1007/s13235-013-0099-2
[13] C. Gutiérrez, The Monge-Ampère equation, Boston, MA: Birkhäuser Boston, Inc., 2001.
[14] F. R. Harvey, H. B. Lawson, Dirichlet duality and the nonlinear Dirichlet problem, Commun. Pure Appl. Math., 62 (2009), 396–443. https://doi.org/10.1002/cpa.20265
[15] M. Lewicka, A course on tug-of-war games with random noise, Cham: Springer, 2020. https://doi.org/10.1007/978-3-030-46209-3
[16] M. Lewicka, J. J. Manfredi, D. Ricciotti, Random walks and random tug of war in the Heisenberg group, Math. Ann., 377 (2020), 797–846. https://doi.org/10.1007/s00208-019-01853-0
[17] Q. Liu, A. Schikorra, General existence of solutions to dynamic programming principle, Commun. Pure Appl. Anal., 14 (2015), 167–184. https://doi.org/10.3934/cpaa.2015.14.167
[18] H. Luiro, M. Parviainen, E. Saksman, Harnack's inequality for p-harmonic functions via stochastic games, Commun. Part. Diff. Eq., 38 (2013), 1985–2003. https://doi.org/10.1080/03605302.2013.814068
[19] J. J. Manfredi, M. Parviainen, J. D. Rossi, An asymptotic mean value characterization for $p$-harmonic functions, Proc. Amer. Math. Soc., 138 (2010), 881–889. https://doi.org/10.1090/S0002-9939-09-10183-1
[20] J. J. Manfredi, M. Parviainen, J. D. Rossi, Dynamic programming principle for tug-of-war games with noise, ESAIM: COCV, 18 (2012), 81–90. https://doi.org/10.1051/cocv/2010046
[21] J. J. Manfredi, M. Parviainen, J. D. Rossi, On the definition and properties of $p$-harmonious functions, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), 11 (2012), 215–241. https://doi.org/10.2422/2036-2145.201005_003
[22] A. M. Oberman, L. Silvestre, The Dirichlet problem for the convex envelope, Trans. Amer. Math. Soc., 363 (2011), 5871–5886. https://doi.org/10.1090/S0002-9947-2011-05240-2
[23] M. Parviainen, E. Ruosteenoja, Local regularity for time-dependent tug-of-war games with varying probabilities, J. Differ. Equations, 261 (2016), 1357–1398. https://doi.org/10.1016/j.jde.2016.04.001
[24] Y. Peres, O. Schramm, S. Sheffield, D. B. Wilson, Tug-of-war and the infinity Laplacian, J. Amer. Math. Soc., 22 (2009), 167–210. https://doi.org/10.1090/S0894-0347-08-00606-1
[25] Y. Peres, S. Sheffield, Tug-of-war with noise: a game theoretic view of the $p$-Laplacian, Duke Math. J., 145 (2008), 91–120. https://doi.org/10.1215/00127094-2008-048
[26] A. V. Pogorelov, Monge-Ampère equations of elliptic type, Noordhoff, 1964.
[27] E. Ruosteenoja, Local regularity results for value functions of tug-of-war with noise and running payoff, Adv. Calc. Var., 9 (2016), 1–17. https://doi.org/10.1515/acv-2014-0021
[28] H. V. Tran, Hamilton-Jacobi equations–theory and applications, American Mathematical Society, 2021. https://doi.org/10.1090/gsm/213