Research article

On the convergence rates of discrete solutions to the Wave Kinetic Equation

  • In this paper, we consider the long-term behavior of some special solutions to the Wave Kinetic Equation. This equation provides a mesoscopic description of wave systems interacting nonlinearly via the cubic NLS equation. Escobedo and Velázquez showed that, starting with initial data given by countably many Dirac masses, solutions remain a linear combination of countably many Dirac masses at all times. Moreover, there is convergence to a single Dirac mass at long times. The first goal of this paper is to give quantitative rates for the speed of said convergence. In order to study the optimality of the bounds we obtain, we introduce and analyze a toy model accounting only for the leading order quadratic interactions.

    Citation: Michele Dolce, Ricardo Grande. On the convergence rates of discrete solutions to the Wave Kinetic Equation[J]. Mathematics in Engineering, 2024, 6(4): 536-558. doi: 10.3934/mine.2024022




    In recent years, there has been an increasing interest in understanding the average behavior of out-of-equilibrium systems of many waves undergoing weakly nonlinear interactions. A fundamental example of such a system is given by the cubic Schrödinger equation.

    The kinetic formalism for such wave systems, known as wave kinetic theory, consists in studying the evolution of the variance of the Fourier coefficients of the system in the kinetic limit (i.e., as its size grows and the strength of the interactions diminishes); see [15] for details. This variance, upon rescaling in time, has been shown to satisfy the Wave Kinetic Equation (WKE):

    $ \begin{equation} \partial_t n (t,\xi) = \mathcal{K} (n(t,\cdot )), \qquad \xi\in{\mathbb R}^3, \end{equation} $ (1.1)

    where

    $ \mathcal{K}(n)(\xi) = \int_{\substack{({\mathbb R}^3)^2\\ \{\xi = \xi_1-\xi_2+\xi_3\}}}\delta_{{\mathbb R}}(|\xi_1|^2 - |\xi_2|^2+|\xi_3|^2-|\xi|^2)\, n_1 n_2 n_3 n \left( \frac{1}{n} - \frac{1}{n_1} + \frac{1}{n_2} - \frac{1}{n_3} \right) \, d\xi_1\, d\xi_3 $

    with $ n_j = n(\xi_j) $.

    Kinetic equations for wave systems first appeared in the work of Peierls [17], Nordheim [16] and in the work of Hasselmann in the context of water waves [11,12]. A rigorous mathematical derivation was only recently achieved, starting with the work of Buckmaster, Germain, Hani and Shatah [1], that of Collot and Germain [2], and culminating with the recent works of Deng and Hani [3,4,5,6,7], where a full derivation is obtained. Other wave systems have also recently been considered, see for instance [10,18].

    Despite the rigorous justification of the WKE, many questions remain unanswered regarding the behavior of solutions to (1.1). The study of the well-posedness and long-term behavior of certain solutions to the WKE (1.1) was initiated by Escobedo and Velázquez in [8], as well as in [9]. In their work, they consider radial initial data in 3D, where one can explicitly integrate the delta function in (1.1) in the angular variables, leading to the isotropic WKE:

    $ \begin{equation} \begin{split} \partial_t g_1& = \int_{D(\omega_1)} \Phi \left[ \left(\frac{g_1}{\sqrt{ \omega_1}}+\frac{g_2}{\sqrt{ \omega_2}}\right)\frac{g_3g_4}{\sqrt{ \omega_3 \omega_4}}-\left(\frac{g_3}{\sqrt{ \omega_3}}+\frac{g_4}{\sqrt{ \omega_4}}\right)\frac{g_1g_2}{\sqrt{ \omega_1 \omega_2}} \right]d \omega_2d \omega_3d \omega_4,\\ g_1 \mid_{t = 0} & = g^{in}( \omega_1), \qquad g_i d \omega_i = g(t,d \omega_i),\\ \Phi& = \min\{ \sqrt{ \omega_1}, \sqrt{ \omega_2}, \sqrt{ \omega_3}, \sqrt{ \omega_4}\},\\ D( \omega_1)& = \{ \omega_3\geq 0, \omega_4\geq 0; \omega_3+ \omega_4\geq \omega_1\}, \qquad \omega_1\geq 0. \end{split} \end{equation} $ (1.2)

    Here $ g(t, \omega) = |\xi|\, n(t, |\xi|^2) $ and $ \omega = |\xi|^2 $, where $ n $ is the solution to (1.1). It is convenient to work with $ g $ so that it can be interpreted as a density of particles in the space $ \{\omega\geq 0\} $ [8]. One can thus define

    $ \text{Mass:} \quad M = \int_{\mathbb{R}_+}g(t,d \omega), \qquad \text{Energy:} \quad E = \int_{\mathbb{R}_+}\omega g(t,d \omega). $

    Both quantities above are conserved as long as they are both initially finite.
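    These quantities correspond, up to the constant angular factor from the radial reduction, to the physical mass and energy of the original density $ n $. Indeed, assuming $ n $ radial and writing $ \omega = r^2 $ with $ r = |\xi| $ (so that $ d\omega = 2r\,dr $ and $ g = \sqrt{\omega}\, n $), a quick change of variables gives

    $ \int_{{\mathbb R}^3} n(t,|\xi|^2)\, d\xi = 4\pi \int_0^{\infty} n(t,r^2)\, r^2\, dr = 2\pi \int_0^{\infty} \sqrt{\omega}\, n(t,\omega)\, d\omega = 2\pi \int_0^{\infty} g(t,\omega)\, d\omega, $

    and, in the same way, $ \int_{{\mathbb R}^3} |\xi|^2\, n\, d\xi = 2\pi \int_0^{\infty} \omega\, g(t,\omega)\, d\omega $, which is why $ M $ and $ E $ above are the natural conserved quantities for $ g $.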

    Escobedo and Velázquez then consider the weak formulation of the Eq (1.2), namely

    $ \begin{equation} \begin{split} \frac{d}{dt} \left( \int_{{\mathbb R}_{+}} \varphi(t, \omega) g(t,d \omega) \right) = & \ \int_{{\mathbb R}_{+}} \partial_t \varphi(t,\omega)\, g(t,d\omega) \\ & + \int_{{\mathbb R}_{+}^3} \Phi \, \frac{g_1 g_2 g_3}{\sqrt{ \omega_1 \omega_2 \omega_3}} \,[\varphi_4 +\varphi_3 - \varphi_2 -\varphi_1]\, d \omega_1 d \omega_2 d \omega_3,\\ \omega_4 = & \ \omega_1+ \omega_2- \omega_3\geq 0, \qquad \varphi_i = \varphi(\omega_i), \end{split} \end{equation} $ (1.3)

    almost everywhere for any test function $ \varphi\in C^2_c ([0, T)\times {\mathbb R}_{+}) $.

    One may show that (1.3) is globally well-posed in a space of Radon measures $ \mathcal{M}_{\rho} $ defined in (1.13) below. Moreover, it is possible to study the long-term behavior of such measure solutions and, among other results, Escobedo and Velázquez prove that [8]:

    ● If the (conserved) mass $ M = \int_{\mathbb{R}_{+}} g^{in}\,d\omega $ is finite, then

    $ \begin{equation} g(t,\cdot)\rightharpoonup \delta_{R^*} \qquad \mbox{as}\ t\rightarrow \infty, \end{equation} $ (1.4)

    with $ R^* = \mathrm{inf} A^* $ where the set $ A^* $ is defined in (2.6)-(2.7).

    ● If $ \mathrm{supp}(g^{in})\subseteq \mathbb{N} $, then

    $ \begin{equation} \mathrm{supp}(g(t,\cdot))\subseteq\mathbb{N} \qquad \mbox{for any}\ t > 0. \end{equation} $ (1.5)

    The goal of this paper is to quantify the speed of the convergence in (1.4) in a special case where we start with energy concentrated on a set of discrete frequencies away from the origin. In this setting, we first observe an instantaneous spreading of energy towards all (discrete) frequencies, before the solution converges to a single Dirac mass concentrated at $ R^* $. These complicated dynamics were first studied qualitatively in [8] in a more general scenario. The purpose of this article is to present some quantitative results in the special case of initial data displaying discrete dynamics.

    We consider the case $ \mathrm{supp}(g^{in})\subseteq \mathbb{N} $. It is therefore useful to introduce a set of test functions $ \varphi_n \in C^\infty_c(\mathbb{R}_+) $ such that $ \mathrm{supp}(\varphi_n)\subset B(n, 1/2) $ and define

    $ \begin{equation} F_n (t): = \int_{{\mathbb R}_{+}} \varphi_n ( \omega ) g(t,d \omega) . \end{equation} $ (1.6)

    Since $ \mathrm{supp}(g(t, \cdot))\subseteq \mathbb{N} $ by (1.5), the functions $ F_n $ fully describe the dynamics of $ g $ solving (1.3). By direct inspection of the weak formulation (1.3), we will derive the (infinite) system of ODEs describing the evolution of $ F_n $. The particular structure of the equations is not relevant for the statement of the main result, but the equations of interest can be found in Lemma 2.4. Our first result, proved in Section 3, is the following:

    Theorem 1.1. Let $ g^{in} = M_1\delta_1+\sum_{j = 2}^{\infty} m_j\delta_j $ be the initial data of (1.2) with $ M_1 > 0 $ and $ (m_j)_{j = 2}^{\infty}\in \ell^{1, r}({\mathbb R}_{+}) $ with $ r > 1 $ (see (1.12)). Define

    $ M_2: = \sum\limits_{j = 2}^{\infty} m_j, \qquad E = \sum\limits_{j = 1}^{\infty} j\, m_j = : M_1+E_2, $

    and, without loss of generality, assume $ M_1+M_2 = 1 $. Then, there exists $ t_0 > 0 $ such that for any $ t > t_0 $ the following inequalities hold true:

    $ \begin{align} \frac{c_2}{t-t_0+3E/b_1(t_0)+C_2}\leq F_2 (t) \leq 1-F_1(t)\leq \frac{c_1(t_0)}{\sqrt{b_1(t_0)(t-t_0)+3E}}, \end{align} $ (1.7)

    where $ c_1(t_0), \, b_1(t_0) $ are given in (3.2) whereas $ c_2, C_2 $ are explicitly computable. Moreover, if $ M_1\geq 3E_2/19 $ then $ t_0 = 0 $.

    Notice that, since the mass is conserved, we have

    $ F_k(t)\leq \sum\limits_{j = 2}^{\infty} F_j(t) = 1-F_1(t) $

    for all $ k\geq 2 $. Therefore, the upper bound in Theorem 1.1 is true for all $ F_k $ with $ k\geq 2 $, whereas we are only able to prove the lower bound for $ F_2 $.

    Thanks to the result above, we know that, if we wait long enough or we start with $ M_1 $ large enough, the convergence towards the Dirac mass at $ \{1\} $ is at least $ \mathcal{O}(t^{-1/2}) $ but cannot be faster than $ \mathcal{O}(t^{-1}) $.

    Remark 1.2. If we set $ t_0 = 0 $ but $ M_1 < 3E_2/19 $ then we need to change the lower bound in (1.7) to $ {\mathcal O} (t^{-\alpha}) $ with a time-rate $ \alpha > 1 $, explicitly given in (3.4). This suggests that the rate of convergence could be faster for small times and slow down as mass concentrates at $ \{1\} $. Note that the convergence is at most polynomial for all times.

    Remark 1.3. If the lower bound for $ 1-F_1 $ in (1.7) were sharp, then one could derive lower and upper bounds for the speed of convergence of all the functions $ F_n $ for $ n $ large enough. In such case, they would decay as $ {\mathcal O}_n (t^{-1}) $. This is the content of Proposition 3.4 in Section 3.

    In order to understand whether there exist solutions exhibiting a decay that saturates the lower bound in (1.7), we propose a toy model where we only keep the terms in (1.3) involving at least one interaction with the leading term $ F_1 $. Moreover, we replace every occurrence of $ F_1 $ by its limit $ 1 $. In Section 4, we show that these reductions give rise to the following quadratic toy model:

    $ \begin{equation} \frac{d}{dt}F_n = 4\, \frac{F_n}{\sqrt{n}} \, \frac{F_{2n-1}}{\sqrt{2n-1}} - 4\, \frac{F_n}{\sqrt{n}}\, \sum\limits_{k = 2}^{n-1} \frac{F_k}{\sqrt{k}}- 2 \left(\frac{F_n}{\sqrt{n}}\right)^2+ 2\sum\limits_{k = n+1}^{\infty} \frac{F_kF_{k+1-n}}{\sqrt{k(k+1-n)}}. \end{equation} $ (1.8)
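    For intuition, the toy model can be explored numerically. The sketch below (our own truncation, step size and initial data, all purely illustrative) integrates (1.8) for the modes $ 2\leq n\leq n_{\max} $ with a simple explicit Euler scheme, dropping all sums beyond $ n_{\max} $:

```python
import numpy as np

def toy_rhs(F, nmax):
    """Right-hand side of the toy model (1.8) for modes n = 2, ..., nmax.

    F[n] stands for F_n; sums over indices beyond nmax are dropped
    (an illustrative truncation, not the one used in the paper).
    """
    dF = np.zeros_like(F)
    for n in range(2, nmax + 1):
        gain = 0.0
        if 2 * n - 1 <= nmax:  # F_{2n-1} is available only below the cutoff
            gain = 4 * F[n] / np.sqrt(n) * F[2 * n - 1] / np.sqrt(2 * n - 1)
        drain = 4 * F[n] / np.sqrt(n) * sum(F[k] / np.sqrt(k) for k in range(2, n))
        self_loss = 2 * (F[n] / np.sqrt(n)) ** 2
        coll = 2 * sum(F[k] * F[k + 1 - n] / np.sqrt(k * (k + 1 - n))
                       for k in range(n + 1, nmax + 1))
        dF[n] = gain - drain - self_loss + coll
    return dF

nmax, dt = 12, 1e-2
F = np.zeros(nmax + 1)
F[2:5] = [0.05, 0.03, 0.02]      # illustrative initial data on modes 2, 3, 4
for _ in range(500):             # integrate up to t = 5 with explicit Euler
    F = F + dt * toy_rhs(F, nmax)
```

    Note that nothing in this truncated model enforces positivity of the $ F_n $.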

    We then look for self-similar solutions of the form

    $ \begin{equation} F_n(t) = \beta_n\, \frac{\sqrt{n}}{t}, \qquad \text{for } \beta_n\in [0,\infty),\ 2\leq n\in {\mathbb N}. \end{equation} $ (1.9)

    Analogous ansätze in the continuous setting are common in the literature; see for instance the related works of Kierkels and Velázquez [13,14] in the WKE context.
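    To see what (1.9) requires, note that substituting it into (1.8) makes every term scale like $ t^{-2} $: the left-hand side becomes $ -\beta_n\sqrt{n}/t^2 $, while each quadratic term on the right contributes $ \beta_k\beta_m/t^2 $, since the factors of $ \sqrt{k} $ cancel. Multiplying by $ t^2 $ leaves the time-independent algebraic system

    $ -\sqrt{n}\, \beta_n = 4\, \beta_n\, \beta_{2n-1} - 4\, \beta_n \sum\limits_{k = 2}^{n-1}\beta_k - 2\,\beta_n^2 + 2\sum\limits_{k = n+1}^{\infty} \beta_k\, \beta_{k+1-n}, \qquad n\geq 2, $

    whose truncation over $ 2\leq n\leq N $ gives (1.10) below.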

    In such a toy model, positivity of solutions can no longer be expected to hold, unlike in (1.3). In fact, the existence of solutions (1.9) with strictly positive $ \beta_n $ remains an interesting open question, which we address through a further reduction. Since positivity is a key physical feature of the solutions to the full WKE (1.2), our next result shows the existence of strictly positive solutions for a truncated version of (1.8). In Section 4, we show that simple truncations of (1.8) do not admit positive solutions. We thus consider:

    $ \begin{equation} -\sqrt{n}\, \beta_n = 4\, \beta_n \, \beta_{2n-1} - 4\, \beta_n \, \sum\limits_{k = 2}^{n-1}\beta_k - 2 \beta_n^2 + 2\sum\limits_{k = n+1}^{3N-2} \beta_k \beta_{k+1-n}, \qquad \mbox{for}\ n = 2,\ldots, N, \end{equation} $ (1.10)

    where $ N\in{\mathbb N} $, $ N\geq 2 $, may be as large as desired. We then have the following:

    Theorem 1.4. Fix any $ N\in{\mathbb N} $ with $ N\geq 4 $. Fix $ \lambda_1, \lambda_2 > 0 $ such that $ \lambda_1\lambda_2 > N/4 $. Then there exists some $ \delta_0 (N, \lambda_1, \lambda_2) > 0 $ such that for all $ \delta < \delta_0 $, there exists a solution to (1.10) (which solves (1.8) for $ n\leq N $ with the ansatz (1.9)) such that

    $ \begin{equation} \begin{split} \beta_2 & = \sqrt{\lambda_1\lambda_2+ 1/8}+ \sqrt{2}/4 + {\mathcal O} ( \sqrt{N} \,\delta\, \max\{ \lambda_1,\lambda_2\}),\\ \beta_{2N} & = \lambda_1 + {\mathcal O} (\delta),\\ \beta_{2N+1} & = \lambda_2 + {\mathcal O} (\delta),\\ 0 < \beta_j & = {\mathcal O} ( \sqrt{N} \,\delta\, \max\{ \lambda_1,\lambda_2\}), \qquad j = 3,\ldots,N,\\ 0 < \beta_j & = {\mathcal O} ( \delta), \qquad j = 2N+2,\ldots,3N-2,\\ \beta_{j} & = 0, \qquad \mathit{\mbox{otherwise}}. \end{split} \end{equation} $ (1.11)

    The result in the theorem above is an example of a solution whose leading order terms are $ \beta_2, \beta_{2N}, \beta_{2N+1} $. This suggests that there could be self-similar solutions to (1.8) of the form (1.9). In fact, our long-term goal is not only to construct such solutions on the toy model (1.8), but to carry out a nonlinear perturbative argument for the full (1.3) around this "approximate" solution found in Theorem 1.4.
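    A quick way to probe candidate solutions of (1.10) is to evaluate its residual numerically. The sketch below (the helper name and test vector are ours, not from the paper) checks, for instance, that concentrating everything on $ \beta_2 = \sqrt{2}/2 $ solves (1.10) exactly: the $ n = 2 $ equation reduces to $ -\sqrt{2}\,\beta_2 = -2\beta_2^2 $, and every other equation is trivially $ 0 = 0 $.

```python
import numpy as np

def residual(beta, N):
    """Residual of system (1.10): RHS minus LHS for n = 2, ..., N.

    beta[j] stands for beta_j (entries 0 and 1 are unused); the array
    must have length at least 3*N - 1 so that beta_{3N-2} exists.
    """
    res = np.zeros(N - 1)
    for n in range(2, N + 1):
        rhs = (4 * beta[n] * beta[2 * n - 1]
               - 4 * beta[n] * sum(beta[k] for k in range(2, n))
               - 2 * beta[n] ** 2
               + 2 * sum(beta[k] * beta[k + 1 - n] for k in range(n + 1, 3 * N - 1)))
        res[n - 2] = rhs - (-np.sqrt(n) * beta[n])
    return res

N = 4
beta = np.zeros(3 * N - 1)
beta[2] = np.sqrt(2) / 2       # beta_2 = sqrt(2)/2, all other entries zero
print(np.allclose(residual(beta, N), 0.0))  # -> True
```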

    The article is organized as follows: In Section 2, we discuss some background results and present the discrete WKE associated to (1.3) with initial data given by a linear combination of Dirac masses. In Section 3, we prove Theorem 1.1. In Section 4, we introduce our toy model, discuss possible truncations and prove Theorem 1.4. Finally, in Section 5, we give the derivation of the discrete WKE from (1.3).

    The set of natural numbers $ {\mathbb N} $ is taken without $ 0 $, namely $ {\mathbb N} = \{1, 2, \dots\} $, and we define $ {\mathbb R}_{+} = [0, \infty) $. When the index set in a sum consists only of non-positive indices, we consider that sum to be zero, e.g., $ \sum_{k = 1}^{n-1} a_k = 0 $ whenever $ n\leq 1 $.
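    This empty-sum convention matches, e.g., Python's behavior over an empty index range:

```python
# sum_{k=1}^{n-1} a_k with n = 1: the index range is empty, so the sum is 0
a = {k: 1.0 / k for k in range(1, 10)}   # any sequence of coefficients
n = 1
s = sum(a[k] for k in range(1, n))
print(s)  # -> 0
```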

    We let $ \ell^{1, r}({\mathbb R}_{+}) $, $ r\geq 0 $, be the Banach space of sequences $ (m_j)_{j = 1}^{\infty} $, $ m_j\geq 0 $, with the norm:

    $ \begin{equation} \sum\limits_{j = 1}^{\infty} j^r\,m_j < \infty. \end{equation} $ (1.12)

    Finally, we consider the space $ \mathcal{M}_{\rho} $ of non-negative Radon measures $ \mu $ such that

    $ \begin{equation} \left\lVert {\mu} \right\rVert_{\rho} = \sup\limits_{R > 1} \frac{1}{(1+R)^{\rho}}\, \frac{1}{R}\,\int_{R/2}^R \mu(d\omega) + \int_0^1 \mu(d\omega) < \infty. \end{equation} $ (1.13)

    In this note we will consider initial data $ \mu_0 $ in $ \mathcal{M}_{\rho} $ with some $ \rho < -2 $, which guarantees a finite and conserved energy. This is equivalent to requiring $ (m_j)_{j\in{\mathbb N}}\in\ell^{1, r}({\mathbb R}_{+}) $ with $ r > 1 $, as stated in Theorem 1.1.

    We will often omit the differential in some integrals when it is clear from the context, e.g.,

    $ \int_0^{\infty} \mu(t,d \omega) = \int_0^{\infty} \mu(t). $

    In this section, we summarize a few results in [8] which will be useful in the rest of the paper. Furthermore, we present the equations satisfied by $ F_n $ defined in (1.6). A full proof of the derivation, which is technically simple yet computationally tedious, is given in Section 5.

    First of all, by [8, Proposition 2.28] we know that if $ g^{in}\in \mathcal{M}_{\rho} $ with $ \rho < -2 $ then the weak solution to (1.2) has conserved and finite energy and mass for all times. As mentioned after (1.13), we know that our initial data in Theorem 1.1 is in $ \mathcal{M}_\rho $ with $ \rho < -2 $ and therefore we always have finite and conserved mass and energy.

    Then we need two key results that will allow us to prove bounds on $ F_1(t) $, and they are the foundation of our analysis for $ F_n (t) $ for $ n > 1 $. The first one is a combination of [8, Proposition 2.22] and [8, Lemma 2.25].

    Lemma 2.1. Suppose that $ g\in \mathcal{M}_{\rho} $. Then, for any $ \varphi\in C_b^2 ({\mathbb R}_{+}) $,

    $ \begin{equation} \int_{{\mathbb R}_{+}^3} \Phi\, \left[ \left(\frac{g_1}{\sqrt{ \omega_1}}+\frac{g_2}{\sqrt{ \omega_2}}\right)\frac{g_3g_4}{\sqrt{ \omega_3 \omega_4}}-\left(\frac{g_3}{\sqrt{ \omega_3}}+\frac{g_4}{\sqrt{ \omega_4}}\right)\frac{g_1g_2}{\sqrt{ \omega_1 \omega_2}} \right] \, \varphi_1 = \int_{{\mathbb R}_{+}^3} \frac{g_1g_2g_3}{\sqrt{ \omega_1 \omega_2 \omega_3}} \mathcal{G}_{\varphi} \end{equation} $ (2.1)

    where both integrals are in $ d\omega_1d\omega_2d\omega_3 $ and the following notation is used:

    $ \begin{align} \mathcal{G}_{\varphi} ( \omega_1, \omega_2, \omega_3) & = \frac{1}{3} \, \left[\sqrt{ \omega_{-}} \mathcal{H}^1_{\varphi}( \omega_1, \omega_2, \omega_3) + \sqrt{( \omega_0+ \omega_{-}- \omega_{+})_{+}} \, \mathcal{H}^2_{\varphi} ( \omega_1, \omega_2, \omega_3)\right], \end{align} $ (2.2)
    $ \begin{align} \mathcal{H}_{\varphi}^1 ( \omega_1, \omega_2, \omega_3)& = \varphi( \omega_{+}+ \omega_{0}- \omega_{-}) +\varphi( \omega_{-}+ \omega_{+}- \omega_{0}) - 2\varphi( \omega_{+}), \end{align} $ (2.3)
    $ \begin{align} \mathcal{H}_{\varphi}^2 ( \omega_1, \omega_2, \omega_3)& = \varphi( \omega_{+}) + \varphi( \omega_{-}+ \omega_{0}- \omega_{+}) - \varphi( \omega_{0}) - \varphi( \omega_{-}), \end{align} $ (2.4)

    and

    $ \begin{align*} \omega_{+} ( \omega_1, \omega_2, \omega_3 ) & = \max \{ \omega_1, \omega_2, \omega_3\},\qquad \omega_{-} ( \omega_1, \omega_2, \omega_3 ) = \min \{ \omega_1, \omega_2, \omega_3\},\\ \omega_{0} ( \omega_1, \omega_2, \omega_3 ) & = \{ \omega_1, \omega_2, \omega_3\}-\{ \omega_{+}, \omega_{-}\}. \end{align*} $

    Moreover, if $ \varphi $ is convex we have that $ \mathcal{G}_{\varphi}\geq 0 $.

    Let $ g\in \mathcal{M}_\rho $ be a weak solution to (1.3) and $ \varphi\in C({\mathbb R}_{+}) $ be a convex function. Then

    $ \begin{equation} \frac{d}{dt} \left( \int_0^{\infty} \varphi( \omega) g(t,d \omega) \right) \geq 0, \qquad \mathit{\mbox{for a.e.}}\ t\geq 0. \end{equation} $ (2.5)

    The nice monotonicity formula (2.1) is a direct computation using the symmetries of the Eq (1.3). The proof can be found in [8, Proposition 2.22].
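    The ordered frequencies $ \omega_{+}, \omega_{0}, \omega_{-} $ appearing above are simply the three input frequencies sorted in decreasing order; for concreteness (helper name ours):

```python
def ordered_frequencies(w1, w2, w3):
    """Return (omega_plus, omega_zero, omega_minus): the max, middle and min
    of the three frequencies, as in Lemma 2.1."""
    w_plus, w_zero, w_minus = sorted((w1, w2, w3), reverse=True)
    return w_plus, w_zero, w_minus

print(ordered_frequencies(2, 5, 1))  # -> (5, 2, 1)
```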

    The next result guarantees that if we consider initial data $ g^{in} $ with discrete support, the dynamics will be discrete and nontrivial for later times. Before we state these results, we define some auxiliary sets to identify $ \mathrm{supp}(g(t)) $, with $ g $ being the solution to (1.3). Let $ A_1 = \mathrm{supp}(g^{in}) $ and define $ A_n $ inductively as:

    $ \begin{equation} A_{n+1} = \{x+y-z \, :\, x,y,z \in A_n\}\cap (0,\infty). \end{equation} $ (2.6)

    The idea behind these sets is the following: The wave interactions captured by the WKE (1.3) are those between waves of different frequencies satisfying $ \omega_4 = \omega_1+\omega_2-\omega_3 $. As a result, waves with frequencies in the set $ A_n $ will produce waves with frequencies in the set $ A_{n+1} $. In order to consider a set that includes all possible frequencies, we cannot stop this process at any finite $ n $ and it is therefore natural to define:

    $ \begin{equation} A^{\ast} = \bigcup\limits_{n = 1}^{\infty} A_n . \end{equation} $ (2.7)

    Notice that, if we start with a finite number of Dirac masses, namely

    $ g^{in} = M_1 \delta_1 + \sum\limits_{j = 2}^{N_0}m_j \delta_j $

    for some finite $ N_0\geq 2 $, it is easy to show that

    $ \begin{equation} A_1 = \{1,2,\dots,N_0\}\quad \Longrightarrow \quad A^{\ast} = {\mathbb N}. \end{equation} $ (2.8)
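    The growth of the sets $ A_n $ behind (2.8) is easy to verify directly. A small sketch (function name ours), starting from $ A_1 = \{1, 2\} $, i.e., $ N_0 = 2 $:

```python
def next_interaction_set(A):
    """A_{n+1} = {x + y - z : x, y, z in A_n} intersected with (0, inf), cf. (2.6)."""
    return {x + y - z for x in A for y in A for z in A if x + y - z > 0}

A = {1, 2}            # A_1 = supp(g^in)
for _ in range(3):    # A_2 = {1,2,3}, A_3 = {1,...,5}, A_4 = {1,...,9}
    A = next_interaction_set(A)
print(sorted(A))  # -> [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

    Each iteration roughly doubles the largest reachable frequency ($ 2\max A_n - 1 $), so every natural number is produced after finitely many steps, consistently with $ A^{\ast} = {\mathbb N} $.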

    We are now ready to state the following:

    Lemma 2.2 (Lemma 3.5 in [8]). Let $ \rho < -1 $, $ g^{in}\in \mathcal{M}_{\rho} $ and let $ g $ be the weak solution to the WKE (1.3). Suppose that $ M = \int_{{\mathbb R}_{+}} g^{in}(d \omega) > 0 $. Then for any $ x\in A^{\ast} $, any $ t > 0 $ and any $ r > 0 $ we have that

    $ \int_{B(x,r)} g(t,d \omega) > 0. $

    The lemma above guarantees that $ {\mathbb N} = A^{\ast} \subset \mbox{supp} (g(t)) $ for all times $ t > 0 $. In order to show the opposite inclusion, thus proving that $ \mbox{supp} (g(t)) = {\mathbb N} $, we need the following result.

    Lemma 2.3 (Lemma 3.8 in [8]). Let $ \rho < -1 $, $ g^{in}\in \mathcal{M}_{\rho} $ and let $ g $ be the weak solution to the WKE (1.3). Suppose that $ M = \int_{{\mathbb R}_{+}} g^{in}(d \omega) > 0 $ and that $ \inf A^{\ast} > 0 $. Then $ \mathit{\mbox{supp}}(g(t))\subset \overline{A^{\ast}} $ for any $ t\geq 0 $.

    The dynamics of our problem are therefore discrete. Thanks to Lemmas 2.2 and 2.3, we know that the functions

    $ \begin{equation} F_n (t) = \int_{{\mathbb R}_{+}}\varphi_n(\omega)g(t,d \omega) = \int_{\{n\}} g(t,d \omega), \qquad n\geq 1, \end{equation} $ (2.9)

    as introduced in (1.6), fully capture the dynamics of the problem. Thanks to the conservation of mass, we may assume that our solutions have unit mass. Then the conserved energy yields:

    $ \begin{equation} \sum\limits_{n = 1}^{+\infty}F_n(t) = M_1+M_2 = 1, \qquad E = \sum\limits_{n = 1}^{\infty}nF_n(t) = M_1+E_2, \end{equation} $ (2.10)

    where $ M_2, E_2 $ are as defined in Theorem 1.1. In order to prove lower bounds on $ F_2 $, it is convenient to define

    $ \begin{equation} H_n(t): = (F_n(t))^{-1}, \qquad t > 0. \end{equation} $ (2.11)

    We know these functions are well defined, since Lemma 2.2 guarantees that $ F_n(t) > 0 $ for all $ t > 0 $ and $ n\in{\mathbb N} $. Next we state the (infinite) system of ODEs for these quantities.

    Lemma 2.4. Let $ F_n, H_n $ be respectively given in (1.6) and (2.11). Then

    $ \begin{align} \frac{d}{dt}F_n& = F_n (Q_n-U_n)-F_n^2L_n+C_n, \end{align} $ (2.12)
    $ \begin{align} \frac{d}{dt}H_n& = H_n (U_n-Q_n)+L_n-H_n^2C_n \end{align} $ (2.13)

    where the terms $ L_n, Q_n, U_n, C_n $ are defined as follows:

    $ \begin{align} L_n = &\frac{2}{n}F_1+\frac{2}{n}\left(\sum\limits_{k = 2}^{n-1}F_k+\sum\limits_{k = n+1}^{2n-1}\frac{F_k\sqrt{2n-k}}{\sqrt{k}}\right); \end{align} $ (2.14)
    $ \begin{align} Q_n = &\ \frac{4}{\sqrt{n}}\frac{F_1F_{2n-1}}{\sqrt{2n-1}}+\sum\limits_{k = n+1}^{\infty}\frac{F_k^2}{k}+\frac{1}{\sqrt{n}}\sum\limits_{k = \lceil \frac{n}{2}\rceil }^{n-1}\frac{F_k^2}{k}\sqrt{2k-n} \end{align} $ (2.15)
    $ \begin{align} &+\frac{2}{\sqrt{n}}\left(2\sum\limits_{m = 2}^{n-1}\frac{F_mF_{2n-m}}{\sqrt{2n-m}}+\sum\limits_{k = \lceil \frac{n+1}{2}\rceil}^{n-1}\sum\limits_{m = n+1-k}^{k-1}\frac{F_kF_m}{\sqrt{km}}\sqrt{k+m-n}\right); \\ U_n = &\ \frac{4}{\sqrt{n}} F_1\sum\limits_{k = 2}^{n-1}\frac{F_k}{\sqrt{k}}+\frac{2}{\sqrt{n}}\left(\sum\limits_{k = 2}^{n-1}\sum\limits_{m = n+1}^{n+k}\frac{F_kF_m}{\sqrt{km}}\sqrt{n+k-m}+2\sum\limits_{k = 2}^{n-1}\sum\limits_{m = 2}^{k-1}\frac{F_kF_m}{\sqrt{k}}\right) \end{align} $ (2.16)
    $ \begin{align} &\ +\frac{2}{\sqrt{n}}\sum\limits_{k = n+1}^{\infty}\sum\limits_{m = k+1}^{n+k-1}\frac{F_kF_m}{\sqrt{km}}\sqrt{n+k-m};\\ C_n = &\ 2F_1\sum\limits_{k = n+1}^{\infty}\frac{F_kF_{k+1-n}}{\sqrt{k(k+1-n)}}+\sum\limits_{k = \lceil \frac{n+1}{2}\rceil}^{n-1}\frac{F_k^2F_{2k-n}}{k} \end{align} $ (2.17)
    $ \begin{align} &\ +2\left(\sum\limits_{k = 2}^{n-1}\sum\limits_{m = 2}^{k-1}\frac{F_kF_mF_{k+m-n}}{\sqrt{kn}}+\sum\limits_{k = n+1}^{\infty}\sum\limits_{m = 2}^{n-1}\frac{F_kF_mF_{k+m-n}}{\sqrt{k(k+m-n)}}\right)\\ \notag &\ +\sqrt{n}\left(2\sum\limits_{k = n+2}^{\infty}\sum\limits_{m = n+1}^{k-1}\frac{F_kF_mF_{k+m-n}}{\sqrt{km(k+m-n)}}+\sum\limits_{k = n+1}^{\infty}\frac{F_k^2F_{2k-n}}{k\sqrt{2k-n}}\right). \end{align} $

    The proof of Lemma 2.4 is a long albeit elementary computation, and so we postpone it to Section 5.
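    Note, however, that (2.13) follows from (2.12) by pure algebra: since $ H_n = F_n^{-1} $,

    $ \frac{d}{dt}H_n = -\frac{F_n'}{F_n^2} = -\frac{F_n (Q_n-U_n)-F_n^2L_n+C_n}{F_n^2} = H_n (U_n-Q_n)+L_n-H_n^2C_n. $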

    Remark 2.5. In the definitions of $ L_n, Q_n, U_n, C_n $, we have isolated the terms containing $ F_1 $ that appear as first terms on the right-hand side. For the long-term behavior, one should have in mind that $ F_1 = 1 $ up to small errors. Therefore when deriving a toy model or when doing a perturbative argument, it is natural to replace $ F_1 $ by $ 1 $ and consider all the other terms as lower order terms to be neglected (more precisely, one would hope to bootstrap a suitable smallness condition).

    In this section, we prove Theorem 1.1 and Proposition 3.4. We start by studying the function $ F_1 $. In particular, we want to be as quantitative as possible since, in view of the conservation of the mass (2.10), bounds on $ F_1 $ will yield a priori bounds for the rest of the $ F_n $, $ n\geq 2 $.

    Proposition 3.1. Let $ M_1, M_2\geq 0 $ be such that $ M_1+M_2 = 1 $. Then for $ t > t_0 $ we have

    $ \begin{align} &1-F_1(t)\leq \frac{c_1(t_0)}{\sqrt{b_1(t_0)\,(t-t_0)+3E}}, \end{align} $ (3.1)
    $ \begin{align} &c_1(t_0) = \sqrt{3E}(1-F_1(t_0)), \qquad b_1(t_0) = 2F_1(t_0)(1-F_1(t_0))^2. \end{align} $ (3.2)

    When $ t_0 = 0 $, notice that $ c_1(0) = :c_1 = \sqrt{3E}(1-M_1) $ and $ b_1(0) = :b_1 = 2M_1 (1-M_1)^2 $. Using (3.1), we readily obtain the following:

    Corollary 3.2. For $ t > t_0 $ we have that

    $ \begin{equation} 1-F_1(t)\geq F_2 (t) \geq \frac{c_2}{(t-t_0+3E/b_1(t_0))^{\alpha}+C_2}, \end{equation} $ (3.3)

    where $ c_2, C_2 $ can be explicitly computed and

    $ \begin{equation} \alpha = \max\left\{1, \frac{3E}{16F_1(t_0)}\right\}. \end{equation} $ (3.4)

    Remark 3.3. Given that $ F_1(t)\rightarrow 1 $ as $ t\rightarrow \infty $ monotonically, one can always choose $ t_0 $ large enough so that

    $ \begin{equation} \frac{3E}{16F_1(t_0)}\leq 1\quad \Longrightarrow \quad F_1(t_0)\geq \frac{3E}{16}. \end{equation} $ (3.5)

    This tells us that a lower bound with a rate $ \mathcal{O}(t^{-\alpha}) $ with $ \alpha > 1 $ cannot be sustained for all times. Moreover, since $ E = M_1+E_2 $, if we start with $ M_1\geq 3E_2/19 $ then we can take $ t_0 = 0 $ and $ \alpha = 1 $.

    The proof of Theorem 1.1 follows directly by combining Proposition 3.1 with Corollary 3.2, so we only prove the latter two results. The beginning of the proof of Proposition 3.1 is similar to that of Theorem 3.2 in [8]. The convergence result presented in [8, Theorem 3.2] is correct. However, the rate of convergence one could derive from the last differential inequality in its proof (see the second equality at page 53 in [8]) does not hold, due to an error in said inequality. We fix this error as part of the proof of Proposition 3.1. In fact, if the convergence rate that can be deduced from [8] were true, we would be able to prove that $ 1-F_1(t) = \mathcal{O}((t-t_0)^{-1}) $, and therefore Proposition 3.4 would not be a conditional result.

    Proof of Proposition 3.1. To obtain the bound (3.1) for $ F_1 $, we follow the argument in the proof of [8, Theorem 3.2]. In particular, choose the test function:

    $ \begin{equation} \varphi_{1}( \omega): = \left(3-2 \omega\right)_+. \end{equation} $ (3.6)

    Notice that $ \mathrm{supp}(\varphi_{1})\subset[0, 3/2] $ which implies $ F_1(t) = \int \varphi_{1}(\omega) g(t) $.

    Given that $ \varphi_1 $ in (3.6) is convex and that $ g $ solves (1.3), we apply Lemma 2.1 to get

    $ \begin{equation} F_1'(t) = \frac{d}{dt} \left( \int_{\{1\}} g(t) \right) \geq \int_{{\mathbb R}_{+}^3} \frac{g_1g_2g_3}{\sqrt{ \omega_1 \omega_2 \omega_3}} \mathcal{G}_{\varphi_1}, \end{equation} $ (3.7)

    where we omit the explicit dependencies on $ t, d \omega_j $ to ease the notation. Recalling the definitions of $ \mathcal{G}_{\cdot}, \mathcal{H}_{\cdot}^{1}, \mathcal{H}_{\cdot}^2 $ in (2.2)-(2.3), we claim that

    $ \begin{equation} \mathcal{H}_{\varphi_1}^2\geq0. \end{equation} $ (3.8)

    Indeed, the coefficient of $ \mathcal{H}_{\varphi_1}^2 $ in the formula of $ \mathcal{G}_{\varphi_1} $ is nonzero only if $ \omega_0+ \omega_{-}- \omega_{+} > 0 $. Note also that $ \mbox{supp}(\varphi_1) \cap \mbox{supp}(g(t)) = \{ 1\} $ so we only need to consider $ \omega_{+}, \omega_0, \omega_{-}\in {\mathbb N} $, where the following happens:

    ● If $ \omega_{+} < 1 $ then $ \omega_0 < 1 $ and $ \omega_{-} < 1 $ thus $ \mathcal{H}_{\varphi_1}^2 = 0 $.

    ● If $ \omega_{+} = 1 $, we must have $ \omega_0 = \omega_{-} = 1 $ since $ \omega_0+ \omega_{-}- \omega_{+} > 0 $. In this case $ \mathcal{H}_{\varphi_1}^2 = 0 $.

    ● If $ \omega_{+} > 1 $, there are two options:

    (1) Suppose at least one $ \omega_0 $ or $ \omega_{-} $ is 1. Given that $ \omega_0+ \omega_{-}- \omega_{+} > 0 $, the only option is $ \omega_{-} = 1 $ and $ \omega_0 = \omega_{+} $. In this case $ \mathcal{H}_{\varphi_1}^2 = 0 $.

    (2) If $ \omega_0 $ and $ \omega_{-} $ are not 1, then only $ \varphi_1 (\omega_0+ \omega_{-}- \omega_{+}) $ may be nonzero (if $ \omega_0+ \omega_{-}- \omega_{+} = 1 $) and thus $ \mathcal{H}_{\varphi_1}^2\geq 0 $.

    Therefore, combining the inequality (3.7) with (3.8) and the definition of $ \mathcal{G}_{\cdot} $ (2.2), we get

    $ \begin{equation} F_1'(t) \geq \frac{1}{3}\, \int_{{\mathbb R}_{+}^3} \frac{g_1g_2g_3}{\sqrt{ \omega_0 \omega_{+}}} \mathcal{H}_{\varphi_1}^{1}. \end{equation} $ (3.9)

    We may also restrict the integration to the set $ \omega_{+} > 1 $, since otherwise $ \mathcal{H}_{\varphi_1}^1 = 0 $. From the definition of $ \varphi_1 $ we see that $ \varphi_1 (\omega_{+}) = 0 $ as well as $ \varphi_1 (\omega_{+} + \omega_0 - \omega_{-}) = 0 $. Consequently,

    $ F_1'(t) \geq \frac{1}{3}\, \int_{\{ \omega_{+} > 1, \, \omega_j \in {\mathbb N}\}} \frac{g_1g_2g_3}{\sqrt{ \omega_0 \omega_{+}}} \, \varphi_1 ( \omega_{+} + \omega_{-} - \omega_0). $

    For $ \varphi_1(\omega_{+}+ \omega_{-}- \omega_0) $ to be nonzero we must also have that $ \omega_0 = \omega_+ > 1 $ and $ \omega_{-} = 1 $. Indeed, if $ \omega_+- \omega_0 > 0 $, since all frequencies are concentrated in $ {\mathbb N} $, we must have $ \omega_+- \omega_0\geq 1 $. Having also that $ \omega_{-}\geq 1 $, we conclude $ \omega_++ \omega_{-}- \omega_0\geq 2 $, which is outside the support of $ \varphi_1 $. Similarly, since $ \omega_0\leq \omega_{+} $ if $ \omega_{-}\geq 2 $ we have $ \omega_{+}+\omega_{-}-\omega_0\geq 2 $. Therefore

    $ \begin{align} F_1'(t) & \geq \frac{1}{3}\, \int_{\{ \omega_{-} = 1,\ \omega_0 = \omega_{+} > 1\}} \frac{g_1g_2g_3}{\sqrt{ \omega_0 \omega_{+}}}, \\ & \geq \frac{1}{3}\, F_1(t)\, \left( \int_{ {\mathbb N} -\{1\} } \frac{1}{\sqrt{ \omega}}\, g(t ) \right)^2 \\& \geq \frac{1}{3}\,F_1(t_0)\, \left( \int_{ {\mathbb N} -\{1\} } \frac{1}{\sqrt{ \omega}}\, g(t) \right)^2. \end{align} $ (3.10)

    In the last inequality we used the fact that $ F_1(t) $ is monotone nondecreasing. We may choose $ t_0 = 0 $ if $ F_1(0) = M_1\neq 0 $, whereas if $ M_1 = 0 $ any $ t_0 > 0 $ would do in view of Lemma 2.2.

    At this stage, we note that the factor $ \omega^{-1/2} $ on the right-hand side of (3.10) was missing in the proof of Theorem 3.2 in [8], and it is not clear why it can be removed. In fact, without it one can simply exploit the conservation of mass to conclude, see [8]. Here we have to be more careful: we exploit the conservation of both mass and energy, combined with an interpolation inequality. Namely, by Hölder's inequality,

    $ \begin{align} \int_{ {\mathbb N} -\{1\} } g(t )&\leq \bigg( \int_{ {\mathbb N} -\{1\} } \frac{1}{\sqrt{ \omega}}\, g(t ) \bigg)^{\frac23}\bigg( \int_{ {\mathbb N} -\{1\} } \omega\, g(t ) \bigg)^{\frac13}\leq \bigg( \int_{ {\mathbb N} -\{1\} } \frac{1}{\sqrt{ \omega}}\, g(t ) \bigg)^{\frac23}E^\frac13, \end{align} $ (3.11)

    where we used the conservation of the energy in the last inequality. Combining the bound above with (3.10), and using the conserved, normalized mass, we find that

    $ \begin{align*} F_1'(t) & \geq\frac13 \frac{F_1(t_0)}{E}\left( \int_{ {\mathbb N} -\{1\} } \, g(t ) \right)^{3} = \frac{F_1(t_0)}{3E}\left( 1-F_1(t)\right)^{3}. \end{align*} $

    Solving this differential inequality we obtain that

    $ \begin{align} 1-F_1(t)\leq \frac{c_1(t_0)}{\sqrt{b_1(t_0)(t-t_0)+3E}}, \end{align} $ (3.12)

    where $ c_1, b_1 $ are defined in (3.2).
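    To illustrate the rate in (3.12): setting $ u = 1-F_1 $ and $ k = F_1(t_0)/(3E) $, the differential inequality above reduces to $ u'\leq -k\,u^3 $, whose equality case has the explicit solution $ u(t) = u_0/\sqrt{1+2k u_0^2 t} $, i.e., the $ \mathcal{O}(t^{-1/2}) $ decay appearing on the right-hand side of (3.12). A minimal numerical sketch, with hypothetical values of $ k $ and $ u_0 $:

```python
import math

# Model ODE behind (3.12): u' = -k u^3 with u = 1 - F_1 and k = F_1(t_0)/(3E)
# (hypothetical values below). Exact solution: u(t) = u0/sqrt(1 + 2 k u0^2 t).
k, u0 = 0.7, 0.9
dt, T = 1e-4, 50.0

u, t = u0, 0.0
while t < T:
    u += dt * (-k * u**3)   # explicit Euler step
    t += dt

exact = u0 / math.sqrt(1 + 2 * k * u0**2 * T)
print(u, exact)             # the two values agree to a few digits
```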

    Proof of Corollary 3.2. To prove the lower bound on $ F_2 $, it is convenient to make use of $ H_2 = F_2^{-1} $, which is well defined thanks to Lemma 2.2. On account of (2.13) we have that

    $ \begin{equation} \frac{d}{dt}H_2 \leq H_2 (U_2 - Q_2)+L_2, \end{equation} $ (3.13)

    where

    $ \begin{align} &U_2 = \frac{2}{\sqrt{2}}\sum\limits_{k = 3}^{\infty}\frac{F_kF_{k+1}}{\sqrt{k(k+1)}}, \qquad Q_2 = \frac{4}{\sqrt{6}}\, F_1 F_{3}+\sum\limits_{k = 3}^{\infty}\frac{F_k^2}{k}, \end{align} $ (3.14)
    $ \begin{align} &L_2 = F_1 +\frac{F_3}{\sqrt{3}}\leq 2. \end{align} $ (3.15)

    Using the Cauchy-Schwarz inequality, we bound $ U_2 $ as

    $ \begin{equation} U_2\leq \sum\limits_{k = 3}^\infty \frac{F_k^2}{k}+\frac12\sum\limits_{k = 4}^\infty \frac{F_k^2}{k}. \end{equation} $ (3.16)

    For $ k\geq 2 $ we know that

    $ \begin{equation} F_k\leq \sum\limits_{j = 2}^\infty F_j = 1-F_1. \end{equation} $ (3.17)

    Since $ F_k > 0 $ for all $ k $ (see Lemma 2.2), we may drop the term $ \frac{4}{\sqrt{6}}F_1F_3 $ in $ Q_2 $; combining (3.16) and (3.17) with (3.1) then yields

    $ \begin{equation} U_2-Q_2\leq \frac12\sum\limits_{k = 4}^\infty \frac{F_k^2}{k}\leq \frac{1}{8}(1-F_1)^2\leq \frac{1}{8}\frac{c_1(t_0)^2}{b_1(t_0)(t-t_0)+3E}. \end{equation} $ (3.18)
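    The middle inequality in (3.18) only uses the nonnegativity of the $ F_k $, the bound (3.17), and $ k\geq 4 $. A quick numerical sanity check with arbitrary (hypothetical) nonnegative tail values:

```python
import random

# Elementary bound behind (3.18): if F_k >= 0 and s = sum_{k>=2} F_k (playing
# the role of 1 - F_1), then (1/2) sum_{k>=4} F_k^2 / k <= s^2 / 8, since each
# F_k <= s and each k >= 4.
random.seed(1)
F = {k: random.random() for k in range(2, 40)}   # hypothetical tail values
s = sum(F.values())

lhs = 0.5 * sum(F[k] ** 2 / k for k in range(4, 40))
rhs = s ** 2 / 8
print(lhs <= rhs)   # True
```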

    Without loss of generality, let us assume that $ t_0 = 0 $ and $ M_1 > 0 $ from now on. We denote $ c_1 (t_0) = c_1 $ and $ b_1(t_0) = b_1 $ for this choice. Then

    $ \begin{equation} \int_{s}^{t} (U_2 (\tau) - Q_2(\tau))\, d\tau \leq \int_{s}^{t} \frac{1}{8}\frac{c_1^2}{b_1 \tau+3E} \, d\tau \leq \frac{c_1^2}{8b_1} \log \left( \frac{b_1 t+3E}{b_1 s + 3E} \right). \end{equation} $ (3.19)

    We define $ \alpha = c_1^2/(8b_1) $ as announced in (3.4). Then

    $ \begin{equation} \exp\left( \int_{s}^{t} (U_2 (\tau) - Q_2(\tau))\, d\tau\right) \leq \left( \frac{b_1 t+3E}{b_1 s + 3E} \right)^{\alpha} . \end{equation} $ (3.20)

    Integrating (3.13) yields

    $ H_2(t)\leq e^{\int_{0}^t (U_2 (s) - Q_2(s)) \, ds} H_2(0)+\int_{0}^t e^{\int_{s}^t(U_2 (\tau) - Q_2(\tau))\, d\tau}L_2(s)\, ds. $

    If $ \alpha\neq 1 $, combining (3.20) with (3.15), we estimate the second term as follows:

    $ \begin{align*} \int_{0}^t e^{\int_{s}^t(U_2 (\tau) - Q_2(\tau))\, d\tau}L_2(s)\, ds &\leq 2\,\int_{0}^{t} \left(\frac{t+3E/b_1}{s+3E/b_1}\right)^{\alpha} \, ds \\ & = \frac{2}{\alpha-1} \, \left[ (0+3E/b_1) \, \left(\frac{t+3E/b_1}{0+3E/b_1}\right)^{\alpha}-(t+3E/b_1)\right]. \end{align*} $
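    The explicit antiderivative used in the display above can be double-checked by quadrature; a minimal sketch with hypothetical values of $ a = 3E/b_1 $ and $ \alpha\neq 1 $:

```python
import math

# Quadrature check of
#   \int_0^t ((t+a)/(s+a))^alpha ds
#     = (1/(alpha-1)) * ( a*((t+a)/a)^alpha - (t+a) ),   alpha != 1,
# with a = 3E/b1 (hypothetical values below).
a, alpha, t = 2.5, 1.7, 10.0

n = 200000                   # midpoint rule
h = t / n
quad = sum(((t + a) / ((i + 0.5) * h + a)) ** alpha for i in range(n)) * h
closed = (a * ((t + a) / a) ** alpha - (t + a)) / (alpha - 1)
print(quad, closed)          # agree to several digits
```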

    Putting these estimates together, we obtain

    $ H_2(t)\leq \frac{2}{\alpha-1} \, \left[ 3E/b_1 \, \left(\frac{t+3E/b_1}{3E/b_1}\right)^{\alpha}-(t+3E/b_1)\right] +\left(\frac{t+3E/b_1}{3E/b_1}\right)^{\alpha} \, H_2 (0). $

    When $ \alpha = 1 $ one would have a logarithmic correction instead of a power law in the bound above. Since $ H_2(t) = F_2^{-1}(t) $, we deduce the following:

    ● When $ \alpha > 1 $, we conclude that

    $ F_2 (t)\geq c_2((t+3E/b_1)^{\alpha}+C_2)^{-1}. $

    ● If $ \alpha = 1 $ we have

    $ F_2(t)\gtrsim c_2((t+3E/b_1)\log(t+3E/b_1)+C_2)^{-1}. $

    ● When $ \alpha < 1 $, the term in $ t $ is dominant and therefore we have that

    $ F_2 (t)\gtrsim c_2((t+3E/b_1)+C_2)^{-1}. $

    This concludes the proof of the corollary.

    Given the result in Theorem 1.1, it is natural to ask what the maximal speed of convergence of $ F_n $ with $ n > 1 $ could be. If the lower bound for $ 1-F_1 $ in (1.7) were sharp, then we could derive sharp lower and upper bounds for the speed of convergence of all the functions $ F_n $ for $ n $ large enough. This is the content of the following conditional result.

    Proposition 3.4. Under the same assumptions as in Theorem 1.1, suppose that the following inequality were true:

    $ \begin{equation} 1-F_1(t)\leq \frac{c}{t-t_0+C}, \end{equation} $ (3.21)

    for some constants $ c, C > 0 $ and $ t > t_0 $. Then for any $ n $ large enough such that

    $ \begin{equation} \gamma_n: = \frac{2c}{\sqrt{n}}\left(\frac{1}{\sqrt{n+2}}+\mathbb{1}_{\{n > 2\}}\frac{1}{\sqrt{n+1}}+\mathbb{1}_{\{n > 2\}}\frac{2}{\sqrt{2}}\right) < 1, \end{equation} $ (3.22)

    the following inequality holds for all $ t > t_0 $

    $ \begin{equation} \frac{c_n}{t-t_0+C}\leq F_n(t)\leq \frac{c}{t-t_0+C}, \end{equation} $ (3.23)

    where $ c_n $ can be explicitly computed.

    Proof. Recall the definitions of $ L_n, U_n $ in (2.14)–(2.16). Thanks to the bound on the total mass, we deduce that $ L_n\leq 4/n $. Therefore, from (2.12) we get

    $ \begin{equation} \frac{d}{d t}F_n\geq-U_nF_n-\frac{4}{n}F_n^2. \end{equation} $ (3.24)

    Regarding $ U_n $, exploiting the conservation of the mass, we have

    $ \begin{equation} \sum\limits_{k = n+1}^{\infty}\sum\limits_{m = k+1}^{n+k-1}\frac{F_kF_m}{\sqrt{km}}\sqrt{n+k-m}\leq\frac{1}{\sqrt{n+2}} \sum\limits_{k = n+1}^{\infty}F_k\sum\limits_{m = n+2}^{\infty}F_m\leq\frac{1}{\sqrt{n+2}}(1-F_1)^2. \end{equation} $ (3.25)

    If $ n > 2 $, we have additional terms in $ U_n $, which we bound as follows:

    $ \begin{equation} \sum\limits_{k = 2}^{n-1}\sum\limits_{m = n+1}^{n+k}\frac{F_kF_m}{\sqrt{km}}\sqrt{n+k-m}\leq\frac{1}{\sqrt{n+1}} \sum\limits_{k = 2}^{n-1}F_k\sum\limits_{m = n+1}^{n+k}F_m\leq\frac{1}{\sqrt{n+1}}(1-F_1)^2, \end{equation} $ (3.26)

    and

    $ \begin{equation} 2\sum\limits_{k = 2}^{n-1}\sum\limits_{m = 1}^{k-1}\frac{F_k F_m}{\sqrt{k}}\leq\frac{2}{\sqrt{2}}\left(F_1\sum\limits_{k = 2}^{n-1}F_k+\sum\limits_{k = 2}^{n-1}F_k \sum\limits_{m = 2}^{k-1}F_m\right)\leq \frac{2}{\sqrt{2}}(1-F_1)(F_1+(1-F_1)). \end{equation} $ (3.27)

    The upper bound above is the worst in terms of decay in time since it contains a factor $ F_1 $. Therefore, since $ (1-F_1)\leq1 $, combining (3.25)–(3.27) we obtain

    $ \begin{equation} U_n\leq \tilde{\gamma}_n(1-F_1), \qquad \tilde{\gamma}_n = \frac{2}{\sqrt{n}}\left(\frac{1}{\sqrt{n+2}}+\mathbb{1}_{\{n > 2\}}\frac{1}{\sqrt{n+1}}+\mathbb{1}_{\{n > 2\}}\frac{2}{\sqrt{2}}\right). \end{equation} $ (3.28)

    For simplicity of notation, consider now $ t_0 = 0 $ in (3.21). Combining (3.24) with (3.28) we obtain

    $ \begin{equation} \frac{d}{d t}F_n\geq -\frac{\gamma_n}{t+C}F_n-\frac{4}{n}F_n^2, \end{equation} $ (3.29)

    where $ \gamma_n = \tilde{\gamma}_nc $ was given in (3.22). Defining $ G_n: = (t+C)^{\gamma_n}F_n $, we find

    $ \begin{equation} \frac{d}{dt}G_n\geq-\frac{4}{n(t+C)^{\gamma_n}}G_n^2. \end{equation} $ (3.30)

    Since $ \gamma_n < 1 $ by hypothesis, by a comparison principle we get

    $ \begin{equation} \frac{1}{G_n (0)} - \frac{1}{G_n (t)} \geq -\frac{4}{n (1-\gamma_n)} \, \left((t+C)^{1-\gamma_n}-C^{1-\gamma_n}\right), \end{equation} $ (3.31)

    which immediately implies

    $ \begin{equation} F_n(t)\geq \frac{1}{ \frac{4}{n(1-\gamma_n)} \, \left((t+C) -C^{1-\gamma_n} (t+C)^{\gamma_n}\right)\, +(CF_n (0))^{-1}(t+C)^{\gamma_n} }, \end{equation} $ (3.32)

    which proves Proposition 3.4, with $ c_n $ computable from the inequality above.
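    The comparison step from (3.30) to (3.31) can be sanity-checked on the equality case: for $ G' = -\frac{4}{n(t+C)^{\gamma_n}}G^2 $ with $ \gamma_n < 1 $, one has $ 1/G(t) = 1/G(0) + \frac{4}{n(1-\gamma_n)}\big((t+C)^{1-\gamma_n}-C^{1-\gamma_n}\big) $. A minimal numerical sketch with hypothetical parameters:

```python
# Equality case of the comparison argument (3.30)-(3.31) (hypothetical n, g, C, G0):
# integrate G' = -(4/(n (t+C)^g)) G^2 by explicit Euler and compare with the
# closed-form solution 1/G(t) = 1/G(0) + (4/(n(1-g))) ((t+C)^{1-g} - C^{1-g}).
n, g, C, G0 = 3, 0.4, 1.0, 0.5
dt, T = 1e-4, 20.0

G, t = G0, 0.0
while t < T:
    G += dt * (-(4.0 / (n * (t + C) ** g)) * G ** 2)
    t += dt

closed = 1.0 / (1.0 / G0 + 4.0 / (n * (1 - g)) * ((T + C) ** (1 - g) - C ** (1 - g)))
print(G, closed)   # agree to a few digits
```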

    In this section we discuss the toy model announced in the introduction (1.8). Our main goal is to understand whether there could be solutions exhibiting a decay of order $ \mathcal{O}(t^{-1}) $, which could be possible for particular initial data and would justify the optimality of the lower bound in Theorem 1.1. Investigating this question directly on the full system (2.12) seems hard. For example, a naive ansatz imposing a polynomial decay for $ F_n $ is not consistent with the equations, because $ F_1 $ cannot decay and thus behaves differently from all the other $ F_n $. Therefore, we first aim at reducing the complexity of the system by performing the following reductions:

    ● By (2.12), for $ n\geq 2 $, $ dF_n/dt $ is a weighted sum of products of the form $ F_i F_j F_k $. We drop all terms where $ \{i, j, k\}\cap \{1\} = \emptyset $.

    ● All the remaining terms have at most one factor of $ F_1 $. In view of Proposition 3.1, we replace each such factor by $ 1 $.

    The resulting toy model may be written as

    $ \begin{equation} \frac{d}{dt}F_n = 4\, \frac{F_n}{\sqrt{n}} \, \frac{F_{2n-1}}{\sqrt{2n-1}} - 4\, \frac{F_n}{\sqrt{n}}\, \sum\limits_{k = 2}^{n-1} \frac{F_k}{\sqrt{k}}- 2 \left(\frac{F_n}{\sqrt{n}}\right)^2+ 2\sum\limits_{k = n+1}^{\infty} \frac{F_kF_{k+1-n}}{\sqrt{k(k+1-n)}}. \end{equation} $ (4.1)

    The idea behind our approximating toy model is that the leading order terms are dictated by interactions with $ F_1 $. Indeed, we know that all the mass converges towards $ F_1 $, and therefore interactions between $ F_j $ with $ j\neq 1 $ are lower order. In this toy model, however, many nice properties, such as positivity of the solution, can no longer be expected to hold. On the other hand, the advantage of Eq (4.1) is that its right-hand side is quadratic. We thus propose the natural self-similar ansatz of the form

    $ F_n(t) = \beta_n\frac{\sqrt{n}}{t}, \qquad \text{for } 2\leq n\in {\mathbb N}. $

    We plug this ansatz into (4.1) and derive equations for the coefficients $ (\beta_n)_{n\geq 2} $

    $ \begin{equation} -\sqrt{n}\, \beta_n = 4\, \beta_n \, \beta_{2n-1} - 4\, \beta_n \, \sum\limits_{k = 2}^{n-1} \beta_k- 2 \beta_n^2+ 2\sum\limits_{k = n+1}^{\infty} \beta_k \beta_{k+1-n}. \end{equation} $ (4.2)
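    A basic consistency check of the ansatz: since the right-hand side of (4.1) is quadratic in the $ F_k $, each of its terms scales exactly like $ t^{-2} $ under $ F_n = \beta_n\sqrt{n}/t $, matching $ \frac{d}{dt}F_n = -\beta_n\sqrt{n}/t^{2} $. A small sketch with an arbitrary (hypothetical) finitely supported choice of the $ \beta_n $:

```python
import math, random

# The right-hand side of (4.1) is quadratic in F, so under F_n(t) = beta_n*sqrt(n)/t
# it scales as t^{-2}: rhs(n, t) = rhs(n, 1)/t^2. Checked with arbitrary
# (hypothetical) coefficients beta_n supported on 2 <= n < K.
random.seed(0)
K = 30
beta = {n: random.random() for n in range(2, K)}   # beta_n = 0 for n >= K

def F(n, t):
    return beta.get(n, 0.0) * math.sqrt(n) / t

def rhs(n, t):   # right-hand side of (4.1); the infinite sum truncates at k < 2K
    return (4 * F(n, t) / math.sqrt(n) * F(2 * n - 1, t) / math.sqrt(2 * n - 1)
            - 4 * F(n, t) / math.sqrt(n) * sum(F(k, t) / math.sqrt(k) for k in range(2, n))
            - 2 * (F(n, t) / math.sqrt(n)) ** 2
            + 2 * sum(F(k, t) * F(k + 1 - n, t) / math.sqrt(k * (k + 1 - n))
                      for k in range(n + 1, 2 * K)))

print(rhs(5, 1.0), 4 * rhs(5, 2.0))   # equal: the rhs scales as t^{-2}
```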

    Our goal is to investigate whether or not there are positive solutions to this toy model, always keeping in mind that the behavior should be consistent with that of the full WKE, especially regarding the positivity properties of Lemmas 2.2 and 2.3, which here translate into the positivity of the coefficients $ \beta_n $. We do not focus too much on the mass and energy properties, since one can suitably rescale time in the self-similar ansatz to adjust the parameters.

    For the sake of understanding this toy model, we would like to introduce a suitable truncation and exhibit positive solutions to the truncated system. The aim would be to use such solutions as the starting point of a perturbative argument in the full WKE. First of all, we observe the following:

    Remark 4.1. Consider a solution of the form $ \beta_n = 0 $ for all $ n\geq N $. In the case $ N = 4 $, it is straightforward to check that $ \beta_2, \beta_3 $ must be given by

    $ \beta_2 = \frac{\sqrt{2}+3\sqrt{3}}{14} > 0, \qquad \beta_3 = \frac{-2\sqrt{2}+\sqrt{3}}{14} < 0, $

    after imposing $ \beta_2, \beta_3\neq 0 $. Similarly, if $ N = 5 $ the only real-valued solutions have $ \beta_4 < 0 $. For $ N = 6 $, one can numerically compute the four exact (nonzero) real-valued solutions to the system, but none of them lies in $ (0, \infty)^4 $.
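    The closed-form values above can be verified directly; a minimal numerical check of the $ N = 4 $ case:

```python
import math

# Check of Remark 4.1 (N = 4): with beta_n = 0 for n >= 4, system (4.2) for
# n = 2, 3 reduces to
#   -sqrt(2) b2 = 4 b2 b3 - 2 b2^2 + 2 b3 b2,
#   -sqrt(3) b3 = -4 b2 b3 - 2 b3^2,
# and the closed-form values below solve it, with b2 > 0 > b3.
b2 = (math.sqrt(2) + 3 * math.sqrt(3)) / 14
b3 = (-2 * math.sqrt(2) + math.sqrt(3)) / 14

res2 = -math.sqrt(2) * b2 - (4 * b2 * b3 - 2 * b2**2 + 2 * b3 * b2)
res3 = -math.sqrt(3) * b3 - (-4 * b2 * b3 - 2 * b3**2)
print(res2, res3, b2 > 0 > b3)   # ~0, ~0, True
```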

    These examples suggest that if we brutally truncate all modes $ n\geq N $, there is no guarantee of finding strictly positive solutions to our system, which is clearly not consistent with Lemma 2.2.

    To overcome the issues related to a standard truncation explained in Remark 4.1, we allow a large gap between high and low frequencies. This is helpful because we are able to "force" a positive solution in the low frequencies by using the very high frequencies as given parameters of the chosen cut-off. Essentially, we are exploiting the highly nonlocal nature of the system (4.2), which can be heuristically motivated by the presence of a high-to-low frequency cascade. More specifically, we consider the truncated system for $ \mathit{\boldsymbol{\beta}}: = (\beta_2, \ldots, \beta_{N}) $ by setting

    $ \begin{equation} \beta_k = 0 \quad \mbox{for all}\ N+1 \leq k < 2N,\ \mbox{and}\ k\geq 3N-1. \end{equation} $ (4.3)

    We keep the $ N-1 $ coefficients $ \mathit{\boldsymbol{\lambda}}: = (\beta_{2N}, \beta_{2N+1}, \ldots, \beta_{3N-2}) $ as parameters which are a priori fixed. With this in mind, (4.2) reads

    $ \begin{equation} -\sqrt{n}\, \beta_n = 4\, \beta_n \, \beta_{2n-1} - 4\, \beta_n \, \sum\limits_{k = 2}^{n-1}\beta_k - 2 \beta_n^2 + 2\sum\limits_{k = n+1}^{3N-2} \beta_k \beta_{k+1-n}, \qquad \mbox{for}\ n = 2,\ldots, N. \end{equation} $ (4.4)

    Exploiting (4.3), we may rewrite this as:

    $ \begin{equation} \begin{split} -\sqrt{n}\, \beta_n = &\ 4\, \beta_n \, \beta_{2n-1} \mathbb{1}_{\{2n-1\leq N\}} - 4\, \beta_n \, \sum\limits_{k = 2}^{n-1} \beta_k- 2 \beta_n^2\\ & + 2\sum\limits_{k = n+1}^{N} \beta_k \beta_{k+1-n} + 2\sum\limits_{k = 2N+1}^{3N-2} \beta_k \beta_{k+1-n} , \qquad \mbox{for}\ n = 2,\ldots, N. \end{split} \end{equation} $ (4.5)

    For this system, we have a special solution given by

    $ \begin{align} \mathit{\boldsymbol{\beta}}^0 = ( \gamma, 0, \ldots ,0), \qquad \mathit{\boldsymbol{\lambda}}^0 = (\lambda_1^0 , \lambda_2^0, 0, \ldots,0), \end{align} $ (4.6)

    where

    $ \begin{equation} \gamma = \frac{\sqrt{2} + \sqrt{2+ 16\lambda_1^{0}\lambda_2^{0} }}{4}. \end{equation} $ (4.7)
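    With $ \mathit{\boldsymbol{\beta}}^0 $ and $ \mathit{\boldsymbol{\lambda}}^0 $ as in (4.6), the only nontrivial equation in (4.5) is the one with $ n = 2 $, which reduces to $ -\sqrt{2}\gamma = -2\gamma^2+2\lambda_1^0\lambda_2^0 $; the value (4.7) is the positive root of this quadratic. A quick check with hypothetical values of $ \lambda_1^0, \lambda_2^0 $:

```python
import math

# With beta^0, lambda^0 as in (4.6), the n = 2 equation of (4.5) reduces to
# -sqrt(2)*gamma = -2*gamma^2 + 2*lambda1*lambda2, and (4.7) is its positive root.
# Hypothetical values for lambda_1^0, lambda_2^0:
l1, l2 = 1.7, 2.3
gamma = (math.sqrt(2) + math.sqrt(2 + 16 * l1 * l2)) / 4

residual = -math.sqrt(2) * gamma - (-2 * gamma ** 2 + 2 * l1 * l2)
print(residual, gamma > 0)   # ~0, True
```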

    Around this particular solution, we are able to show the existence of many solutions $ \mathit{\boldsymbol{\beta}} $ to (4.5) with strictly positive components. Therefore, we obtain many solutions of the system (4.2) truncated at $ N $, except for the coefficients with indices between $ 2N $ and $ 3N-2 $. This is the content of Theorem 1.4, which we restate more precisely as follows:

    Theorem 4.2. Fix any $ N\in{\mathbb N} $ with $ N\geq 4 $. Fix $ \lambda_1^0, \lambda_2^0 > 0 $ such that

    $ \begin{equation} \lambda^0_1\lambda^0_2 > N/4. \end{equation} $ (4.8)

    Then there exists some $ \delta_0 (N, \lambda_1^0, \lambda_2^0) > 0 $ such that for all $ \delta < \delta_0 $, there exists a solution to (4.5) (which solves (4.2) for $ n\leq N $) such that $ \beta_2, \ldots, \beta_N, \beta_{2N}, \ldots, \beta_{3N-2} > 0 $, and $ \beta_k = 0 $ for the rest of $ k\in{\mathbb N}\cap [2, \infty) $. Moreover, this solution satisfies

    $ \begin{equation} \begin{split} \beta_2 & = \gamma + {\mathcal O} ( \sqrt{N} \,\delta\, \max\{ \lambda^0_1,\lambda^0_2\}),\\ \beta_j & = {\mathcal O} ( \sqrt{N} \,\delta\, \max\{ \lambda^0_1,\lambda^0_2\}), \qquad j = 3,\ldots,N,\\ \beta_{2N} & = \lambda^0_1 + {\mathcal O} (\delta),\\ \beta_{2N+1} & = \lambda^0_2 + {\mathcal O} (\delta),\\ \beta_{j} & = {\mathcal O}(\delta), \qquad j = 2N+2,\ldots, 3N-2. \end{split} \end{equation} $ (4.9)

    Proof. The idea of the proof is to construct the positive solutions by using the implicit function theorem for a map whose zeros are solutions to (4.4). We have to carefully set the parameters in the special solution (4.6) in order to guarantee the positivity of the new solution. The proof is divided into four steps.

    Step 1. Consider the map

    $ \begin{split} f: {\mathbb R}^{N-1}\times & {\mathbb R}^{N-1} \longrightarrow {\mathbb R}^{N-1}, \\ f(\mathit{\boldsymbol{\beta}},\mathit{\boldsymbol{\lambda}}) & = (f_n (\mathit{\boldsymbol{\beta}},\mathit{\boldsymbol{\lambda}}) )_{n = 2}^{N} \end{split} $

    where

    $ \begin{equation} \begin{split} f_n (\mathit{\boldsymbol{\beta}},\mathit{\boldsymbol{\lambda}}) = & \ 2 \beta_n^2 + 4\, \beta_n \, \sum\limits_{k = 2}^{n-1} \beta_k -4\, \beta_n \, \beta_{2n-1} \mathbb{1}_{\{2n-1\leq N\}}\\ & - 2\sum\limits_{k = n+1}^{N} \beta_k \beta_{k+1-n} -2 \sum\limits_{k = 2N+1}^{3N-2} \beta_k \beta_{k+1-n} - \sqrt{n} \beta_n \end{split} \end{equation} $ (4.10)

    and

    $ \mathit{\boldsymbol{\beta}} = ( \beta_2,\ldots,\beta_N), \qquad \mathit{\boldsymbol{\lambda}} = (\lambda_1 , \ldots, \lambda_{N-1}) = ( \beta_{2N},\ldots,\beta_{3N-2}). $

    Notice that $ f_n(\mathit{\boldsymbol{\beta}}, \mathit{\boldsymbol{\lambda}}) = 0 $ for all $ 2\leq n\leq N $ corresponds to a solution of (4.4). We thus consider the special point:

    $ \mathit{\boldsymbol{\beta}}^{0} = ( \gamma, 0, \ldots ,0), \qquad \mathit{\boldsymbol{\lambda}}^{0} = (\lambda_1^{0} , \lambda_2^{0}, 0, \ldots,0), $

    where $ \gamma $ is defined in (4.7). We know that $ f (\mathit{\boldsymbol{\beta}}^{0}, \mathit{\boldsymbol{\lambda}}^{0}) = 0 $. Moreover, the Jacobian matrix at this point is non-singular. More precisely,

    $ \begin{equation} J_{\beta} f (\mathit{\boldsymbol{\beta}}^{0},\mathit{\boldsymbol{\lambda}}^{0} ) = \mbox{diag}\left( 4\gamma - \sqrt{n}\right)_{2\leq n\leq N} + \left( -2 \gamma\, \delta_{j = i+1}\right)_{1\leq i,j\leq N-1}. \end{equation} $ (4.11)

    Notice that this is an upper triangular matrix, meaning that it is invertible provided

    $ 4\gamma - \sqrt{n} \neq 0, \qquad \mbox{for all}\ n = 2,\ldots, N. $

    For reasons that will be clear later, we impose that

    $ \begin{equation} \gamma > \frac14\sqrt{N}, \end{equation} $ (4.12)

    which implies that every diagonal entry in (4.11) is strictly positive.

    By the implicit function theorem, there exist $ \varepsilon, \delta > 0 $ such that $ \mathit{\boldsymbol{\beta}} $ can be written as a smooth function of $ \mathit{\boldsymbol{\lambda}} $ in small neighborhoods of our zero, i.e.,

    $ \begin{split} \mathit{\boldsymbol{\beta}}: B(\mathit{\boldsymbol{\lambda}}^{0}, \delta) & \longrightarrow B(\mathit{\boldsymbol{\beta}}^{0},\varepsilon),\\ \mathit{\boldsymbol{\lambda}} & \longmapsto \mathit{\boldsymbol{\beta}}(\mathit{\boldsymbol{\lambda}}) \end{split} $

    and such that $ f(\mathit{\boldsymbol{\beta}}(\mathit{\boldsymbol{\lambda}}), \mathit{\boldsymbol{\lambda}}) = 0 $ for all $ \mathit{\boldsymbol{\lambda}} \in B(\mathit{\boldsymbol{\lambda}}^{0}, \delta) $. Moreover, we know that

    $ \begin{equation} J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} (\mathit{\boldsymbol{\lambda}}^{0}) = - J_{\mathit{\boldsymbol{\beta}}}f (\mathit{\boldsymbol{\beta}}^{0},\mathit{\boldsymbol{\lambda}}^{0})^{-1} \, \cdot\, J_{\mathit{\boldsymbol{\lambda}}}f (\mathit{\boldsymbol{\beta}}^{0},\mathit{\boldsymbol{\lambda}}^{0}). \end{equation} $ (4.13)

    Step 2. We now compute the Jacobian matrix $ J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} (\mathit{\boldsymbol{\lambda}}^{0}) $. Set

    $ \begin{equation} c_n = \frac{2\gamma}{4\gamma-\sqrt{n}}, \qquad n = 2,\ldots, N-1, \end{equation} $ (4.14)

    and let

    $ A: = \begin{pmatrix} 1 & c_2 & c_2 c_3 & c_2 c_3 c_4 & \ldots & \prod\limits_{j = 2}^{N-1} c_j \\ 0 & 1 & c_3 & c_3 c_4 & \ldots & \prod\limits_{j = 3}^{N-1} c_j\\ 0 & 0 & 1 & c_4 & \ldots & \prod\limits_{j = 4}^{N-1} c_j\\ \vdots & & \ddots & \ddots & \ddots & \vdots\\ \vdots & & & \ddots & \ddots & c_{N-1}\\ 0 & & \ldots & & 0 & 1 \end{pmatrix}. $

    Then, it is easy to check that

    $ \begin{equation} J_{\mathit{\boldsymbol{\beta}}}f (\mathit{\boldsymbol{\beta}}^{0},\mathit{\boldsymbol{\lambda}}^{0})^{-1} = A \cdot \mbox{diag}\left( \frac{1}{4\gamma - \sqrt{n}}\right)_{2\leq n \leq N}, \end{equation} $ (4.15)

    which is an upper triangular matrix with strictly positive entries in the upper triangle, thanks to (4.12) and (4.14). Similarly, we may compute

    $ \begin{equation} J_{\mathit{\boldsymbol{\lambda}}}f (\mathit{\boldsymbol{\beta}}^{0},\mathit{\boldsymbol{\lambda}}^{0}) = - 2\begin{pmatrix} \lambda_1^{0} & \lambda_2^{0} & 0 & 0 &0 & \ldots & 0 \\ 0 & \lambda_1^{0} & \lambda_2^{0} & 0 & 0 &\ldots & 0 \\ 0 & 0 & \lambda_1^{0} & \lambda_2^{0} & 0 & \ldots & 0 \\ \vdots & & \ddots & \ddots & \ddots & & \vdots \\ \vdots & & & \ddots & \ddots & \ddots & \vdots \\ \vdots & & & & \ddots & \ddots & \\ 0 & \cdots & & \cdots & & & \lambda_1^{0} \end{pmatrix}. \end{equation} $ (4.16)

    Therefore, using (4.13) it is not hard to deduce that $ J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} (\mathit{\boldsymbol{\lambda}}^{0}) $ is an upper triangular matrix with strictly positive entries on the upper triangle.
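    The inversion formula (4.15) can also be verified numerically for a small $ N $; a minimal sketch with hypothetical values of $ \lambda_1^0, \lambda_2^0 $ satisfying (4.8):

```python
import numpy as np

# Check of (4.15) for a small N: A * diag(1/(4*gamma - sqrt(n))) inverts the
# Jacobian (4.11). Hypothetical lambda_1^0, lambda_2^0 satisfying (4.8).
N = 8
l1, l2 = 3.0, 3.0                                       # l1*l2 = 9 > N/4
gamma = (np.sqrt(2) + np.sqrt(2 + 16 * l1 * l2)) / 4

d = 4 * gamma - np.sqrt(np.arange(2, N + 1))            # diagonal entries, n = 2..N
J = np.diag(d) + np.diag([-2 * gamma] * (N - 2), k=1)   # (4.11)

c = 2 * gamma / (4 * gamma - np.sqrt(np.arange(2, N)))  # c_2, ..., c_{N-1}, cf. (4.14)
A = np.eye(N - 1)
for i in range(N - 1):
    for j in range(i + 1, N - 1):
        A[i, j] = np.prod(c[i:j])                       # row n = i+2: c_{i+2}*...*c_{j+1}

print(np.allclose(J @ A @ np.diag(1 / d), np.eye(N - 1)))   # True
```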

    Step 3. We are finally in a position to construct the solution $ \mathit{\boldsymbol{\beta}} $ with all positive entries. By the Fundamental Theorem of Calculus, we have that

    $ \begin{equation} \mathit{\boldsymbol{\beta}} ( \mathit{\boldsymbol{\lambda}}) = \mathit{\boldsymbol{\beta}}^{0} + \int_0^1 J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} ( \mathit{\boldsymbol{\lambda}}^{0} + t\, (\mathit{\boldsymbol{\lambda}}-\mathit{\boldsymbol{\lambda}}^{0}))\cdot (\mathit{\boldsymbol{\lambda}}-\mathit{\boldsymbol{\lambda}}^{0})\, dt \end{equation} $ (4.17)

    where $ J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} $ is the Jacobian matrix of $ \mathit{\boldsymbol{\beta}} $ with respect to $ \mathit{\boldsymbol{\lambda}} $.

    Let us choose $ \mathit{\boldsymbol{\lambda}}\in B(\mathit{\boldsymbol{\lambda}}^{0}, \delta) $ such that $ \mathit{\boldsymbol{\lambda}}- \mathit{\boldsymbol{\lambda}}^{0}\in (0, \infty)^{N-1} $. Since the entries of the matrix $ J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} (\mathit{\boldsymbol{\lambda}}^{0}) $ are strictly positive in the upper triangle, we have that

    $ [J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} ( \mathit{\boldsymbol{\lambda}}^{0} + t\, (\mathit{\boldsymbol{\lambda}}-\mathit{\boldsymbol{\lambda}}^{0}))\cdot (\mathit{\boldsymbol{\lambda}}-\mathit{\boldsymbol{\lambda}}^{0})]_j \Big |_{t = 0} > 0 , \qquad \mbox{for all} \quad j = 1,\ldots, N-1. $

    By continuity of the Jacobian matrix, we can extend this positivity to any $ t\in [0, 1] $ as long as $ \mathit{\boldsymbol{\lambda}}-\mathit{\boldsymbol{\lambda}}^{0} $ is small enough (i.e., by potentially making $ \delta > 0 $ smaller). By (4.17), this implies that

    $ [\mathit{\boldsymbol{\beta}} ( \mathit{\boldsymbol{\lambda}}) ]_j > 0, \qquad \mbox{for all} \quad j = 1,\ldots, N-1. $

    Step 4. Let us further impose

    $ \begin{equation} \lambda_1^0 \lambda_2^0 > \frac14 N \end{equation} $ (4.18)

    in order to guarantee that $ \gamma > \sqrt{N}/2 $. This implies that $ c_n $ in (4.14) satisfies $ c_n\leq 1 $, and therefore the entries of the matrix in (4.15) are of size $ 1/\sqrt{N} $. Hence, the entries of the matrix $ J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} (\mathit{\boldsymbol{\lambda}}^{0}) $, see (4.13), have size $ \|{\boldsymbol{\lambda}^{0}}\|_{\infty}/\sqrt{N} $. Therefore, by summing up at most $ N $ terms of size $ \|{\boldsymbol{\lambda}^{0}}\|_{\infty}/\sqrt{N} $, we infer

    $ \left\lVert {J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} ( \mathit{\boldsymbol{\lambda}}^{0} )\cdot (\mathit{\boldsymbol{\lambda}}-\mathit{\boldsymbol{\lambda}}^{0})} \right\rVert_{\infty}\lesssim \sqrt{N} \, \max\{\lambda_1^{0},\lambda_2^{0}\}\, \delta. $

    By the continuity of the Jacobian matrix, one can arrange:

    $ \left\lVert {J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} ( \mathit{\boldsymbol{\lambda}}^{0} + t\, (\mathit{\boldsymbol{\lambda}}-\mathit{\boldsymbol{\lambda}}^{0}))\cdot (\mathit{\boldsymbol{\lambda}}-\mathit{\boldsymbol{\lambda}}^{0})} \right\rVert_{\infty}\lesssim \sqrt{N} \, \max\{\lambda_1^{0},\lambda_2^{0}\}\, \delta, \qquad \forall t\in [0,1], $

    by further reducing $ \delta $ if necessary. Therefore, in view of (4.17), we choose $ \delta $ small enough so that

    $ \begin{equation} \begin{split} \mathit{\boldsymbol{\beta}}(\mathit{\boldsymbol{\lambda}})_1 & = \beta_2 (\mathit{\boldsymbol{\lambda}}) = \gamma + {\mathcal O} ( \sqrt{N}\left\lVert {\mathit{\boldsymbol{\lambda}}^0} \right\rVert_{\infty}\, \delta),\\ \mathit{\boldsymbol{\beta}} (\mathit{\boldsymbol{\lambda}})_j & = {\mathcal O} ( \sqrt{N}\left\lVert {\mathit{\boldsymbol{\lambda}}^0} \right\rVert_{\infty}\, \delta),\qquad j = 2,\ldots,N-1,\\ \mathit{\boldsymbol{\lambda}}_1 & = \beta_{2N} = \lambda_1^0 + {\mathcal O} (\delta),\\ \mathit{\boldsymbol{\lambda}}_2 & = \beta_{2N+1} = \lambda_2^0 + {\mathcal O} (\delta),\\ \mathit{\boldsymbol{\lambda}}_j & = \beta_{2N+j-1} = {\mathcal O} (\delta), \qquad j = 3,\ldots, N-1, \end{split} \end{equation} $ (4.19)

    where $ \gamma $ is defined in (4.7) and we impose (4.18). This concludes the proof.

    We now derive the equations for $ F_n $ from the weak formulation (1.3). As test functions, we choose

    $ \begin{equation} \varphi_n( \omega) = \chi_{(n-1/2,n+1/2)}, \qquad \text{for } n\in \mathbb{N}\setminus \{0\}, \end{equation} $ (5.1)

    where the $ \chi $ are $ C_c^{\infty}({\mathbb R}) $ functions supported inside intervals of the form $ (n-1/2, n+1/2) $ and such that $ \varphi_n (n) = 1 $.

    From the definition of $ F_n $, see (1.6), and (1.3) we have

    $ \begin{equation} \begin{split} \partial_t F_n& = \int_{{\mathbb R}^3_+}\Phi \frac{g_1g_2g_3}{\sqrt{\omega_1\omega_2\omega_3}}[\varphi_{n,4}+\varphi_{n,3}-\varphi_{n,1}-\varphi_{n,2}]d \omega_1d \omega_2d \omega_3 = :\mathcal{I}[g,\varphi_n],\\ \omega_4& = \omega_1+ \omega_2- \omega_3. \end{split} \end{equation} $ (5.2)

    Notice that we always have $ \omega_3\neq \omega_2 $ since otherwise also $ \omega_4 = \omega_1 $ and the integrand above vanishes. Analogously, we have $ \omega_4\neq \omega_1 $.

    We have to distinguish several cases depending on the values of $ \varphi_{n, i} $, $ i = 1, \ldots, 4 $.

    $ \bullet $ Case $ \varphi_{n, 1} = \varphi_{n, 2} = 1 $.

    First notice that $ \varphi_{n, 3} = \varphi_{n, 4} = 0 $. Indeed, if $ \varphi_{n, 3} = 1 $, then $ \omega_4 = \omega_1+\omega_2- \omega_3 = n $, meaning that $ \varphi_{n, 4} = 1 $ as well. But if $ \varphi_{n, i} = 1 $ for $ i = 1, \dots, 4 $, the integrand is zero.

    When $ \omega_3 < n $ then $ \Phi = \sqrt{ \omega_3} $, hence

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,1} = \varphi_{n,2} = 1\}\cap \{ \omega_3 < n\}}] = -\frac{2}{n}F_n^2\sum\limits_{k = 1}^{n-1}F_k. \end{equation} $ (5.3)

    For $ \omega_3 > n $ we have $ \Phi = \sqrt{ \omega_4} = \sqrt{2n- \omega_3} $, therefore

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,1} = \varphi_{n,2} = 1\}\cap \{ \omega_3 > n\}}] = -\frac{2}{n}F_n^2\sum\limits_{k = n+1}^{2n-1}\frac{F_k\sqrt{2n-k}}{\sqrt{k}}. \end{equation} $ (5.4)

    $ \bullet $ Cases $ \varphi_{n, 1} = 1, \ \varphi_{n, 2} = 0 $ or $ \varphi_{n, 1} = 0, \ \varphi_{n, 2} = 1 $.

    In view of the symmetry of the integrals, the two cases under consideration are equal. If $ \varphi_{n, 3} = 1 $, then $ \omega_4 = \omega_2 $ meaning that $ \varphi_{n, 4} = 0 $. But if $ \varphi_{n, 1} = \varphi_{n, 3} = 1 $ and $ \varphi_{n, 2} = \varphi_{n, 4} = 0 $ then the integrand is zero. Analogously when $ \varphi_{n, 4} = 1 $. Hence we only have to consider $ \varphi_{n, 3} = \varphi_{n, 4} = 0 $.

    We use the following convention for the indices in this case

    $ k = \omega_2 \text{ and } m = \omega_3. $

    We start with the case $ \omega_2 < n $. When $ \omega_3 > n $ then $ \Phi = \sqrt{n+k-m} $ and

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,1} = 1, \varphi_{n,2} = 0\}\cap \{ \omega_2 < n, \omega_3 > n\}}] = -\frac{1}{\sqrt{n}}F_n\sum\limits_{k = 1}^{n-1}\sum\limits_{m = n+1}^{n+k}\frac{F_kF_m}{\sqrt{km}}\sqrt{n+k-m}. \end{equation} $ (5.5)

    When $ \omega_3 < \omega_2 $ one has $ \Phi = \sqrt{m} $ meaning that

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,1} = 1, \varphi_{n,2} = 0\}\cap \{ \omega_3 < \omega_2 < n\}}] = -\frac{1}{\sqrt{n}}F_n\sum\limits_{k = 2}^{n-1}\sum\limits_{m = 1}^{k-1}\frac{F_kF_m}{\sqrt{k}}. \end{equation} $ (5.6)

    If $ \omega_2 < \omega_3 < n $ then $ \Phi = \sqrt{k} $ so that

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,1} = 1, \varphi_{n,2} = 0\}\cap \{ \omega_2 < \omega_3 < n\}}] = -\frac{1}{\sqrt{n}}F_n\sum\limits_{m = 2}^{n-1}\sum\limits_{k = 1}^{m-1}\frac{F_kF_m}{\sqrt{m}}. \end{equation} $ (5.7)

    When $ \omega_2 > n $, first consider $ \omega_3 < \omega_2 $. If $ \omega_3 < n $ then $ \Phi = \sqrt{m} $ and

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,1} = 1, \varphi_{n,2} = 0\}\cap \{ \omega_3 < n < \omega_2\}}] = -\frac{1}{\sqrt{n}}F_n\sum\limits_{k = n+1}^{\infty}\sum\limits_{m = 1}^{n-1}\frac{F_kF_m}{\sqrt{k}}. \end{equation} $ (5.8)

    For $ n < \omega_3 < \omega_2 $ one has $ \Phi = \sqrt{n} $ hence we get

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,1} = 1, \varphi_{n,2} = 0\}\cap \{n < \omega_3 < \omega_2\}}] = -F_n\sum\limits_{k = n+2}^{\infty}\sum\limits_{m = n+1}^{k-1}\frac{F_kF_m}{\sqrt{km}}. \end{equation} $ (5.9)

    If $ \omega_3 > \omega_2 > n $, then $ \Phi = \sqrt{n+k-m} $ and

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,1} = 1, \varphi_{n,2} = 0\}\cap \{n < \omega_2 < \omega_3\}}] = -\frac{1}{\sqrt{n}}F_n\sum\limits_{k = n+1}^{\infty}\sum\limits_{m = k+1}^{n+k-1}\frac{F_kF_m}{\sqrt{km}}\sqrt{n+k-m}. \end{equation} $ (5.10)

    This concludes all the possible cases for $ \varphi_{n, 1} = 1, \varphi_{n, 2} = 0 $. On account of the symmetry $ \omega_1\leftrightarrow \omega_2 $, we remark again that all the terms appearing here are multiplied by a factor $ 2 $ in (2.12).

    $ \bullet $ Case $ \varphi_{n, 3} = \varphi_{n, 4} = 1. $

    In this case we also have $ \varphi_{n, 1} = \varphi_{n, 2} = 0 $ since otherwise the integrand is zero. The index convention in this case is

    $ k = \omega_2 \text{ and } m = \omega_1. $

    We also have $ \omega_1+ \omega_2 = 2n $, hence, if $ \omega_1 < \omega_2 $ then $ \omega_2 > n $ and $ \Phi = \sqrt{m} $. Since we can always exchange $ \omega_1 $ and $ \omega_2 $ we conclude that

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,3} = \varphi_{n,4} = 1\}}] = \frac{4}{\sqrt{n}}F_n\sum\limits_{m = 1}^{n-1}\frac{F_mF_{2n-m}}{\sqrt{2n-m}}. \end{equation} $ (5.11)

    $ \bullet $ Case $ \varphi_{n, 3} = 1, \varphi_{n, 4} = 0. $

    In this case we know that $ \varphi_{n, 1} = \varphi_{n, 2} = 0 $ since otherwise the integrand is zero. We again denote

    $ k = \omega_2 \text{ and } m = \omega_1. $

    First we consider $ \omega_1 < \omega_2 $. On account of the symmetries, the case $ \omega_2 < \omega_1 $ is identical. If $ \omega_2 < n $ then $ \Phi = \sqrt{k+m-n} $ and

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,3} = 1, \varphi_{n,4} = 0\}\cap \{ \omega_1 < \omega_2 < n\}}] = \frac{1}{\sqrt{n}}F_n\sum\limits_{k = \lceil \frac{n+1}{2}\rceil}^{n-1}\sum\limits_{m = n+1-k}^{k-1}\frac{F_kF_m}{\sqrt{km}}\sqrt{k+m-n}. \end{equation} $ (5.12)

    For $ \omega_1 < n < \omega_2 $ one has $ \Phi = \sqrt{m} $ hence

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,3} = 1, \varphi_{n,4} = 0\}\cap \{ \omega_1 < n < \omega_2\}}] = \frac{1}{\sqrt{n}}F_n\sum\limits_{k = n+1}^{\infty}\sum\limits_{m = 1}^{n-1}\frac{F_kF_m}{\sqrt{k}}. \end{equation} $ (5.13)

    When $ n < \omega_1 < \omega_2 $ then $ \Phi = \sqrt{n} $, from which we get

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,3} = 1, \varphi_{n,4} = 0\}\cap \{n < \omega_1 < \omega_2\}}] = F_n\sum\limits_{k = n+2}^{\infty}\sum\limits_{m = n+1}^{k-1}\frac{F_kF_m}{\sqrt{km}}. \end{equation} $ (5.14)

    The three terms above are multiplied by a factor $ 2 $ in (2.12) in view of the symmetry $ \omega_1 \leftrightarrow \omega_2 $.

    We are left only with the case $ \omega_1 = \omega_2 $. If $ \omega_2 > n $ then

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,3} = 1, \varphi_{n,4} = 0\}\cap \{ \omega_1 = \omega_2 > n\}}] = F_n\sum\limits_{k = n+1}^{\infty}\frac{F_k^2}{k}. \end{equation} $ (5.15)

    When $ \omega_2 < n $ one has

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,3} = 1, \varphi_{n,4} = 0\}\cap \{ \omega_1 = \omega_2 < n\}}] = \frac{F_n}{\sqrt{n}}\sum\limits_{k = \lceil \frac{n}{2}\rceil }^{n-1}\frac{F_k^2}{k}\sqrt{2k-n}. \end{equation} $ (5.16)
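    Again as an informal sanity check outside the proof, the five contributions (5.12)–(5.16) of this case can be evaluated numerically; the infinite $ k $-sums are cut at a cutoff `K` and the profile $ F_k = e^{-k} $ is a hypothetical choice:

    ```python
    import math

    def sums_5_12_to_5_16(n, F, K):
        """Truncated evaluation of (5.12)-(5.16), case phi_{n,3}=1, phi_{n,4}=0."""
        sqn = math.sqrt(n)
        # (5.12): omega_1 < omega_2 < n
        e12 = F(n) / sqn * sum(
            F(k) * F(m) / math.sqrt(k * m) * math.sqrt(k + m - n)
            for k in range(math.ceil((n + 1) / 2), n)
            for m in range(n + 1 - k, k))
        # (5.13): omega_1 < n < omega_2
        e13 = F(n) / sqn * sum(
            F(k) * F(m) / math.sqrt(k)
            for k in range(n + 1, K + 1) for m in range(1, n))
        # (5.14): n < omega_1 < omega_2
        e14 = F(n) * sum(
            F(k) * F(m) / math.sqrt(k * m)
            for k in range(n + 2, K + 1) for m in range(n + 1, k))
        # (5.15): omega_1 = omega_2 > n
        e15 = F(n) * sum(F(k) ** 2 / k for k in range(n + 1, K + 1))
        # (5.16): omega_1 = omega_2 < n
        e16 = F(n) / sqn * sum(
            F(k) ** 2 / k * math.sqrt(2 * k - n)
            for k in range(math.ceil(n / 2), n))
        return e12, e13, e14, e15, e16

    F = lambda k: math.exp(-k)  # hypothetical test profile
    print(sums_5_12_to_5_16(6, F, 80))
    ```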

    $ \bullet $ Case $ \varphi_{n, 4} = 1, \varphi_{n, 3} = 0. $

    In this case one has $ \varphi_{n, 1} = \varphi_{n, 2} = 0 $. We assume $ \omega_1 < \omega_2 $ as done previously. We again denote $ k = \omega_2 \text{ and } m = \omega_1. $

    For $ \omega_1 < \omega_2 < n $ we have $ \Phi = \sqrt{k+m-n} $, so that

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,4} = 1, \varphi_{n,3} = 0\}\cap \{ \omega_1 < \omega_2 < n\}}] = \sum\limits_{k = 2}^{n-1}\sum\limits_{m = 2}^{k-1}\frac{F_kF_mF_{k+m-n}}{\sqrt{kn}}. \end{equation} $ (5.17)

    For $ \omega_1 < n < \omega_2 $ one has $ \Phi = \sqrt{m} $ hence

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,4} = 1, \varphi_{n,3} = 0\}\cap \{ \omega_1 < n < \omega_2\}}] = \sum\limits_{k = n+1}^{\infty}\sum\limits_{m = 1}^{n-1}\frac{F_kF_mF_{k+m-n}}{\sqrt{k(k+m-n)}}. \end{equation} $ (5.18)

    When $ n < \omega_1 < \omega_2 $ then $ \Phi = \sqrt{n} $, from which we get

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,4} = 1, \varphi_{n,3} = 0\}\cap \{n < \omega_1 < \omega_2\}}] = \sqrt{n}\sum\limits_{k = n+2}^{\infty}\sum\limits_{m = n+1}^{k-1}\frac{F_kF_mF_{k+m-n}}{\sqrt{km(k+m-n)}}. \end{equation} $ (5.19)

    The three terms above are multiplied by a factor $ 2 $ in (2.12) in view of the symmetry $ \omega_1 \leftrightarrow \omega_2 $.

    We are left only with the case $ \omega_1 = \omega_2 $. If $ \omega_2 > n $ then

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,4} = 1, \varphi_{n,3} = 0\}\cap \{ \omega_1 = \omega_2 > n\}}] = \sqrt{n}\sum\limits_{k = n+1}^{\infty}\frac{F_k^2F_{2k-n}}{k\sqrt{2k-n}}. \end{equation} $ (5.20)

    When $ \omega_2 < n $ we get

    $ \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,4} = 1, \varphi_{n,3} = 0\}\cap \{ \omega_1 = \omega_2 < n\}}] = \sum\limits_{k = \lceil \frac{n+1}{2}\rceil}^{n-1}\frac{F_k^2F_{2k-n}}{k}. \end{equation} $ (5.21)
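    The contributions (5.17)–(5.21) of this last case admit the same kind of informal truncated evaluation; here the profile must vanish off the positive lattice so that terms with $ k+m \leq n $ in (5.17) drop out. The guard in the hypothetical profile `F` below encodes this:

    ```python
    import math

    def sums_5_17_to_5_21(n, F, K):
        """Truncated evaluation of (5.17)-(5.21), case phi_{n,4}=1, phi_{n,3}=0."""
        # (5.17): omega_1 < omega_2 < n (terms with k+m <= n vanish, since F is
        # supported on the positive integers)
        e17 = sum(F(k) * F(m) * F(k + m - n) / math.sqrt(k * n)
                  for k in range(2, n) for m in range(2, k))
        # (5.18): omega_1 < n < omega_2 (here k+m-n >= 2, so the sqrt is safe)
        e18 = sum(F(k) * F(m) * F(k + m - n) / math.sqrt(k * (k + m - n))
                  for k in range(n + 1, K + 1) for m in range(1, n))
        # (5.19): n < omega_1 < omega_2
        e19 = math.sqrt(n) * sum(
            F(k) * F(m) * F(k + m - n) / math.sqrt(k * m * (k + m - n))
            for k in range(n + 2, K + 1) for m in range(n + 1, k))
        # (5.20): omega_1 = omega_2 > n
        e20 = math.sqrt(n) * sum(
            F(k) ** 2 * F(2 * k - n) / (k * math.sqrt(2 * k - n))
            for k in range(n + 1, K + 1))
        # (5.21): omega_1 = omega_2 < n
        e21 = sum(F(k) ** 2 * F(2 * k - n) / k
                  for k in range(math.ceil((n + 1) / 2), n))
        return e17, e18, e19, e20, e21

    # Hypothetical profile, extended by zero off the positive lattice.
    F = lambda k: math.exp(-k) if k >= 1 else 0.0
    print(sums_5_17_to_5_21(6, F, 80))
    ```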

    Therefore, the identity (2.12) follows from (5.3)–(5.21); notice the crucial cancellations of (5.9) with (5.14) and of (5.8) with (5.13). The proof of (2.13) follows immediately from the fact that $ \partial_t H_n = -F_n^{-2} \partial_tF_n $.

    In this paper, we consider solutions to the Wave Kinetic Equation with initial data given by a countable sum of delta functions, whose dynamics are discrete for all times. We derive a system of equations that describes these dynamics and carry out a quantitative study of their convergence to a single delta function. In particular, we prove upper and lower bounds for the rate of convergence. In order to study the optimality of these bounds, we introduce and analyze a toy model which captures the leading order quadratic interactions. Finally, we show the existence of a family of non-negative solutions to a truncation of this toy model.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors are thankful to Juan J. L. Velázquez for several helpful discussions and comments. The research of MD was supported by the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number MB22.00034 through the project TENSE. The research of both authors was also supported by GNAMPA-INdAM.

    The authors declare no conflicts of interest.



    © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).