Conservation laws, generally expressed in the form
w_t + \left( H(w)\right)_x = 0 \ \text{ in } \mathbb{R}\times(0,\infty), \qquad w(x, 0) = g'(x) \ \text{ on } \mathbb{R}\times\{t = 0\} | (1.1) |
and the related Hamilton-Jacobi problem
u_t + H(u_x) = 0 \ \text{ in } \mathbb{R}\times(0,\infty), \qquad u(x, 0) = g(x) \ \text{ on } \mathbb{R}\times\{t = 0\} | (1.2) |
for a smooth flux function H have a wide range of applications, including modelling shocks, mathematical turbulence, and kinetic theory [3,4,5,8,9,10,12,14,15,16,17,18,19,20,21]. In Section 2, we review some background, based on [9], regarding well-established classical results for the conservation law (1.1) in the case of a flux with sufficient regularity conditions. We also show that these results can be extended in several ways, allowing broader application of much of this well-established theory. For example, we are able to prove several results in [9] with the much weaker condition of non-strict convexity assumed on the flux function rather than uniform convexity.
In addition to relaxing some of the convexity and regularity assumptions, we consider the specific case of a polygonal flux, a (non-strictly) convex function consisting of piecewise linear segments. Such fluxes have been studied extensively as a method of approximation and of building up to the smooth case in Dafermos [6,7]. This choice of flux function notably eliminates several of the assumptions of the usual problem under consideration in that it is (i) not smooth, (ii) not strictly convex, and (iii) not superlinear. The Legendre transform is also not finite on the entire real line. We consider some of the results for smooth H and their possible extension to this case. Later in this analysis, it will be key to consider a smooth, superlinear approximation to H. We index this approximation by two parameters δ and ε, corresponding to smoothing and superlinearizing the flux function, respectively, and denote it by $\mathcal{H}_{\varepsilon,\delta}$. In Section 3, we prove an existence result for the sharp, polygonal problem, in addition to several other results, without the properties of being uniformly convex or superlinear. We also consider the two different types of minimizers for the sharp problem, at a vertex of the Legendre transform L or at a part of L where it is locally differentiable, and demonstrate how (1.3) will hold in various cases with these different species of minimizers.
For the smooth problem, it is well-known (e.g., [9]) that the minimizer obtained in closed-form solutions such as Hopf-Lax is unique a.e. in x for a given time t. Far more intricate behavior surfaces when one takes a less smooth flux function, as in our case with a piecewise linear flux. In particular, the convexity here is no longer uniform, nor even strict. As a result, one can have not only multiple minimizers, but an infinite set of such points. This involves in-depth analysis of the structure of the minimizers used in methods such as Hopf-Lax or Lax-Oleinik. In Section 4, by considering the greatest of these minimizers y∗(x,t), or its supremum if not attained, we show that y∗(x,t) is in fact increasing in x. Further, by carefully considering the relative changes in this infinite and possibly uncountable number of minima, we prove rigorously the identity
w(x, t) = g'\left( y^{\ast}(x, t)\right). | (1.3) |
This expression relates the solution of the conservation law to the value of the initial condition evaluated at the point of the minimizer. This is a new result even under classical conditions and requires a deeper examination of multiple minimizers in the absence of uniform convexity. We also prove other results including that the solution is of bounded variation [25] under the appropriate assumptions on the initial conditions.
In Section 5, we consider the smoothed and superlinearized flux function Hε,δ. By condensing these two parameters into one and considering the minimizers of the smooth version, we obtain results relating to the convergence of these solutions to the polygonal case.
We can also define a particular kind of uniqueness when constructing the solution from a certain limit, as we take the aforementioned parameter ε↓0. Utilizing this definition, we are able to prove a uniqueness result using this smooth approach. This is elaborated on in Section 6.
In Section 7, we consider discontinuous initial conditions. When H is polygonal and g′ is piecewise constant with values that match the break points of H, the conservation law becomes a discrete combinatorial problem. We prove that (1.3) is valid, and w can also be obtained as a limit of solutions to the smoothed problem. This provides a link between the discrete and continuum conservation laws.
A further application of conservation laws includes the addition of randomness, such as that in the initial conditions. In doing computations and analysis relating to these stochastic processes, the identity (1.3) will be a key building block. We present some immediate conclusions in Section 8. For example, when applied to Brownian motion, we show that the variance is the greatest minimizer y∗(x,t) and increases with x for each t.
To illustrate the importance of this result, consider the basic problem $w_t + (|w|)_x = 0$ subject to $L^{\infty}$ initial conditions $w(x,0) = g'(x)$. As a consequence of our analysis, the solution at a point (x,t) is completely determined by the values of the initial condition on the interval [x−t, x+t]. Hence, the value of the solution is given by $g'(x-t)$ if the minimum of g(y) is at the left endpoint of the interval, $g'(x+t)$ if the minimum is at the right endpoint of the interval, and 0 for an interior minimum. Thus, for an initial condition $g'$ that is Brownian motion, one minimizes integrated Brownian motion and obtains the shock statistics. In a second paper, we plan to develop these ideas further.
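To make this concrete, here is a minimal numerical sketch (our own, not from the paper; the data g and the grids are illustrative choices). Since the Legendre transform of H(q) = |q| is 0 on [−1,1] and infinite elsewhere, the Hopf-Lax formula collapses to minimizing g over the window [x−t, x+t], and the solution is $g'$ evaluated at the greatest minimizer:

```python
import numpy as np

# Sketch of the example w_t + (|w|)_x = 0: the transform of H(q) = |q| vanishes
# on [-1, 1] and is +infinity elsewhere, so u(x,t) = min of g over [x-t, x+t]
# and w(x,t) = g'(y*) at the greatest minimizer y*.

def g(y):                      # integrated initial condition, g' = initial data
    return np.sin(y)

def gprime(y):
    return np.cos(y)

def w_abs_flux(x, t, n=200001):
    ys = np.linspace(x - t, x + t, n)
    vals = g(ys)
    # greatest minimizer (the arg^+ of the paper): last index attaining the min
    idx = np.flatnonzero(vals <= vals.min() + 1e-12)[-1]
    return gprime(ys[idx])

print(w_abs_flux(x=0.3, t=1.0))
```

When the minimum of g is interior, the printed value is (near) zero, matching the three cases described above.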
We review briefly the basic theory (see [9]), and obtain an expression that will be more useful than the standard results when we relax the assumptions in order to incorporate polygonal flux. For now we assume that the flux function $\mathcal{H}(q):\mathbb{R}\to\mathbb{R}$ is uniformly convex, continuously differentiable, and superlinear, i.e., $\lim_{|q|\to\infty}\mathcal{H}(q)/|q| = \infty$. The Legendre transformation is defined by
\mathcal{L}(p) := \sup_{q\in\mathbb{R}}\left\{ pq - \mathcal{H}(q)\right\}. | (2.1) |
Here we use script $\mathcal{L}$ and $\mathcal{H}$ to indicate we are considering the problem with a smooth flux function, and in Section 3 we will use L and H when considering a piecewise linear flux function.
An initial value problem for the Hamilton-Jacobi problem, on $\mathbb{R}$, is specified as
u_t + \mathcal{H}(u_x) = 0 | (2.2a) |
u(x, 0) = g(x) \,. | (2.2b) |
We call the function u a weak solution if (i) u(x,t) satisfies the equation (2.2a) a.e. in (x,t) and the initial condition (2.2b), and (ii) (see p. 131 [9]) for each t, and a.e. x and x+z, u(x,t) satisfies the inequality
u(x+z, t) - 2u(x, t) + u(x-z, t) \le C\left( 1 + \frac{1}{t}\right) z^2 \,. | (2.3) |
The Hopf-Lax formula is defined by
u(x, t) = \min_{y\in\mathbb{R}}\left\{ t\mathcal{L}\left( \frac{x-y}{t}\right) + g(y)\right\}. | (2.4) |
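As an illustration, the Hopf-Lax value can be approximated by brute-force minimization on a grid. The sketch below is our own; it assumes the uniformly convex, superlinear flux $\mathcal{H}(q) = q^2/2$, whose Legendre transform is $\mathcal{L}(p) = p^2/2$:

```python
import numpy as np

# Brute-force evaluation of the Hopf-Lax formula (2.4) on a grid in y.
# Assumption: H(q) = q^2/2, so L(p) = p^2/2; g is any Lipschitz initial datum.

def hopf_lax_u(x, t, g, L, y_grid):
    vals = t * L((x - y_grid) / t) + g(y_grid)
    return vals.min()

L = lambda p: 0.5 * p**2          # Legendre transform of H(q) = q^2/2
g = lambda y: np.abs(y)           # Lipschitz initial condition for u

ys = np.linspace(-10.0, 10.0, 200001)
print(hopf_lax_u(x=0.5, t=1.0, g=g, L=L, y_grid=ys))
```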
The following classical results can be found in [9], p. 128 and 145.
Theorem 2.1. Suppose $\mathcal{H}$ is $C^2$, uniformly convex and superlinear, and g is Lipschitz continuous. Then u(x,t) given by the Hopf-Lax formula (2.4) is the unique weak solution to (2.2).
Now we consider solutions to a related equation, the general conservation law
w_t + \left( \mathcal{H}(w)\right)_x = 0 \ \text{ in } \mathbb{R}\times(0,\infty), \qquad w(x, 0) = g'(x) \ \text{ on } \mathbb{R}\times\{t = 0\} | (2.5) |
Theorem 2.2. Assume that $\mathcal{H}$ is $C^2$, uniformly convex, and $g'\in L^{\infty}(\mathbb{R})$. Then we have
(i) For each t>0 and for all but countably many values x∈R, there exists a unique point y(x,t) such that
\min_{y\in\mathbb{R}}\left\{ t\mathcal{L}\left( \frac{x-y}{t}\right) + g(y)\right\} = t\mathcal{L}\left( \frac{x-y(x,t)}{t}\right) + g(y(x,t)) | (2.6) |
(ii) The mapping x→y(x,t) is nondecreasing.
(iii) For each t>0, the function w defined by
w(x, t) := \frac{\partial}{\partial x}\left[ \min_{y\in\mathbb{R}}\left\{ t\mathcal{L}\left( \frac{x-y}{t}\right) + g(y)\right\}\right] | (2.7) |
is in fact given by
w(x, t) = (\mathcal{H}')^{-1}\left( \frac{x-y(x,t)}{t}\right).
To illuminate the notion of a weak solution, we briefly describe the motivation of the definition. Nominally, if we had a smooth function u that satisfied (2.2a) everywhere in (x,t) and the initial condition (2.2b), then we could multiply (2.2a) by the spatial derivative of the test function $\phi\in C_c^{\infty}(\mathbb{R}\times(0,\infty))$ and integrate by parts to obtain
\int_0^{\infty}\left\{ \int_{-\infty}^{\infty} -u\phi_{xt} + \mathcal{H}(u_x)\phi_x \, dx\right\} dt - \int_{-\infty}^{\infty} g\,\phi_x\Big|_{t=0}\, dx = 0 \,. | (2.8) |
Now we let w:=ux and integrate by parts in the x variable, (see [9] p. 148 for details and conditions). Note that u(x,t) is by assumption differentiable a.e. The test function is differentiable at all points, and so the product rule applies outside of a set of measure zero. Hence, one can integrate, and one then has
\int_0^{\infty}\left\{ \int_{-\infty}^{\infty} w\phi_t + \mathcal{H}(w)\phi_x \, dx\right\} dt + \int_{-\infty}^{\infty} g'\phi\Big|_{t=0}\, dx = 0 \,. | (2.9) |
We now say that w is a weak solution to the conservation law if it satisfies (2.9) for all test functions with compact support.
Remark 2.3. From classical theorems, we also know that under the conditions that $g'$ is continuous and $\mathcal{H}$ is $C^2$ and superlinear, we have a unique weak solution to (2.2) whose derivative is an integral solution to the conservation law (2.5). However, at this stage we do not know if there are other solutions to (2.5) arising from a different perspective, where g is a differentiable function.
In order to obtain a unique solution to the conservation law, one imposes an additional entropy condition and makes the following definition.
Definition 2.4. We call w(x,t) an entropy solution to (2.5) if: (i) it satisfies (2.9) for all test functions ϕ:R×[0,∞)→R that have compact support and (ii) for a.e. x∈R, t>0, z>0, we have
w(x+z, t) - w(x, t) \le C\left( 1 + \frac{1}{t}\right) z \,. | (2.10) |
In order to prove that $w = u_x$ is the unique solution to (2.5), we note the following: In Theorem 1, p. 145 of [9], it suffices for the initial condition to be continuous. In the theorem, the only use of the $L^{\infty}(\mathbb{R})$ condition is that its integral is differentiable a.e., which is certainly guaranteed by the continuity.
Under the assumptions of Theorem 1, the Lemma of p. 148 of [9] states that, with $G := (\mathcal{H}')^{-1}$, the function $w = u_x$, i.e.,
w(x, t) = \partial_x u(x, t) = \partial_x \min_{y\in\mathbb{R}}\left\{ t\mathcal{L}\left( \frac{x-y}{t}\right) + g(y)\right\} = G\left( \frac{x-y^{\ast}(x,t)}{t}\right) | (2.11) |
satisfies the one-sided inequality
w(x+z, t) - w(x, t) \le \frac{C}{t}\,z \,. | (2.12) |
Once we have established that w is an entropy solution, the uniqueness of the entropy solution (up to a set of measure zero) is a basic result that is summarized in [9] (Theorem 3, p. 149):
Theorem 2.5. Assume $\mathcal{H}$ is convex and $C^2$. Then there exists (up to a set of measure 0) at most one entropy solution of (2.5).
Note that one only needs g′ to be L∞ in this theorem. One has then the classical result:
Theorem 2.6. Assume that $\mathcal{H}$ is $C^2$, superlinear, and uniformly convex, and that $g'\in L^{\infty}$. Then the function w(x,t) given by (2.11) is the unique entropy solution to the conservation law (2.5).
Note that we need the uniform convexity assumption in order that the one-sided condition holds, which in turn is necessary for the uniqueness.
A classical result is that if y(x,t) is defined as a minimizer of
Q(y; x, t) := t\mathcal{L}\left( \frac{x-y}{t}\right) + g(y) | (2.13) |
then it is unique and the mapping $x\mapsto y(x,t)$ is non-decreasing, and hence continuous except at countably many points x (for each t) and differentiable a.e. in x for each t. The Lax-Oleinik formula above expresses the solution w to the conservation law in terms of $(\mathcal{H}')^{-1}$.
This formula, of course, utilizes the fact that H′ is strictly increasing, i.e., that H∈C2 and uniformly convex. Using similar ideas, we present a more useful formula that will be shown in later theorems to be valid even when the inverse of H′ does not exist. For these theorems we need the following notion to express the argument of a minimizer.
Definition 2.7. Let B be a measurable set and suppose that there is a unique minimizer y∗ for a quantity Q(y) such that
Q(y^{\ast}) = \min_{y\in B} Q(y).
Define the function arg to mean that
y^{\ast} =: \arg\min_{y\in B} Q(y).
In the case that the minimum is achieved over some collection of points in B, denote by arg+ the supremum of all such points, regardless of whether the supremum of this set is a minimizer itself.
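A grid-based sketch of Definition 2.7 (our own; the set B, the function Q, and the tolerance are illustrative choices): among all approximate minimizers we return the supremum, whether or not the minimizer is unique.

```python
import numpy as np

# arg^+ on a grid: the supremum of the set of (approximate) minimizers of Q on B.

def arg_plus(Q, B, tol=1e-12):
    vals = Q(B)
    minimizers = B[vals <= vals.min() + tol]
    return minimizers.max()        # greatest minimizer, unique or not

B = np.linspace(-2.0, 2.0, 40001)
Q = lambda y: np.maximum(np.abs(y) - 1.0, 0.0)   # every y in [-1,1] minimizes
print(arg_plus(Q, B))              # approximately 1.0, the supremum of minimizers
```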
Theorem 2.8. Let $\mathcal{H}\in C^2$ and convex and $g\in C^1$. Suppose that for each (x,t), the quantity
y^{\ast}(x, t) = \arg\inf_{y\in\mathbb{R}}\left\{ t\mathcal{L}\left( \frac{x-y}{t}\right) + g(y)\right\} | (2.14) |
is well-defined and finite, the infimum being attained at a unique value of y. Then
\mathcal{L}'\left( \frac{x-y^{\ast}(x,t)}{t}\right) = g'\left( y^{\ast}(x,t)\right) | (2.15) |
and $w(x,t) := \partial_x \min_{y\in\mathbb{R}}\left\{ t\mathcal{L}\left( \frac{x-y}{t}\right) + g(y)\right\}$ is given by
w(x, t) = g'\left( y^{\ast}(x, t)\right) \,. | (2.16) |
Proof. From Section 3.4, Thm 1 of [9], we know that a minimizer of $t\mathcal{L}\left( \frac{x-y}{t}\right) + g(y)$ (if unique) is differentiable a.e. in x. We then have the following calculations.
Since we are assuming that $\inf_{y\in\mathbb{R}}\left\{ t\mathcal{L}\left( \frac{x-y}{t}\right) + g(y)\right\} > -\infty$ and both $\mathcal{L}$ and g are differentiable, there exists a minimizer. Since $\mathcal{L}$ and g are differentiable, for any potential minimizer one has the identity
0 = \partial_y\left\{ t\mathcal{L}\left( \frac{x-y}{t}\right) + g(y)\right\} | (2.17) |
so that (for a.e. x) at a minimum $y^{\ast}(x,t)$ one has, writing $f(y;x,t) := t\mathcal{L}\left( \frac{x-y}{t}\right)$,
-\partial_y f\left( y^{\ast}; x, t\right) = \mathcal{L}'\left( \frac{x-y^{\ast}(x,t)}{t}\right) = g'\left( y^{\ast}(x,t)\right). | (2.18) |
We have then at any point x where y∗(x,t) is differentiable,
w(x, t) := \partial_x \min_{y\in\mathbb{R}}\left\{ t\mathcal{L}\left( \frac{x-y}{t}\right) + g(y)\right\} | (2.19) |
= \partial_x\left\{ t\mathcal{L}\left( \frac{x-y^{\ast}(x,t)}{t}\right) + g\left( y^{\ast}(x,t)\right)\right\} = t\mathcal{L}'\left( \frac{x-y^{\ast}(x,t)}{t}\right)\cdot\left( \frac{-\partial_x y^{\ast}(x,t)}{t} + \frac{1}{t}\right) + g'\left( y^{\ast}(x,t)\right)\partial_x y^{\ast}(x,t). | (2.20) |
The identity (2.18) implies cancellation of the first and third terms, while the remaining term equals $g'(y^{\ast}(x,t))$ by (2.18) again, yielding
w(x, t) = g'\left( y^{\ast}(x, t)\right) \ \text{ a.e. } x\in\mathbb{R} \text{ for each } t > 0. | (2.21) |
Note that the uniqueness of the minimizer is used in the second line of (2.19). If there were two minimizers, for example, then as we vary x, one of the minima might decrease more rapidly, and that would be the relevant minimum for the x derivative.
We now explore the case with two minimizers. Using the notation $f(y;x,t) = t\mathcal{L}\left( \frac{x-y}{t}\right)$ as defined above, we note that whenever we have a minimum of $f(y;x,t) + g(y)$ at some $y_0$ we must have
\partial_y f(y_0; x, t) + g'(y_0) = 0. | (2.22) |
We are interested in computing $w(x,t) = \partial_x \min_{y\in\mathbb{R}}\left\{ f(y;x,t) + g(y)\right\}$. Suppose that there are two distinct minima, $\hat{y}_0$ and $\tilde{y}_0$, with $\hat{y}_0 < \tilde{y}_0$ at some point $x_0$. Then we can define $\hat{y}(x,t)$ and $\tilde{y}(x,t)$ as distinct local minimizers that are differentiable in x and satisfy
\lim_{x\to x_0}\hat{y}(x, t) = \hat{y}_0 \quad\text{and}\quad \lim_{x\to x_0}\tilde{y}(x, t) = \tilde{y}_0 \,. | (2.23) |
Then as we vary x, the minima will shift vertically and horizontally. The relevant minima are those that have the largest downward shift, as the others immediately cease to be minima.
This means that
w(x, t) = \min\left\{ \partial_x\left[ f(\hat{y}(x,t); x, t) + g(\hat{y}(x,t))\right], \ \partial_x\left[ f(\tilde{y}(x,t); x, t) + g(\tilde{y}(x,t))\right]\right\}\Big|_{x = x_0} \,. | (2.24) |
Then, as the calculations in the proof of the theorem above show, one has
w(x, t) = \min\left\{ g'(\hat{y}_0), \ g'(\tilde{y}_0)\right\} = \min\left\{ -\partial_y f(\hat{y}_0; x_0, t), \ -\partial_y f(\tilde{y}_0; x_0, t)\right\}. | (2.25) |
Since we are assuming that $\hat{y}_0 < \tilde{y}_0$ and $\partial_y f$ is increasing in y, we see that the minimum of these two is $-\partial_y f(\tilde{y}_0; x_0, t)$, yielding
w(x, t) = -\partial_y f(\tilde{y}_0; x_0, t) = g'(\tilde{y}_0). | (2.26) |
Now suppose that for fixed (x0,t) we have a set of minimizers {yα} with α∈A for some set A. Should A consist of a finite number of elements, an elementary extension of the above argument generalizes the result to the maximum of these minimizers.
Next, suppose that the set has an infinite number of members. The case where the supremum of this set is +∞ is degenerate and will be excluded by our assumptions. Thus, assume that for a given (x,t), the set {yα} is bounded, and call its supremum y∗. Then either y∗∈A, i.e., it must be a minimizer, or there is a sequence {yj} in A converging to y∗. If y∗∉A, then we have, similar to the assertion above, the identity
w(x, t) = \inf_{\alpha\in A}\left\{ -\partial_y f(y_\alpha; x, t)\right\}. | (2.27) |
Since $f\in C^2$ and $y_j\to y^{\ast}$ we see that
w(x, t) = -\partial_y f(y^{\ast}; x, t) = g'(y^{\ast}). | (2.28) |
Note that (2.28) is valid whether or not y∗ is a minimizer.
Suppose that f∈C2 is convex and that we have a continuum of minimizers again. Suppose further that f′ is nondecreasing, and there is an interval [a,b] of minimizers of {f(y;x,t)+g(y)}. Note that the form of f is such that we can write it as
f(y; x, t) = \hat{f}(y - x) | (2.29) |
with $\hat{f}$ increasing. We can perform a calculation similar to the ones above by drawing the graphs of $\hat{f}$ and g as functions of y at $x_0$ as follows:
w(x, t) = \partial_x \min_{y\in[a,b]}\left\{ f(y; x_0) + g(y)\right\} = \lim_{\delta\to 0}\frac{\min_{y\in[a,b]}\left\{ \hat{f}(y - x_0 - \delta) + g(y)\right\} - \min_{y\in[a,b]}\left\{ \hat{f}(y - x_0) + g(y)\right\}}{\delta}. | (2.30) |
We are assuming that there is an interval $y\in[a,b]$ of minimizers, such that $\hat{f}(y - x_0) + g(y) = C_1$ for some constant $C_1$. This means $\hat{f}'(y - x_0) = -g'(y)$. Since $C_1$ occurs in both parts of the subtraction, we can drop it. Using the mean value theorem we have
\hat{f}(y - x_0 - \delta) = \hat{f}(y - x_0) - \delta\hat{f}'(\zeta) | (2.31) |
where $\zeta$ is between $y - x_0$ and $y - x_0 - \delta$. Using the identity $\hat{f}'(y - x_0) = -g'(y)$ we can write
w(x, t) = \lim_{\delta\to 0}\frac{\min_{y\in[a,b]}\left\{ -\delta\hat{f}'(\zeta)\right\} - 0}{\delta} = \lim_{\delta\to 0}\min_{y\in[a,b]}\left\{ -\hat{f}'(\zeta)\right\} = -\hat{f}'(b - x_0) = g'(b) | (2.32) |
since $\hat{f}'$ is nondecreasing, so that the minimum of $-\hat{f}'$ is attained at the rightmost point.
Although we have only considered the cases where the set of minimizers is countable or an interval, this argument suffices for the general case. Indeed, the set A of minimizers will be measurable. If it has finite measure, it can be expressed as a countable union of disjoint closed intervals $A_j$, i.e., $A = \cup_{j=1}^{\infty} A_j$. It is then equivalent to apply the argument for the countable set of minimizers to the points $y_j = \sup A_j$ and proceed as above. To illustrate these ideas, consider the following example.
Example 2.9. Let $f(y;x) := (x-y)^2$ and $g(y) := -y^2$ for $y\in[a,b]$, with g increasing rapidly outside of [a,b]; we suppress t. We have $f(y;0) + g(y) = y^2 - y^2 = 0$, so all points in [a,b] are minimizers (see Figure 1). We want to calculate
\partial_x \min_{y\in[a,b]}\left\{ f(y; x) + g(y)\right\}\Big|_{x=0} \,, | (2.33) |
i.e.,
\lim_{\delta\to 0}\frac{\min_{y\in[a,b]}\left\{ f(y;\delta) + g(y)\right\} - \min_{y\in[a,b]}\left\{ f(y;0) + g(y)\right\}}{\delta} = \lim_{\delta\to 0}\left\{ \delta^{-1}\min_{y\in[a,b]}\left\{ (y-\delta)^2 - y^2\right\}\right\} = -2b. | (2.34) |
I.e., $w(x,t) := \partial_x \min_{y\in[a,b]}\left\{ f(y;x) + g(y)\right\}\big|_{x=0}$ is given by $-\partial_y f(y;x)$ at the rightmost point of the interval [a,b]:
-\partial_y f(y; 0) = -2b \ \text{ at } y = b. | (2.35) |
Note that by continuity, we have the same conclusion if the interval is open at the right endpoint b.
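The computation in Example 2.9 is easy to confirm numerically. In this sketch (our code; the values of a, b, delta, and the grid are arbitrary choices) the difference quotient of the minimum reproduces −2b:

```python
import numpy as np

# Numerical check of Example 2.9: with f(y;x) = (x-y)^2 and g(y) = -y^2 on [a,b],
# the x-derivative of the minimum at x = 0 should be -2b, i.e., -d/dy f at the
# rightmost minimizer y = b.

a, b = -1.0, 1.5
ys = np.linspace(a, b, 200001)

def m(x):
    return np.min((x - ys)**2 - ys**2)

delta = 1e-6
print((m(delta) - m(0.0)) / delta)   # approximately -2b = -3.0
```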
Using the calculations in (2.22)–(2.32), we can improve Theorem 2.8 above by removing the "unique minimizer" restriction.
Theorem 2.10. Let $\mathcal{H}\in C^2$ and convex and $g\in C^1$. Suppose that for each (x,t), the quantity
y^{\ast}(x, t) = \arg^{+}\inf_{y\in\mathbb{R}}\left\{ t\mathcal{L}\left( \frac{x-y}{t}\right) + g(y)\right\} | (2.36) |
is well-defined (finite). Then
\mathcal{L}'\left( \frac{x-y^{\ast}(x,t)}{t}\right) = g'\left( y^{\ast}(x,t)\right) | (2.37) |
and $w(x,t) := \partial_x \min_{y\in\mathbb{R}}\left\{ t\mathcal{L}\left( \frac{x-y}{t}\right) + g(y)\right\}$ is given by
w(x, t) = g'\left( y^{\ast}(x, t)\right) \,. | (2.38) |
Remark 2.11. The condition (2.36) is not difficult to satisfy, as we simply need g to be well-defined a.e. on some interval where L is finite.
Remark 2.12. Theorem 2.10 improves upon the classic theorem, which requires uniform convexity. By utilizing the concept of the greatest minimizer y∗, we are able to deal with non-unique minimizers and obtain an expression for the solution to the conservation law using only convexity and not requiring uniform or strict convexity.
We use the general theme of [9] and adapt the proofs to polygonal flux (i.e., flux that is neither smooth nor superlinear). We define the Legendre transform without the assumption of superlinearity on the flux function H. Although this causes its Legendre transform L to be infinite at some points, one can still perform computations and prove results close to those of the previous section under these weaker assumptions, as L is used only in the context of minimization problems.
The first matter is to make sure that we have the key theorem that H and L are Legendre transforms of one another. We do not need to use any of the theorems that rely on superlinearity. We only assume that L is Lipschitz continuous, which follows from the definition of H. We also assume that g (the initial condition for the Hamilton-Jacobi equation) is Lipschitz on specific finite intervals.
Throughout this section, we make the assumption that H(q) is polygonal convex, with line segments having slopes $m_1$ at the left and $m_{N+1}$ at the right, and break points $c_1 < c_2 < \cdots < c_N$ with $c_1 < 0 < c_N$. The Legendre transform L(p), defined below, is then also polygonal and convex on $[m_1, m_{N+1}]$ and infinite elsewhere. We will assume $m_1 < 0 < m_{N+1}$. We illustrate this flux function and some of the properties of the Legendre transform in Figure 2.
Definition 3.1. We define the usual Legendre transform, denoted by L(p), as follows:
L(p) := \sup_{q\in\mathbb{R}}\left\{ pq - H(q)\right\}
A computation shows that this is a convex polygonal function such that $L(p) < \infty$ if and only if $p\in[m_1, m_{N+1}]$. It has break points at $m_1 < m_2 < \cdots < m_{N+1}$ and slopes $c_1, c_2, \ldots, c_N$. The last break point of L is at $m_{N+1}$, beyond which L becomes infinite. Note that L is Lipschitz on $[m_1, m_{N+1}]$.
Lemma 3.2. Let L(p) be as defined above (with $L(p) < \infty$ iff $p\in[m_1, m_{N+1}]$). Then the Legendre transform of L and the flux function H(q) itself satisfy the following duality condition:
L^{\ast}(q) := \sup_{p\in\mathbb{R}}\left\{ pq - L(p)\right\} = H(q).
In other words, if we define L(p) as a polygonal, convex function for $p\in[m_1, m_{N+1}]$ as in Figure 2, with $L(p) = \infty$ for $p\notin[m_1, m_{N+1}]$, then the operation $\sup_{p\in\mathbb{R}}\{pq - L(p)\}$ yields the function H defined above. One can then prove a set of lemmas that are the analogs of those in Section 3.4 of [9]. The proofs are adapted in order to handle potentially infinite values.
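The duality of Lemma 3.2 can be checked numerically. The sketch below is our own construction (the particular slopes and break points are arbitrary): it builds a convex polygonal H as the maximum of its affine pieces, computes L on a grid, and verifies that the second transform recovers H.

```python
import numpy as np

# Convex polygonal H with slopes m_1 < ... < m_{N+1} and break points
# c_1 < ... < c_N, written as the maximum of its affine pieces. We compute
# L(p) = sup_q {pq - H(q)} on a grid and check L*(q) = sup_p {pq - L(p)} = H(q).

m = np.array([-2.0, -0.5, 1.0, 3.0])       # slopes m_1, ..., m_{N+1}
c = np.array([-1.0, 0.0, 2.0])             # break points, c_1 < 0 < c_N

# intercepts b_i making consecutive pieces agree at the break points:
# b_{i+1} = b_i + (m_i - m_{i+1}) c_i
b = np.concatenate(([0.0], np.cumsum((m[:-1] - m[1:]) * c)))

def H(q):
    return np.max(m * q + b)               # convex polygonal = max of affine pieces

qs = np.linspace(-50.0, 50.0, 20001)
Hq = np.max(np.outer(qs, m) + b, axis=1)

ps = np.linspace(m[0], m[-1], 2001)        # L is finite exactly on [m_1, m_{N+1}]
Lp = np.array([np.max(p * qs - Hq) for p in ps])

def L_star(q):
    return np.max(ps * q - Lp)

for q in [-3.0, 0.5, 4.0]:
    print(q, H(q), L_star(q))              # the last two columns should nearly agree
```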
Lemma 3.3. Suppose L is defined as above and g is Lipschitz continuous on bounded sets. Then u defined by the Hopf-Lax formula is Lipschitz continuous in x, independently of t. Moreover,
|u(x+z, t) - u(x, t)| \le \text{Lip}(g)\,|z|.
Proof. Fix $t > 0$ and $x, \hat{x}\in\mathbb{R}$. Choose $y\in\mathbb{R}$ (depending on x, t) such that
u(x, t) = \min_{z}\left\{ tL\left( \frac{x-z}{t}\right) + g(z)\right\} = tL\left( \frac{x-y}{t}\right) + g(y). | (3.1) |
The minimum is attained since both L and g are continuous. Note that while there may be values of $\frac{x-z}{t}$ such that $L\left( \frac{x-z}{t}\right) = \infty$, these are irrelevant, as there are some finite values, and $\frac{x-y}{t}$ will be one of those. Now use (3.1) to write
u(\hat{x}, t) - u(x, t) = \inf_{\tilde{z}}\left\{ tL\left( \frac{\hat{x}-\tilde{z}}{t}\right) + g(\tilde{z})\right\} - tL\left( \frac{x-y}{t}\right) - g(y) \,. | (3.2) |
We define $z := \hat{x} - x + y$ such that
\frac{x-y}{t} = \frac{\hat{x}-z}{t} | (3.3) |
and substitute this z in place of $\tilde{z}$ in (3.2), which can only increase the RHS. This yields the inequality
u(\hat{x}, t) - u(x, t) \le tL\left( \frac{\hat{x}-z}{t}\right) + g(z) - tL\left( \frac{x-y}{t}\right) - g(y) = tL\left( \frac{x-y}{t}\right) + g(\hat{x}-x+y) - tL\left( \frac{x-y}{t}\right) - g(y) = g(\hat{x}-x+y) - g(y). | (3.4) |
Using the assumption that g is Lipschitz on bounded sets, one has
u(\hat{x}, t) - u(x, t) \le \text{Lip}(g)\cdot|\hat{x} - x| \,. | (3.5) |
Note that in obtaining this inequality, x and $\hat{x}$ were arbitrary (without any assumption on order). Hence, we can interchange them. I.e., we start by defining y such that, instead of (3.1), it satisfies
u(\hat{x}, t) = tL\left( \frac{\hat{x}-y}{t}\right) + g(y). | (3.6) |
Thus, we obtain the same inequality as (3.5) with the x and $\hat{x}$ interchanged, yielding
|u(\hat{x}, t) - u(x, t)| \le \text{Lip}(g)\cdot|\hat{x} - x| \,. | (3.7) |
Lemma 3.4. Suppose L is defined as above and g is Lipschitz continuous on bounded sets. For each x∈R and 0≤s<t we have
u(x, t) = \min_{y\in\mathbb{R}}\left\{ (t-s)L\left( \frac{x-y}{t-s}\right) + u(y, s)\right\}.
Proof. Fix $y\in\mathbb{R}$ and $0 < s < t$. Since u and L are continuous, the minimum defining u(y,s) is attained at a point where the argument of L lies in the interval $[m_1, m_{N+1}]$ on which L is finite. Thus we can find $z\in\mathbb{R}$ such that
u(y, s) = sL\left( \frac{y-z}{s}\right) + g(z). | (3.8) |
Note that since z is the minimizer, we know that $L\left( \frac{y-z}{s}\right)$ is finite.
By convexity of L we can write
\frac{x-z}{t} = \left( 1 - \frac{s}{t}\right)\left( \frac{x-y}{t-s}\right) + \frac{s}{t}\left( \frac{y-z}{s}\right)
L\left( \frac{x-z}{t}\right) \le \left( 1 - \frac{s}{t}\right) L\left( \frac{x-y}{t-s}\right) + \frac{s}{t}\,L\left( \frac{y-z}{s}\right). | (3.9) |
Next, we have from our basic assumption that u(x,t) is defined by the Hopf-Lax formula, the identity
u(x, t) = \min_{\hat{z}}\left\{ tL\left( \frac{x-\hat{z}}{t}\right) + g(\hat{z})\right\} | (3.10) |
so substituting the z defined above in (3.8) yields the inequality
u(x, t) \le tL\left( \frac{x-z}{t}\right) + g(z) | (3.11) |
and now using (3.9) yields
u(x, t) \le (t-s)L\left( \frac{x-y}{t-s}\right) + sL\left( \frac{y-z}{s}\right) + g(z). | (3.12) |
Now note that the last two terms, by (3.8) are u(y,s). This yields
u(x, t) \le (t-s)L\left( \frac{x-y}{t-s}\right) + u(y, s). | (3.13) |
Note that y has been arbitrary. Now we take the minimum over all y∈R. We note that there are values of y for which the right hand side of (3.13) is infinite, but given 0<s<t and x∈R, there will be some y∈R such that (x−y)/(t−s) falls in the finite range of L. Thus, in taking the minimum, the values for which it is infinite are irrelevant, and we have
u(x, t) \le \min_{y\in\mathbb{R}}\left\{ (t-s)L\left( \frac{x-y}{t-s}\right) + u(y, s)\right\}. | (3.14) |
Next, we know again that there exists $w\in\mathbb{R}$ (depending on x and t, which we regard as fixed) such that
u(x, t) = tL\left( \frac{x-w}{t}\right) + g(w). | (3.15) |
We choose $y := \frac{s}{t}x + \left( 1 - \frac{s}{t}\right) w$, which implies
\frac{x-y}{t-s} = \frac{x-w}{t} = \frac{y-w}{s}. | (3.16) |
We know that w is the minimizer (and, of course, g is finite), so that $L\left( \frac{x-w}{t}\right)$ is finite. Thus, by the identity (3.16) above, so are $L\left( \frac{x-y}{t-s}\right)$ and $L\left( \frac{y-w}{s}\right)$. Thus, using the basic definition of u(y,s) in the equality, one has
(t-s)L\left( \frac{x-y}{t-s}\right) + u(y, s) = (t-s)L\left( \frac{x-y}{t-s}\right) + \min_{\hat{z}}\left\{ sL\left( \frac{y-\hat{z}}{s}\right) + g(\hat{z})\right\} \le (t-s)L\left( \frac{x-y}{t-s}\right) + \left\{ sL\left( \frac{y-w}{s}\right) + g(w)\right\} | (3.17) |
where the inequality is obtained simply by substituting a particular value for $\hat{z}$, namely the w that we defined above in (3.15).
We can use (3.16) in order to re-write the arguments of L in equivalent forms. By that equality and the fact that $L\left( \frac{x-w}{t}\right)$ is finite, so are $L\left( \frac{x-y}{t-s}\right)$ and $L\left( \frac{y-w}{s}\right)$. Hence, replacing the two expressions involving L on the RHS of (3.17) yields
(t-s)L\left( \frac{x-y}{t-s}\right) + u(y, s) \le (t-s)L\left( \frac{x-w}{t}\right) + \left\{ sL\left( \frac{x-w}{t}\right) + g(w)\right\} = tL\left( \frac{x-w}{t}\right) + g(w) = u(x, t) | (3.18) |
where the last identity follows from the expression (3.15) that defines w. Thus (3.18) gives us an identity for the particular y that we defined, namely
(t-s)L\left( \frac{x-y}{t-s}\right) + u(y, s) \le u(x, t). | (3.19) |
Taking the minimum over all $\hat{y}$, we obtain the inequality
\min_{\hat{y}\in\mathbb{R}}\left\{ (t-s)L\left( \frac{x-\hat{y}}{t-s}\right) + u(\hat{y}, s)\right\} \le u(x, t). | (3.20) |
Combining (3.14) and (3.20) proves Lemma 3.4.
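Lemma 3.4 is also easy to test numerically. The following sketch is ours (the toy convex L, finite only on [−1, 1] and encoded with np.inf, and the data g are illustrative stand-ins for the polygonal case); it compares the direct Hopf-Lax value at time t with the two-step value through time s.

```python
import numpy as np

# Sanity check of Lemma 3.4: Hopf-Lax at time t should agree with re-running
# Hopf-Lax from the time-s profile u(., s), even when L is infinite off [-1, 1].

def L(p):
    p = np.asarray(p, dtype=float)
    return np.where(np.abs(p) <= 1.0, p**2, np.inf)  # toy convex L, finite on [-1,1]

g = lambda y: np.abs(y)
ys = np.linspace(-5.0, 5.0, 4001)

def u(x, t):
    return np.min(t * L((x - ys) / t) + g(ys))

x, s, t = 0.7, 0.5, 1.2
u_s = np.array([u(y, s) for y in ys])                # profile at time s
direct = u(x, t)
semigroup = np.min((t - s) * L((x - ys) / (t - s)) + u_s)
print(direct, semigroup)                             # should agree to grid accuracy
```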
Lemma 3.5. If L and g are Lipschitz continuous, one has u(x,0)=g(x).
Note that this is the analog of the corresponding lemma in Evans [9]; part 2 of that proof is essentially the same, and one needs only pay attention to the finiteness of the terms.
Proof. Since $0\in[m_1, m_{N+1}]$, the interval on which L is finite, one has
u(x, t) = \min_{y\in\mathbb{R}}\left\{ tL\left( \frac{x-y}{t}\right) + g(y)\right\} \le tL(0) + g(x), | (3.21) |
upon choosing y=x. Also,
u(x, t) = \min_{y\in\mathbb{R}}\left\{ tL\left( \frac{x-y}{t}\right) + g(y)\right\} \ge \min_{y\in\mathbb{R}}\left\{ tL\left( \frac{x-y}{t}\right) + g(x) - \text{Lip}(g)\cdot|x-y|\right\} = g(x) + \min_{y\in\mathbb{R}}\left\{ tL\left( \frac{x-y}{t}\right) - \text{Lip}(g)\cdot|x-y|\right\} = g(x) - t\max_{y\in\mathbb{R}}\left\{ \text{Lip}(g)\cdot\frac{|x-y|}{t} - L\left( \frac{x-y}{t}\right)\right\} = g(x) - t\max_{z\in\mathbb{R}}\left\{ |z|\,\text{Lip}(g) - L(z)\right\}. | (3.22) |
Note that $\max_{z\in\mathbb{R}}\left\{ |z|\,\text{Lip}(g) - L(z)\right\}$ is a finite number since $-L(z)$ is bounded above and is $-\infty$ outside of the range $[m_1, m_{N+1}]$. Thus we define
C := \max\left\{ |L(0)|, \ \max_{z\in\mathbb{R}}\left\{ |z|\,\text{Lip}(g) - L(z)\right\}\right\} | (3.23) |
and combine (3.21) and (3.22) to write
|u(x, t) - g(x)| \le Ct \ \text{ for all } x\in\mathbb{R} \text{ and } t > 0. | (3.24) |
Lemma 3.6. (a) If L and g are Lipschitz, one has for any $x\in\mathbb{R}$ and $0 < \hat{t} < t$ the inequalities
|u(x, t) - u(x, \hat{t})| \le C|t - \hat{t}| \,, | (3.25) |
where
C := \max\left\{ |L(0)|, \ \max_{z\in\mathbb{R}}\left\{ |z|\,\text{Lip}(g) - L(z)\right\}\right\}. | (3.26) |
(b) Under the conditions of Lemma 3.3 and part (a), one has for some C
|u(\hat{x}, \hat{t}) - u(x, t)| \le C\left\Vert (\hat{x}, \hat{t}) - (x, t)\right\Vert_2 | (3.27) |
where $\Vert\cdot\Vert_2$ is the usual Euclidean norm.
(c) If L and g are Lipschitz continuous, then $u:\mathbb{R}^2\to\mathbb{R}$ is differentiable a.e. on $\mathbb{R}\times(0,\infty)$.
Proof. (a) Let $x\in\mathbb{R}$ and $0 < \hat{t} < t$. By Lemma 3.3 one has
|u(x, t) - u(\hat{x}, t)| \le \text{Lip}(g)\,|x - \hat{x}| \,. | (3.28) |
From Lemma 3.4, we have for $0 \le s = \hat{t} < t$ the inequality
u(x, t) = \min_{y}\left\{ (t-s)L\left( \frac{x-y}{t-s}\right) + u(y, s)\right\} \ge \min_{y}\left\{ (t-s)L\left( \frac{x-y}{t-s}\right) + u(x, s) - \text{Lip}(g)\cdot|x-y|\right\} = u(x, s) + \min_{y}\left\{ (t-s)L\left( \frac{x-y}{t-s}\right) - \text{Lip}(g)\cdot|x-y|\right\} = u(x, s) + (t-s)\min_{y}\left\{ L\left( \frac{x-y}{t-s}\right) - \text{Lip}(g)\cdot\frac{|x-y|}{t-s}\right\} = u(x, s) + (t-s)\min_{z}\left\{ L(z) - \text{Lip}(g)\cdot|z|\right\} | (3.29) |
where we have used Lemma 3.3 (so that the Lipschitz constant of u in x is at most Lip(g)) in the second step and defined $z := \frac{x-y}{t-s}$. We can also write this as
u(x, t) \ge u(x, s) - (t-s)\max_{z}\left\{ \text{Lip}(g)\cdot|z| - L(z)\right\}. | (3.30) |
Using $C_1 := \max_{z}\left\{ \text{Lip}(g)\cdot|z| - L(z)\right\}$, which, as discussed above, is clearly finite, one has then
u(x, t) - u(x, s) \ge -C_1(t - s). | (3.31) |
The other direction in the inequality follows from Lemma 3.4 directly. Substituting x in place of the y in the minimizer, we have
u(x, t) = \min_{y}\left\{ (t-s)L\left( \frac{x-y}{t-s}\right) + u(y, s)\right\} \le (t-s)L\left( \frac{x-x}{t-s}\right) + u(x, s) = (t-s)L(0) + u(x, s) | (3.32) |
yielding the inequality
u(x, t) - u(x, s) \le (t-s)L(0). | (3.33) |
Combining (3.31) and (3.33) yields the Lipschitz inequality in t, namely,
|u(x, t) - u(x, s)| \le C|t - s| \,. | (3.34) |
(b) This follows from the triangle inequality, Lemma 3.3 and part (a).
(c) This follows from Rademacher's theorem and part (b).
Analogous to theorems in Section 3.3, [9], we have the following three theorems. The key idea here is that our versions allow one to deal with the introduction of potentially infinite values of the Legendre transform of the flux function.
Theorem 3.7. Let x∈R and t>0. Let u be defined by the Hopf-Lax formula and differentiable at (x,t)∈R×(0,∞). Then
u_t(x, t) + H(u_x(x, t)) = 0.
Proof. Fix $q\in[m_1, m_{N+1}]$ and $h > 0$. By Lemma 3.4, we have
u(x + hq, t + h) = \min_{y\in\mathbb{R}}\left\{ hL\left( \frac{x + hq - y}{h}\right) + u(y, t)\right\}. | (3.35) |
Once again, since there are some finite values over which we are taking the minimum, the expression is well-defined. Upon setting y = x, we can only obtain a larger quantity on the RHS, yielding
u(x + hq, t + h) \le hL(q) + u(x, t). | (3.36) |
Hence, for $q\in[m_1, m_{N+1}]$, we have the inequality
\frac{u(x + hq, t + h) - u(x, t)}{h} \le L(q). | (3.37) |
Since we are assuming that u is differentiable at (x,t), the limit of the LHS of (3.37) exists, thereby yielding
\partial_t u(x, t) + q\,Du(x, t) \le L(q), \quad\text{i.e.,}\quad \partial_t u(x, t) + q\,Du(x, t) - L(q) \le 0. | (3.38) |
We now use the equality $\sup_{q\in\mathbb{R}}\{qw - L(q)\} =: H(w)$, writing
H(Du(x, t)) = \sup_{q\in\mathbb{R}}\left\{ q\,Du - L(q)\right\}. | (3.39) |
Note that the values of q for which $L(q) = \infty$ are clearly not candidates for the sup since $-L(q) = -\infty$. Hence, we can take the sup over all q in (3.38) (which is equivalent to taking the sup over $q\in[m_1, m_{N+1}]$) to obtain
\sup_{q\in\mathbb{R}}\left\{ \partial_t u(x, t) + q\,Du(x, t) - L(q)\right\} \le 0, \quad\text{or}\quad \partial_t u(x, t) + H(Du(x, t)) \le 0. | (3.40) |
Now use the definition
u(x, t) = \min_{y\in\mathbb{R}}\left\{ tL\left( \frac{x-y}{t}\right) + g(y)\right\}. | (3.41) |
Since L and g are continuous, the minimizer exists and for some $z\in\mathbb{R}$ depending on (x,t) we have
u(x, t) = tL\left( \frac{x-z}{t}\right) + g(z). | (3.42) |
Define $s := t - h$ and $y := \frac{s}{t}x + \left( 1 - \frac{s}{t}\right) z$, so that $\frac{x-z}{t} = \frac{y-z}{s}$. Then we can write, using the definition of u(y,s),
u(x, t) - u(y, s) = tL\left( \frac{x-z}{t}\right) + g(z) - \min_{\hat{y}}\left\{ sL\left( \frac{y-\hat{y}}{s}\right) + g(\hat{y})\right\}. | (3.43) |
By substituting z (defined by (3.42)) in place of $\hat{y}$ in this expression, we subtract at least as much and obtain the inequality
u(x, t) - u(y, s) \ge tL\left( \frac{x-z}{t}\right) + g(z) - \left\{ sL\left( \frac{y-z}{s}\right) + g(z)\right\} = (t-s)L\left( \frac{x-z}{t}\right) | (3.44) |
by virtue of the equality $\frac{x-z}{t} = \frac{y-z}{s}$. Note that by definition, $L\left( \frac{x-z}{t}\right) = L\left( \frac{y-z}{s}\right) < \infty$, so there is no divergence problem there. Now replace y with its definition above, and use $t - s = h$ to write (3.44) as
\frac{u(x, t) - u\left( x + \frac{h}{t}(z - x), \ t - h\right)}{h} \ge L\left( \frac{x-z}{t}\right). | (3.45) |
Since we are assuming that the derivative exists at (x,t) we can take the limit as h→0+ and obtain
\frac{x-z}{t}\,Du(x, t) + \partial_t u(x, t) \ge L\left( \frac{x-z}{t}\right) | (3.46) |
We use the definition of H again, and write
u_t(x, t) + H(Du(x, t)) = u_t(x, t) + \max_{q\in\mathbb{R}}\left\{ q\,Du - L(q)\right\} \ge u_t(x, t) + \frac{x-z}{t}\,Du(x, t) - L\left( \frac{x-z}{t}\right) | (3.47) |
where we have chosen one value of q, namely $\frac{x-z}{t}$, to obtain the inequality. Note, again, that from the original definition in (3.42), $L\left( \frac{x-z}{t}\right)$ must be finite. Hence, the RHS of (3.47) is well-defined. Combining (3.46) and (3.47) yields the inequality
\partial_t u(x, t) + H(Du(x, t)) \ge 0. | (3.48) |
Combining (3.48) with (3.40), we obtain the result that u satisfies the Hamilton-Jacobi equation at (x,t).
Theorem 3.8. The function u(x,t) defined by the Lax-Oleinik formula [9] is Lipschitz continuous, differentiable a.e. in R×(0,∞) and solves the initial value problem
u_t + H(u_x) = 0 \ \text{ a.e. in } \mathbb{R}\times(0,\infty), \qquad u(x, 0) = g(x) \ \text{ for all } x\in\mathbb{R} \,.
Definition 3.9. We say that w∈L∞(R×(0,∞)) is an integral solution of
w_t + H(w)_x = 0 \ \text{ in } \mathbb{R}\times(0,\infty), \qquad w(x, 0) = h(x) \ \text{ for all } x\in\mathbb{R}
if for all test functions ϕ:R×[0,∞)→R (i.e., ϕ that are smooth and have compact support) one has the identity
\int_0^{\infty}\int_{-\infty}^{\infty} w\phi_t \, dx\, dt + \int_{-\infty}^{\infty} h\,\phi\Big|_{t=0}\, dx + \int_0^{\infty}\int_{-\infty}^{\infty} H(w)\phi_x \, dx\, dt = 0.
Theorem 3.10. Under the assumptions that g is Lipschitz and that H is polygonal and convex (as described above), the function w(x,t):=∂xu(x,t) where u is the Hopf-Lax solution (2.4) is an integral solution of the initial value problem for the conservation law above.
Remark 3.11. We follow the notation of [9] in calling the solution described in Theorem 3.10 an integral solution. This kind of solution is often also referred to as a solution in the distribution sense.
This is the analog of Theorem 2, Section 3.4 of [9], but the statement of the theorem there is somewhat different.
Proof. From Theorem 3.8 above we know that u is Lipschitz continuous, differentiable a.e. in $\mathbb{R}\times(0,\infty)$, and solves the Hamilton-Jacobi initial value problem subject to initial condition h(x), where $g(x) = \int_0^x h(z)\,dz$. We multiply $u_t + H(u_x) = 0$ by a test function $\phi_x$ and integrate over $\mathbb{R}\times(0,\infty)$. Upon integrating by parts one obtains the relation above in the definition of an integral solution. The integration by parts operations are justified by the fact that u(x,t) is Lipschitz in both x and t. Also,
w(x, 0) = \partial_x u(x, 0) = g'(x).
Notably, we have used the largest of the minimizers rather than the least to improve on the result of [9] by requiring only convexity instead of uniform convexity of the flux function.
Theorem 4.1. Suppose H is polygonal (with finitely many break points), convex, H(0)=0, and g is differentiable. Then for any (x,t) there exists y∗(x,t) that is defined as the greatest minimizer, i.e.,
\min_{y\in\mathbb{R}}\left\{ tL\left( \frac{x-y}{t}\right) + g(y)\right\} = tL\left( \frac{x-y^{\ast}(x,t)}{t}\right) + g\left( y^{\ast}(x,t)\right) | (4.1) |
and any other number $\hat{y}$ that minimizes the left hand side satisfies $\hat{y}\le y^{\ast}$.
Remark 4.2. By using the largest minimizer instead of the least as in the classical theorems, we obtain a particular inequality below that is a consequence of convexity rather than from the stricter assumption of uniform convexity.
Remark 4.3. (Minimizers) We first illustrate the key idea. The minimizers must lie either at the vertices of L or on the locally differentiable parts of L. We suppress t and suppose x = 0. For fixed x, t, in order to have a non-isolated set of minimizers of $f(y;x,t) + g(y)$, they need to be on the differentiable part of f (i.e., non-vertex). This latter case means that on some interval, e.g., [a,b], one has $f(y;0,t) + g(y) = 0$ (note that we can always shift up or down, so we can adjust the constant to 0). On this stretch of y we can write
f(y; 0) = my \quad\text{and}\quad -g(y) = my \,.
Thus all $y\in[a,b]$ are minimizers. If we increase x slightly we obtain
f(y; x) = m(y - x) \quad\text{and}\quad -g(y) = my
so that
f(y; x) + g(y) = m(y - x) - my = -mx.
This means that the minimum is lower (if x > 0), but again, all y are minimizers. In computing the derivative
\partial_x \min_{y}\left\{ f(y; x) + g(y)\right\}
we see that we can use any y and we will obtain the same result. I.e., if $\hat{y}$ is any minimizer, then we have (as one can see graphically, or from computation), where g is differentiable,
\partial_x \min_{y}\left\{ f(y; x) + g(y)\right\} = -\partial_y f(\hat{y}; x) = g'(\hat{y}).
Hence, we can take, for example, the largest of these $\hat{y}$, or $\sup\hat{y}$, since the derivatives are constant (and f and g are $C^1$, so we can use continuity).
Proof. Note first that, by definition of the Legendre transform, one has $L(q) < \infty$ if and only if $q\in[m_1, m_{N+1}]$. Also, the interval is closed, since one has
L(m_{N+1}) = \sup_{q}\left\{ m_{N+1}\,q - H(q)\right\} = c | (4.2) |
for some constant $c\in\mathbb{R}$, and similarly at the $m_1$ endpoint. Hence, for fixed x and t, we define
v(y) := tL\left( \frac{x-y}{t}\right) + g(y) | (4.3) |
and note that v is continuous, since g and L are continuous. A continuous function on a closed, bounded interval attains at least one minimizer, which we will call $y_1\in[x - m_{N+1}t, \ x - m_1 t]$, and we denote $v(y_1) =: b$. Note that v is infinite outside of this interval. Now, for any index set A, let $\{y_\alpha\}_{\alpha\in A}$ be the set of points such that $y_\alpha\in[x - m_{N+1}t, \ x - m_1 t]$ and $v(y_\alpha) = v(y_1)$. Let $y^{\ast} := \sup\{y_\alpha : \alpha\in A\}$, which exists since it is a bounded set of reals. Thus there is a sequence $\{y_j\}_{j=1}^{\infty}$ such that $\lim_{j\to\infty} y_j = y^{\ast}$. By continuity of v one has then that
v(y^{\ast}) = \lim_{j\to\infty} v(y_j) = \lim_{j\to\infty} b = b. | (4.4) |
Thus, y∗ is a minimizer, there is no minimizer that is greater than y∗, and it is unique by definition of supremum.
We write $y^{\ast}(x,t)$ for the largest minimizer for a given x and t. We define $y_1^{\ast} := y^{\ast}(x_1, t)$ and $y_2^{\ast} := y^{\ast}(x_2, t)$. We have the following, analogous to [9] but using convexity that may not be strict.
Lemma 4.4. If x1<x2, then
tL\left( \frac{x_2-y_1^{\ast}}{t}\right) + g(y_1^{\ast}) \le tL\left( \frac{x_2-y}{t}\right) + g(y) \quad\text{if } y < y_1^{\ast}.
Proof. For any λ∈[0,1] and r,s∈R we have from convexity
L(\lambda r + (1-\lambda)s) \le \lambda L(r) + (1-\lambda)L(s). | (4.5) |
Now let $x_1 < x_2$ and for $y < y_1^{\ast}$ define $\lambda$ by
\lambda := \frac{y_1^{\ast} - y}{x_2 - x_1 + y_1^{\ast} - y} \,. | (4.6) |
By assumption, $x_1 < x_2$ and $y < y_1^{\ast}$, so that $\lambda\in(0,1)$. Let $r := (x_1 - y_1^{\ast})/t$ and $s := (x_2 - y)/t$. A computation shows
\lambda r + (1-\lambda)s = (x_2 - y_1^{\ast})/t \,. | (4.7) |
This yields
L\left( \frac{x_2-y_1^{\ast}}{t}\right) \le \lambda L\left( \frac{x_1-y_1^{\ast}}{t}\right) + (1-\lambda)L\left( \frac{x_2-y}{t}\right). | (4.8) |
Next, we interchange the roles of r and s, i.e., $\hat{r} := (x_2 - y)/t$ and $\hat{s} := (x_1 - y_1^{\ast})/t$, and note that
\lambda\hat{r} + (1-\lambda)\hat{s} = (x_1 - y)/t. | (4.9) |
Thus, convexity implies
L\left( \frac{x_1-y}{t}\right) \le (1-\lambda)L\left( \frac{x_1-y_1^{\ast}}{t}\right) + \lambda L\left( \frac{x_2-y}{t}\right). | (4.10) |
Adding (4.8) and (4.10) yields,
L\left( \frac{x_2-y_1^{\ast}}{t}\right) + L\left( \frac{x_1-y}{t}\right) \le L\left( \frac{x_1-y_1^{\ast}}{t}\right) + L\left( \frac{x_2-y}{t}\right). | (4.11) |
By definition of $y_1^{\ast} := y^{\ast}(x_1, t)$ as a minimizer for $x_1$ (with t still fixed) we have
tL\left( \frac{x_1-y_1^{\ast}}{t}\right) + g(y_1^{\ast}) \le tL\left( \frac{x_1-y}{t}\right) + g(y) \,. | (4.12) |
Upon multiplying (4.11) by t and adding to (4.12) we obtain, as two of the L terms cancel,
tL\left( \frac{x_2-y_1^{\ast}}{t}\right) + g(y_1^{\ast}) \le tL\left( \frac{x_2-y}{t}\right) + g(y), | (4.13) |
provided, still, that $y < y_1^{\ast}$. Hence, if $y_2^{\ast}$ is the largest minimizer for $x_2$, it must be greater than or equal to $y_1^{\ast}$: any value $y < y_1^{\ast}$ satisfies (4.13), so the largest minimizer at $x_2$ cannot lie below $y_1^{\ast}$.
An immediate consequence of this result is the following:
Theorem 4.5. For each fixed t, as a function of x, $y^{\ast}(x,t)$ is non-decreasing and is equal a.e. in x to a differentiable function.
Proof. From previous calculations we know that L is polygonal convex and is finite only in the closed interval $[m_1, m_{N+1}]$. Also, we have $L\ge 0$. Now we apply Lemma 4.4 as follows. By definition of $y_2^{\ast}$ we have
\min_{y\in\mathbb{R}}\left\{ tL\left( \frac{x_2-y}{t}\right) + g(y)\right\} = tL\left( \frac{x_2-y_2^{\ast}}{t}\right) + g(y_2^{\ast}). | (4.14) |
By Lemma 4.4, for any $y < y_1^{\ast}$ the expression $tL\left( \frac{x_2-y}{t}\right) + g(y)$ is already equal to or greater than $tL\left( \frac{x_2-y_1^{\ast}}{t}\right) + g(y_1^{\ast})$, so there cannot be a largest minimizer that is less than $y_1^{\ast}$. Hence, we conclude $y_2^{\ast}\ge y_1^{\ast}$.
This means that with $y^{\ast}(x,t)$ defined as the largest value that minimizes $tL\left( \frac{x-y}{t}\right) + g(y)$, i.e.,
y^{\ast}(x, t) = \arg^{+}\min_{y}\left\{ tL\left( \frac{x-y}{t}\right) + g(y)\right\}
we have that the function $y^{\ast}$ is a non-decreasing function of x for any fixed $t > 0$. This implies that it is continuous except for countably many values of x. Also, for each $t > 0$, one has that $y^{\ast}(x,t)$ is equal a.e. to a function $\tilde{y}(x)$ such that $\tilde{y}$ is differentiable in x and one has $\tilde{y}(x) = \int_0^x \tilde{y}'(s)\,ds + z(x)$, where z is non-decreasing and $z' = 0$ except on a set of measure zero ([23], p. 157).
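The monotonicity of Theorem 4.5 can be observed numerically. In the sketch below (our own; it uses the simplest polygonal flux H(q) = |q|, for which the minimization reduces to a sliding window, and an arbitrary g), the greatest minimizer is non-decreasing in x up to grid resolution:

```python
import numpy as np

# For H(q) = |q|, u(x,t) = min of g over [x-t, x+t]; check that the greatest
# minimizer y*(x,t) is non-decreasing in x for fixed t.

g = lambda y: np.sin(3.0 * y) + 0.5 * y

def y_star(x, t, n=40001):
    ys = np.linspace(x - t, x + t, n)
    vals = g(ys)
    return ys[np.flatnonzero(vals <= vals.min() + 1e-12)[-1]]

xs = np.linspace(-2.0, 2.0, 201)
ystars = np.array([y_star(x, 1.0) for x in xs])
print(np.all(np.diff(ystars) >= -1e-3))   # True, up to grid resolution
```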
Lemma 4.6. Let g be differentiable,
v(x, t) := tL\left( \frac{x-y}{t}\right) + g(y), | (4.15) |
w(x, t) := \partial_x v(x, t) = \partial_x \min_{y\in\mathbb{R}}\left\{ tL\left( \frac{x-y}{t}\right) + g(y)\right\} \,. | (4.16) |
Suppose that $\hat{y}(x,t)$ is the unique minimizer of v(x,t) and that $L\left( \frac{x-y}{t}\right)$ is differentiable at $\hat{y}$. Then
w(x, t) = g'\left( \hat{y}(x, t)\right) = L'\left( \frac{x-\hat{y}}{t}\right) \,. | (4.17) |
Remark 4.7. When we take the derivative of the minimum, note that the uniqueness of the minimizer is the key issue. If there is more than one minimizer, as we vary y in order to take the derivative, one of the minimizers may become irrelevant if the other minimum moves lower. This issue will be taken up in the subsequent theorem.
Proof. Suppose that x and t are fixed and that $\hat{y}(x,t)$ is the unique minimizer of v(x,t). Since L and g are differentiable, and $\hat{y}(x,t)$ is also differentiable (since $\hat{y}$ is the only minimizer, we can apply the previous result on the greatest minimizer), we have the calculations:
0 = \partial_y\left\{ tL\left( \frac{x-y}{t}\right) + g(y)\right\}, | (4.18) |
i.e.,
L'\left( \frac{x-\hat{y}(x,t)}{t}\right) = g'\left( \hat{y}(x,t)\right) \,. | (4.19) |
Note that the minimum of v will not occur at the minimum of g unless L has slope zero. We have then
\begin{align} w(x, t) & := \partial_x\min_{y\in\mathbb{R}}\left\{ tL\left( \frac{x-y}{t}\right) + g(y)\right\} \nonumber\\ & = \partial_x\left\{ tL\left( \frac{x-\hat{y}(x,t)}{t}\right) + g\left( \hat{y}(x,t)\right)\right\} \nonumber\\ & = tL'\left( \frac{x-\hat{y}(x,t)}{t}\right)\cdot\left( \frac{-\partial_x\hat{y}(x,t)}{t} + \frac{1}{t}\right) + g'\left( \hat{y}(x,t)\right)\partial_x\hat{y}(x,t) \,. \end{align} | (4.20) |
The identity (4.19) implies cancellation of the first and third terms, while the remaining term equals $g'(\hat{y}(x,t))$ by (4.19) again, yielding
w(x, t) = g'\left( \hat{y}(x, t)\right) \,. | (4.21) |
Lemma 4.8. Let x, t be fixed, and assume the same conditions on L and g. If \hat{y}\left(x, t\right) is the unique minimizer of v\left(x, t\right) and occurs at a vertex of L\left(\frac{x-y}{t}\right), then
w(x, t) = g'\left( \hat{y}(x, t)\right) \,.
Proof. Note that $L(z) < \infty$ if and only if $z\in[m_1, m_{N+1}]$. Since the vertical coordinate of the vertex of L remains constant as one increases x, the change in the minimum is equal to $g'(y)$ at that point. I.e., one has
\begin{align} w(x, t) & = \partial_x\min_{y\in\mathbb{R}}\left\{ tL\left( \frac{x-y}{t}\right) + g(y)\right\} \nonumber\\ & = \partial_x\left\{ tL\left( \frac{x-\hat{y}(x,t)}{t}\right) + g\left( \hat{y}(x,t)\right)\right\} \nonumber\\ & = g'\left( \hat{y}(x, t)\right) \,. \end{align} | (4.22) |
At the endpoints, $y = x - m_{N+1}t$ and $y = x - m_1 t$, the situation is the same, since as one varies x, the value of L on one side has an infinite slope (see Figure 2). Note that this argument does not depend on $\hat{y}$ being differentiable.
Theorem 4.9. Let g be differentiable and L convex polygonal as above. For fixed t>0 and a.e. x one has
w(x, t) = g'\left( y^{\ast}(x, t)\right) \,. | (4.23) |
Proof. Since g is differentiable (for all x), any minimum of $\left\{ tL\left( \frac{x-y}{t}\right) + g(y)\right\}$ must occur at a point $\hat{y}(x,t)$ where L has a vertex (including the endpoints, see Figure 3) or at a point where $v'(y) = 0$. There are thus two possible types of minimizers: Type (A), occurring at the vertices of L, and Type (B), occurring on the differentiable (i.e., flat) parts of L. These two types are illustrated in Figure 3. From the lemmas above, we know that when there is a single minimizer $\hat{y}$, the conclusion follows. Thus we consider the possibility of more than one minimizer $\hat{y}_j$.
For a given x, t it is clear that there can only be finitely many minimizers of Type (A), i.e., at most the number of vertices. Although there may be infinitely many minimizers $\hat{y}_j$ of Type (B), we know that $g'(\hat{y}(x,t)) = L'\left( \frac{x-\hat{y}}{t}\right)$, so that there are only finitely many values of $g'(\hat{y}(x,t))$ regardless of the type of minimizer. For any x, t we let $y^{\ast}(x,t)$ be the largest of the minimizers, which is well-defined (see the discussion of the supremum at the end of this proof). From the earlier theorem, we know that $y^{\ast}(x,t)$ is increasing in x (for fixed $t > 0$) and differentiable for a.e. x. In fact, if we focus on any minimizer $\hat{y}_j(x,t)$, we see that $\partial_x\hat{y}_j(x,t)$ exists for either type of minimum. If it is Type (A), then as we vary x, the vertex moves and the minimum shifts along the curve of g. Since g is differentiable, the location of the minimum varies smoothly in x, so $\partial_x\hat{y}_j(x,t)$ exists. If it is Type (B), then both L and g are differentiable, so it is certainly true that $\partial_x\hat{y}_j(x,t)$ exists.
For fixed (x,t) and each of the finitely many values of $g'(\hat{y}(x,t))$ we can determine the minimum of $g'(\hat{y}(x,t)) =: m$ (i.e., m depending on (x,t)). First, minimizers may correspond to vertices; if L has M vertices, there are at most M of those, since there can only be one minimizer for each vertex. Then we have a class of minimizers for each segment of $L\left( \frac{x-y}{t}\right)$, i.e., M+1 of those. We can take the largest minimizer in each class, since $g'$ will be the same within each class. When we differentiate with respect to x, we compare each of the J (an integer between 1 and 2M+1) minimizers. It is the least of these that will be relevant, since we are taking
\partial_x\min_{y\in\mathbb{R}}\left\{ tL\left( \frac{x-\hat{y}_j}{t}\right) + g(\hat{y}_j)\right\} \,.
In other words, as we vary x, we want to know how this minimum varies. Thus a minimizer is irrelevant if $tL\left( \frac{x-\hat{y}_j}{t}\right) + g(\hat{y}_j)$ does not move down as much as for another of the $\hat{y}_j$ as x increases. If $l\ne k$ and $g'(\hat{y}_k(x,t)) > g'(\hat{y}_l(x,t))$, then $\hat{y}_k(x,t)$ is irrelevant for points beyond x. On the other hand, if we have $g'(\hat{y}_k(x,t)) = g'(\hat{y}_l(x,t))$, then we obtain the same change in the minimum, and we can just take the larger of the two.
If we have minimizers of Type (B), then it is the furthest right segment that corresponds to the least $g'$, since we have the identity (see Lemma 4.6 above) $L'\left( \frac{x-\hat{y}(x,t)}{t}\right) = g'(\hat{y}(x,t))$.
In other words, either the $g'$ values are identical on some interval, in which case we have $w(x,t) = g'(\hat{y}_j(x,t)) = g'(\hat{y}_l(x,t))$, for example, and we can take either minimizer and obtain the same value for w(x,t), or one value is greater and is thus irrelevant.
Alternatively, if the derivatives are different, then the smaller g^{\prime} is the only one that is relevant. In either case we can take the largest value of \hat{y} and we have
w(x, t) = g'\left( y^{\ast}(x, t)\right) \,.
One remaining question is whether we have a largest minimizer $y^{\ast}(x,t)$. The segments (i.e., the lines of L) are on closed and bounded intervals, so the supremum of the points $\hat{y}(x,t)$ exists, and there is a sequence of points $\tilde{y}_m$ that converges to this supremum, $\tilde{y}$. Since L and g are continuous,
\lim_{m\to\infty}\left\{ tL\left( \frac{x-\tilde{y}_m}{t}\right) + g(\tilde{y}_m)\right\} = tL\left( \frac{x-\tilde{y}}{t}\right) + g(\tilde{y}) \,.
Thus $\tilde{y}$ is itself a minimizer, and we must have $\tilde{y} = y^{\ast}$, the greatest of the minimizers.
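Theorem 4.9 can also be checked numerically. This sketch is ours (again the simplest polygonal flux H(q) = |q| and an arbitrary smooth g); it compares a centered difference quotient of the Hopf-Lax minimum in x with $g'$ evaluated at the greatest minimizer:

```python
import numpy as np

# For H(q) = |q| the transform L vanishes on [-1,1] and is infinite elsewhere,
# so u(x,t) = min of g over the window [x-t, x+t]; we compare the numerical
# x-derivative of u with g'(y*), where y* is the greatest minimizer.

g = lambda y: np.sin(y)
gp = lambda y: np.cos(y)

def u_and_ystar(x, t, n=200001):
    ys = np.linspace(x - t, x + t, n)          # window where L((x-y)/t) = 0
    vals = g(ys)
    m = vals.min()
    y_star = ys[np.flatnonzero(vals <= m + 1e-12)[-1]]   # greatest minimizer
    return m, y_star

x, t, h = 0.4, 1.0, 1e-6
(u_plus, _), (u_minus, _) = u_and_ystar(x + h, t), u_and_ystar(x - h, t)
_, y_star = u_and_ystar(x, t)
print((u_plus - u_minus) / (2 * h), gp(y_star))          # should nearly agree
```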
Theorem 4.10. Suppose that g^{\prime} is BV. Then w\left(x, t\right) is BV in x for fixed t.
Proof. Since $g'$ is BV, it can be written as the difference of two increasing functions $h_1$ and $h_2$. Then $h_i(y^{\ast}(x,t))$ are increasing in x (since they are increasing functions of an increasing function), and hence $g'(y^{\ast}(x,t)) = h_1(y^{\ast}(x,t)) - h_2(y^{\ast}(x,t))$ is BV.
It is important to relate the solutions of the conservation law with the polygonal flux H to the solutions $w^{\varepsilon}$ corresponding to the smoothed and superlinearized flux function $\mathcal{H}^{\varepsilon}$. In particular, $\mathcal{H}^{\varepsilon}$ is also uniformly convex, and the hypotheses of the classical theorems are satisfied. Throughout this section we will assume $g\in C^1$.
There are two basic parts to this section. First, we show that H and $\mathcal{H}^{\varepsilon}$ have Legendre transforms that are pointwise separated by $C\varepsilon$, i.e., $\left\vert L(p) - \mathcal{L}^{\varepsilon}(p)\right\vert \le C\varepsilon$, and hence a similar bound holds for $f(y;x,t) = tL\left( \frac{x-y}{t}\right)$. We will also show that if there is a unique pair of minimizers, $y^{\varepsilon}(x,t)$ and $y(x,t)$, then they are also separated by at most $\tilde{C}\varepsilon$.
Second, we analyze $\left\vert w^{\varepsilon}(x,t) - w(x,t)\right\vert$ for a single minimizer of $tL\left( \frac{x-y}{t}\right) + g(y)$ and demonstrate that the result can be extended to a minimum that is attained by multiple, even uncountably many, minimizers $\{y_{\alpha}\}$.
Given a function that is locally integrable, one can mollify it using a standard convolution ([9], p. 741). Alternatively, we will use a mollification as in [11] in which the difference between a piecewise linear function and its mollification vanishes outside a small neighborhood of each vertex.
Lemma 5.1 (Smoothing). Suppose that G\left(y\right) is piecewise linear and satisfies
G'(y) = \begin{cases} \alpha < 0 & \text{if } y < y_m \\ \gamma > 0 & \text{if } y > y_m, \end{cases} | (5.1) |
$g(y) = \beta y$ for some $\beta\in(\alpha, \gamma)$, and $G_{\varepsilon}(y)$ is any function that satisfies
\sup_{y\in A}\left\vert G_{\varepsilon}(y) - G(y)\right\vert \le C_1\varepsilon | (5.2) |
for any given compact set A. Let $y_m^{\varepsilon} := \arg\min_{y\in A}\left\{ G_{\varepsilon}(y) - g(y)\right\}$. Then one has $y_m = \arg\min\left\{ G(y) - g(y)\right\}$ and
\left\vert y_m^{\varepsilon} - y_m\right\vert \le C_2\varepsilon | (5.3) |
where C_{2} depends on A, \alpha, \beta, \gamma.
Proof. The first assertion, that $y_m$ is the argmin, follows immediately from the properties assumed for g and $G'$. To prove the second assertion, i.e., the bound $\left\vert y_m^{\varepsilon} - y_m\right\vert \le C\varepsilon$, one defines $\Phi(y) := G(y) - g(y)$ and $\Phi_{\varepsilon}(y) := G_{\varepsilon}(y) - g(y)$. The graph of $\Phi := G - g$ is a v-shape with minimum value $\Phi(y_m) = 0$ (after subtracting a constant). We can draw parallel lines $C\varepsilon$ above and below $\Phi$ and observe that $\Phi_{\varepsilon}$ lies within these lines, as illustrated in Figure 4.
Note that a necessary condition for y_{m}^{\varepsilon} to be the argmin of \Phi_{\varepsilon} is that
\Phi_{\varepsilon}(y_m^{\varepsilon}) \le \Phi_{\varepsilon}(y_m) | (5.4) |
since $\Phi_{\varepsilon}(y_m^{\varepsilon})$ must be below all values $\Phi_{\varepsilon}(y)$. Both of the quantities $\Phi_{\varepsilon}(y_m^{\varepsilon})$ and $\Phi_{\varepsilon}(y_m)$ are within the bounds in the graph above, so that we can write this inequality in the form
\Phi(y_m^{\varepsilon}) - C\varepsilon \le \Phi_{\varepsilon}(y_m^{\varepsilon}) \le \Phi_{\varepsilon}(y_m) \le \Phi(y_m) + C\varepsilon \,. | (5.5) |
Thus $\Phi(y_m^{\varepsilon}) - \Phi(y_m) \le 2C\varepsilon$. By definition of $\Phi$, for $y_m^{\varepsilon} < y_m$ one has
(\alpha - \beta)(y_m^{\varepsilon} - y_m) \le 2C\varepsilon. | (5.6) |
So for $y_m^{\varepsilon} < y_m$ we have the restriction
y_m - y_m^{\varepsilon} \le \frac{2C}{\beta - \alpha}\,\varepsilon. | (5.7) |
In a similar way (using the slope $\gamma - \beta$ when $y_m^{\varepsilon} > y_m$) we obtain a restriction in the other direction and prove the lemma.
As discussed above, given any H\left(q\right) that is piecewise linear with finitely many break points, one can construct an approximation \mathcal{H}_{\varepsilon}\left(q\right) that has the following properties:
(a) $\left\vert \mathcal{H}_{\varepsilon}(q) - H(q)\right\vert \le C_1\varepsilon$ for all $q\in\mathbb{R}$, and (b) $\mathcal{H}_{\varepsilon}(q) - H(q) = 0$ if $\left\vert q - c_i\right\vert > C_2\varepsilon$, where $c_i$ is any break point of H.
Subsequently, all references to smoothing will mean that conditions (a) and (b) are satisfied. Note that $\mathcal{H}_{\varepsilon}(q)$ is convex (but not necessarily uniformly convex). Since it is smooth, we have $\mathcal{H}_{\varepsilon}''(q) \ge 0$.
In order to have uniform convexity we can add to \mathcal{H}_{\varepsilon }\left(\cdot\right) a term \delta q^{2} where 0 < \delta\leq\varepsilon and define
\mathcal{H}_{\varepsilon, \delta}(q) := \mathcal{H}_{\varepsilon}(q) + \delta q^2. | (5.8) |
Then $\mathcal{H}_{\varepsilon,\delta}''(q) \ge 2\delta > 0$, so that $\mathcal{H}_{\varepsilon,\delta}$ is uniformly convex and has two continuous derivatives.
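As a concrete instance of conditions (a), (b) and (5.8), the following sketch is our own construction (the mollification in [11] is smooth, while for brevity we round the kink with a quadratic cap, which is only $C^1$ but satisfies the same size estimates): it smooths H(q) = |q| near its break point and then adds the uniformly convexifying term.

```python
import numpy as np

# Smooth H(q) = |q| on [-eps, eps] with a quadratic matching value and slope at
# +/- eps, then add delta*q^2 (delta <= eps) for uniform convexity.

def H(q):
    return np.abs(q)

def H_eps(q, eps):
    q = np.asarray(q, dtype=float)
    inside = np.abs(q) <= eps
    return np.where(inside, q**2 / (2 * eps) + eps / 2, np.abs(q))

def H_eps_delta(q, eps, delta):
    return H_eps(q, eps) + delta * q**2

eps, delta = 1e-2, 1e-4
qs = np.linspace(-3.0, 3.0, 10001)
print(np.max(np.abs(H_eps(qs, eps) - H(qs))))   # O(eps); nonzero only near the kink
```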
Lemma 5.2 (Approximate Legendre Transform). Let
L(p) := \sup_{q\in\mathbb{R}}\left\{ pq - H(q)\right\}, \qquad \mathcal{L}_{\varepsilon, \delta}(p) := \sup_{q\in\mathbb{R}}\left\{ pq - \mathcal{H}_{\varepsilon, \delta}(q)\right\} \,. | (5.9) |
and let A be the (compact) subset of \mathbb{R} where L(p) is finite. For any p\in A one has
\left\vert \mathcal{L}_{\varepsilon, \delta}(p) - L(p)\right\vert \le C\varepsilon. | (5.10) |
Remark 5.3. Note that since H has finite maximal and minimal slopes, outside this range we have $L(p) := \sup_{q\in\mathbb{R}}\{pq - H(q)\} = \infty$. For $\mathcal{L}_{\varepsilon,\delta}(p)$ we will have a very large, though not infinite, value when p exceeds this range, so p outside this range is also irrelevant in terms of minimizers. Thus, without loss of generality we can restrict our attention to p in a compact set.
Proof. Note that $\arg\max_q\{pq - H(q)\} = \arg\min_q\{H(q) - pq\}$, and similarly for $\mathcal{H}_{\varepsilon,\delta}(q)$. We then use the lemma above by defining
G_{\varepsilon}(q) := \mathcal{H}_{\varepsilon, \delta}(q) - pq \quad\text{with}\quad \delta := \varepsilon^2 < 1.
We can then apply the previous lemma, with $G(q) := H(q) - pq$, noting that $\left\vert \mathcal{H}_{\varepsilon,\varepsilon^2}(q) - H(q)\right\vert \le C\varepsilon$ implies
\left\vert G_{\varepsilon}(q) - G(q)\right\vert \le C\varepsilon \,,
to conclude that with q_{m}: = \arg\min G\left(q\right) and q_{m} ^{\varepsilon}: = \arg\min G_{\varepsilon}\left(q\right)
\left\vert q_{m}-q_{m}^{\varepsilon}\right\vert \leq C\varepsilon. |
Since G and G_{\varepsilon} are Lipschitz, one has
\begin{align*} \left\vert \mathcal{L}_{\varepsilon, \varepsilon^{2}}\left( p\right) -L\left( p\right) \right\vert & = \left\vert \sup\limits_{q\in \mathbb{R}}\left\{ pq-\mathcal{H}_{\varepsilon, \varepsilon^{2}}\left( q\right) \right\} -\sup\limits_{q\in\mathbb{R}}\left\{ pq-H\left( q\right) \right\} \right\vert \\ & = \left\vert \left\{ pq_{m}^{\varepsilon}-\mathcal{H}_{\varepsilon , \varepsilon^{2}}\left( q_{m}^{\varepsilon}\right) \right\} -\left\{ pq_{m}-H\left( q_{m}\right) \right\} \right\vert \\ & \leq\left\vert p\right\vert \ \left\vert q_{m}^{\varepsilon}-q_{m}\right\vert +\left\vert \mathcal{H}_{\varepsilon, \varepsilon^{2}}\left( q_{m}^{\varepsilon}\right) -H\left( q_{m}\right) \right\vert \\ & \leq\left\vert p\right\vert \ \left\vert q_{m}^{\varepsilon}-q_{m}\right\vert +\left\vert \mathcal{H}_{\varepsilon, \varepsilon^{2}}\left( q_{m}^{\varepsilon}\right) -H\left( q_{m}^{\varepsilon}\right) \right\vert \\ & +\left\vert H\left( q_{m}^{\varepsilon}\right) -H\left( q_{m}\right) \right\vert \\ & \leq\tilde{C}\varepsilon \end{align*} |
since we noted above that \left\vert \mathcal{H}_{\varepsilon, \varepsilon^{2}}\left(q\right) -H\left(q\right) \right\vert \leq C\varepsilon and H is Lipschitz. This proves the Lemma.
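Continuing the numerical sketch above (the flux and grids are our own hypothetical choices), one can check the conclusion (5.10) directly: computing both suprema on a common grid, the gap between \mathcal{L}_{\varepsilon, \delta}\left(p\right) and L\left(p\right) is observed to be O\left(\varepsilon\right) for p in the finiteness set A .

```python
# Reuses H, H_eps_delta, and eps from the sketch above.
Q_GRID = np.linspace(-5.0, 5.0, 20001)

def legendre(flux, p):
    """Grid approximation of the Legendre transform sup_q { p*q - flux(q) }."""
    return max(p * q - flux(q) for q in Q_GRID)

# A = [min slope, max slope] = [-2.0, 1.5] is where L(p) is finite.
for p in (-1.5, -0.5, 1.0):
    gap = abs(legendre(lambda q: H_eps_delta(q, eps, eps**2), p) - legendre(H, p))
    print(p, gap)    # each gap is O(eps), consistent with (5.10)
```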
We define
f\left( y;x, t\right) = tL\left( \frac{x-y}{t}\right), \ \ f_{\varepsilon , \delta}\left( y;x, t\right) = t\mathcal{L}_{\varepsilon, \delta}\left( \frac{x-y}{t}\right) |
and consider any fixed t\in\left[0, T\right] for some T\in\mathbb{R}^{+} and x, y on bounded intervals. The bounds on L and \mathcal{L}_{\varepsilon, \delta} then imply
\left\vert f\left( y;x, t\right) -f_{\varepsilon, \delta}\left( y;x, t\right) \right\vert \leq C\varepsilon. |
Next, we claim that if f\left(y;x, t\right) +g\left(y\right) has a single minimizer, denoted by y^{\ast}\left(x, t\right), and y_{\varepsilon}^{\ast}\left(x, t\right) denotes a minimizer of f_{\varepsilon, \delta}\left(y;x, t\right) +g\left(y\right), then y_{\varepsilon}^{\ast}\left(x, t\right) \rightarrow y^{\ast}\left(x, t\right) a.e. in x. This is proven in the same way as Lemma 5.2, and analyzed in Section 4. Note that if the minimum is not within C\varepsilon of a vertex (i.e., not near a minimum of g), then the conclusion is immediate, since the mollification does not extend more than a distance C\varepsilon from each vertex.
Now, we want to compare the solution w\left(x, t\right) with w^{\varepsilon}\left(x, t\right) and assert that for any t>0 and a.e. x one has
\lim\limits_{\varepsilon\rightarrow0}w^{\varepsilon}\left( x, t\right) = w\left( x, t\right) . |
If we assumed that there is a single minimizer, y^{\ast}\left(x, t\right) then the result would be clear from the relations
w\left( x, t\right) = g^{\prime}\left( y^{\ast}\left( x, t\right) \right), \ \ w^{\varepsilon}\left( x, t\right) = g^{\prime}\left( y_{\varepsilon}^{\ast}\left( x, t\right) \right), |
and the fact that g^{\prime} is continuous and y_{\varepsilon}^{\ast}\left(x, t\right) \rightarrow y^{\ast}\left(x, t\right) .
The subtlety arises when we have more than one minimizer. Note from the earlier material on the sharp problem that we only need to consider finitely many minimizers, since there are only finitely many vertices and only finitely many segments of L. The minimizers can be at the vertices, or they may be on the segments. But if they are on the segments, the minimizers fall into finitely many classes that correspond to the same value of g^{\prime}\left(\hat{y}\left(x, t\right) \right) = L^{\prime}\left(\frac{x-\hat{y}\left(x, t\right) }{t}\right). Thus we can choose, for example, the largest of these minimizers.
We consider the two Types \left(A\right) and \left(B\right) and suppose first that there are two minimizers of the same type.
Type (A). Suppose that for some \left(x_{0}, t\right) we have \hat{y}_{1} and \hat{y}_{2} that are both Type \left(A\right) minimizers, i.e., at different vertices of L. If g^{\prime}\left(\hat{y}_{1}\right) < g^{\prime}\left(\hat{y}_{2}\right), then for sufficiently small \varepsilon the mollified versions also satisfy g^{\prime}\left(\hat{y}_{1}^{\varepsilon}\right) < g^{\prime}\left(\hat{y}_{2}^{\varepsilon}\right). This means that for x\not = x_{0} only \hat{y}_{1} and \hat{y}_{1}^{\varepsilon} will be relevant; the opposite inequality is handled analogously. If we have g^{\prime}\left(\hat{y}_{1}\right) = g^{\prime}\left(\hat{y}_{2}\right), then we may not have g^{\prime}\left(\hat{y}_{1}^{\varepsilon}\right) = g^{\prime}\left(\hat{y}_{2}^{\varepsilon}\right). However, since g^{\prime} is continuous, and we know that
\hat{y}_{1}^{\varepsilon}\rightarrow\hat{y}_{1}\ \ and\ \ \hat{y}_{2}^{\varepsilon}\rightarrow\hat{y}_{2} |
we have then
g^{\prime}\left( \hat{y}_{1}^{\varepsilon}\right) \rightarrow g^{\prime }\left( \hat{y}_{1}\right) \ \ and\ \ g^{\prime}\left( \hat{y}_{2}^{\varepsilon}\right) \rightarrow g^{\prime}\left( \hat{y}_{2}\right) . |
Thus we have, from our basic results w\left(x, t\right) = g^{\prime}\left(y^{\ast}\left(x, t\right) \right) and w^{\varepsilon}\left(x, t\right) = g^{\prime}\left(y_{\varepsilon}^{\ast}\left(x, t\right) \right), that
w^{\varepsilon}\left( x, t\right) \rightarrow w\left( x, t\right) . |
In Figure 5(a), we illustrate the case where g^{\prime }\left(y_{1}\right) = g^{\prime}\left(y_{2}\right).
Type (B). In this case we are on the straight portion of L, and the minimum must occur (as discussed in the earlier section) where \partial_{y}\left\{ tL\left(\frac{x_{0}-y}{t}\right) +g\left(y\right) \right\} = 0. This means that any minimum \hat{y} must satisfy L^{\prime}\left(\frac{x_{0}-\hat{y}}{t}\right) = g^{\prime}\left(\hat{y}\right) . If there are two minima on different segments, the one corresponding to the lowest value of g^{\prime} will be relevant. To see this, recall from Figure 2 that we start with break points c_{1} < ... < c_{N+1}, with c_{1} < 0 and c_{N+1}>0. These become the slopes of L, and when we define f\left(y;x, t\right) : = tL\left(\frac{x-y}{t}\right), the slopes of f range from -c_{N+1} < 0 on the leftmost segment to -c_{1}>0 on the rightmost. Any minimizer, \hat{y}, of
f\left( y;x, t\right) +g\left( y\right) = tL\left( \frac{x-y}{t}\right) +g\left( y\right) |
on the straight part of f must satisfy f^{\prime}\left(\hat{y}\right) +g^{\prime}\left(\hat{y}\right) = 0. On the last segment, for example, we have the requirement
g^{\prime}\left( \hat{y}\right) = -\left( -c_{1}\right) = c_{1}<0. |
For any of the previous segments we obtain g^{\prime}\left(y\right) = c_{j}>c_{1}. Hence if there are minimizers on previous segments, they become irrelevant as soon as we increase x.
The slope of the straight line f will be positive, i.e., -c_{1}, and the minimum in the illustration above will be just to the left of the minimum of g.
Hence, in the case of a minimum of Type \left(B\right), we see that only the minimizers on this rightmost segment are relevant (except on a set of measure zero). Although there may be infinitely many minimizers on this segment, they all yield the same value g^{\prime}\left(\hat{y}_{j}\right) = c_{k} (where k is the minimum index for which the slope c_{k} corresponds to a minimizer), so we can take the largest of them, y^{\ast}, and write w\left(x, t\right) = g^{\prime}\left(y^{\ast}\left(x, t\right) \right) = c_{k}. This is illustrated in Figure 5(b).
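To fix ideas, here is a small worked example with hypothetical break points of our own choosing (N+1 = 2, c_{1} = -1, c_{2} = 1):

H\left( q\right) = \begin{cases} -q-1, & q<-1\\ 0, & -1\leq q\leq1\\ q-1, & q>1 \end{cases} \ \ \Longrightarrow\ \ L\left( p\right) = \sup\limits_{q\in\mathbb{R}}\left\{ pq-H\left( q\right) \right\} = \begin{cases} \left\vert p\right\vert, & \left\vert p\right\vert \leq1\\ \infty, & \left\vert p\right\vert >1 \end{cases}

so that the slopes of L are exactly c_{1} = -1 and c_{2} = 1. Then f\left( y;x, t\right) = tL\left( \frac{x-y}{t}\right) = \left\vert x-y\right\vert for \left\vert x-y\right\vert \leq t, and its rightmost segment (y>x) has slope -c_{1} = 1. A Type (B) minimizer \hat{y} on that segment satisfies g^{\prime}\left( \hat{y}\right) = -\left( -c_{1}\right) = c_{1} = -1, and the envelope derivative is \partial_{x}\left\{ f+g\right\} = L^{\prime}\left( \frac{x-\hat{y}}{t}\right) = c_{1}, so the solution takes the value w = c_{1} on the corresponding set of x.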
If we have a combination of minimizers of the two types, the situation is similar. Except on a set of measure zero in x, we need only consider those values for which g^{\prime}\left(\cdot\right) is minimal; the other values cease to be minima as x is varied. As discussed above, we only need to consider finitely many of these minimizers.
Estimating \left\vert w^{\varepsilon}-w\right\vert is simpler in the Type \left(B\right) case, since we only have finitely many values for g^{\prime}. When we take the smoothed version \mathcal{H}^{\varepsilon}, yielding the smoothed \mathcal{L}^{\varepsilon}, each of these points \hat{y}_{j}\left(x, t\right) is approximated by a \hat{y}_{j}^{\varepsilon}\left(x, t\right) that corresponds to the same g^{\prime}. Note that on the flat (non-vertex) part of L, the smoothing as we have constructed it does not change the slope. In fact, L and \mathcal{L}^{\varepsilon} will be identical except on intervals of order \varepsilon about the vertices.
Once we have isolated the minimizer y^{\ast}, we have that y_{\varepsilon}^{\ast} is within C\varepsilon of y^{\ast}. Previous results then yield the convergence of w^{\varepsilon}\left(x, t\right) to w\left(x, t\right).
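A brief numerical check of this convergence may be useful. The sketch below reuses H, H_eps_delta, eps, and Q_GRID from the earlier sketches (all hypothetical), with a smooth datum g of our own choosing: it tabulates the two Legendre transforms, minimizes the Hopf-Lax functional on a y-grid, picks the largest near-minimizer, and compares w with w^{\varepsilon}.

```python
# Reuses H, H_eps_delta, eps, and Q_GRID from the sketches above.
P_GRID = np.linspace(-20.0, 20.0, 2001)

def legendre_table(flux):
    """Tabulate L(p) = max_q (p*q - flux(q)) over Q_GRID, for every p in P_GRID."""
    fq = np.array([flux(q) for q in Q_GRID])
    return np.array([(p * Q_GRID - fq).max() for p in P_GRID])

L_TAB  = legendre_table(H)
LE_TAB = legendre_table(lambda q: H_eps_delta(q, eps, eps**2))

Y_GRID = np.linspace(-8.0, 8.0, 8001)

def hopf_lax_w(L_tab, g, gp, x, t):
    """w(x,t) = g'(y*), with y* the largest minimizer of t*L((x-y)/t) + g(y).
    Outside the finiteness set A, the tabulated L is merely very large, which
    (as in Remark 5.3) is enough to rule those y out as minimizers."""
    vals = t * np.interp((x - Y_GRID) / t, P_GRID, L_tab) + g(Y_GRID)
    near = np.where(vals <= vals.min() + 1e-9)[0]
    return gp(Y_GRID[near[-1]])           # arg^+ min: the largest near-minimizer

g, gp = (lambda y: 1.0 - np.cos(y)), np.sin   # hypothetical smooth datum, g' = sin
for x in (-0.7, 0.0, 1.3):
    print(x, hopf_lax_w(L_TAB, g, gp, x, 1.0), hopf_lax_w(LE_TAB, g, gp, x, 1.0))
# The two columns agree up to O(eps), illustrating w^eps -> w a.e. in x.
```

We will also need the following technical lemma.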
Lemma 5.4. For the conservation law with the smoothed flux function \mathcal{H}_{\varepsilon, \delta}, define A_{\varepsilon, \delta} as the set of minimizers y\left(x, t\right) in the Hopf-Lax formula. Similarly, define A as the set of minimizers for the sharp problem with the piecewise linear flux function H. Then we have
\lim\limits_{\varepsilon\downarrow0}\sup A_{\varepsilon, \delta} = \sup A a.e. | (5.11) |
Theorem 5.5. For given t>0 and for a.e. x, the largest minimizer y^{\ast}\left(x, t\right) of the sharp problem satisfies the identity
w\left( x, t\right) = g^{\prime}\left( y^{\ast}\left( x, t\right) \right) | (5.12) |
by Theorem 4.9. In particular, this implies that
\lim\limits_{\varepsilon\rightarrow0}w^{\varepsilon}\left( x, t\right) = w\left( x, t\right) | (5.13) |
pointwise a.e.
Proof. Should the set A_{\varepsilon, \delta} consist of a single element, the proof is trivial. Therefore, assume that the minimizer in A_{\varepsilon, \delta} is not unique. We may consider without loss of generality the case of two such minimizers, as the arguments presented here are easily generalized to n such minimizers.
We consider two such minimizers at a point x_{0} for the sharp problem and denote them by y_{i}\left(x_{0}\right) (i = 1, 2). First take the case where both are Type (A) minimizers. We take the partial derivative with respect to x of the Hopf-Lax functional at the minimum point, which is well-defined since y_{1} also depends on x and the minimizer moves as one shifts x. Therefore, one has
g^{\prime}\left( y_{1}\left( x_{0}\right) \right) = \partial_{x}\left\{ tL\left( \frac{x-y_{1}\left( x\right) }{t}\right) +g\left( y_{1}\left( x\right) \right) \right\} _{x = x_{0}} | (5.14) |
and similarly for g^{\prime}\left(y_{2}\left(x_{0}\right) \right). If g^{\prime}\left(y_{1}\left(x_{0}\right) \right) \not = g^{\prime}\left(y_{2}\left(x_{0}\right) \right), then one of the two minimizers becomes irrelevant as x_{0} changes. Therefore, one sees that this case is confined to sets of measure zero and can be ignored.
Consequently, assume
g^{\prime}\left( y_{1}\left( x_{0}\right) \right) = g^{\prime}\left( y_{2}\left( x_{0}\right) \right) . | (5.15) |
We now want to examine y_{1}^{\varepsilon} and y_{2}^{\varepsilon}, the minimizers for the smoothed out version. We have suppressed the parameter \delta (by setting \delta = \varepsilon^{2}) and the time t for notational convenience. We then compute
\partial_{x}\left\{ t\mathcal{L}\left( \frac{x-y_{i}^{\varepsilon}\left( x\right) }{t}\right) +g\left( y_{i}^{\varepsilon}\left( x\right) \right) \right\} _{x = x_{0}} = g^{\prime}\left( y_{i}^{\varepsilon}\left( x_{0}\right) \right) . | (5.16) |
It is possible that one might have g^{\prime}\left(y_{1}^{\varepsilon}\left(x_{0}\right) \right) < g^{\prime}\left(y_{2}^{\varepsilon}\left(x_{0}\right) \right) (or the reverse inequality) despite having (5.15). However, this would imply that for the \varepsilon case, y_{2}^{\varepsilon} becomes irrelevant, due to g\left(y_{2}^{\varepsilon}\right) increasing faster as x is changed, and in this case y_{1}^{\varepsilon} would be left as the largest minimizer. In any case, g^{\prime} is continuous, so that
\begin{align} g^{\prime}\left( y_{1}^{\varepsilon}\left( x_{0}\right) \right) & \rightarrow g^{\prime}\left( y_{1}\left( x_{0}\right) \right) \nonumber\\ g^{\prime}\left( y_{2}^{\varepsilon}\left( x_{0}\right) \right) & \rightarrow g^{\prime}\left( y_{2}\left( x_{0}\right) \right) . \end{align} | (5.17) |
Hence, (5.15) and (5.17) imply g^{\prime}\left(y_{1}^{\varepsilon}\left(x_{0}\right) \right) \rightarrow g^{\prime}\left(y_{2}\left(x_{0}\right) \right) . Note that this result holds even though one may have y_{1}^{\varepsilon}\not \rightarrow y_{2}.
Thus, even with finitely many Type (A) minimizers, we have
w\left( x, t\right) = g^{\prime}\left( y^{\ast}\left( x, t\right) \right) = \lim\limits_{\varepsilon\rightarrow0}w^{\varepsilon}\left( x, t\right) | (5.18) |
Next, consider the situation with more than one Type (B) minimizer. Should these minimizers occur among points where L^{\prime}\left(\cdot\right) takes different values, the one with the smaller value of L^{\prime} becomes irrelevant by the same token as for multiple Type (A) minimizers. Therefore, assume without loss of generality that these minimizers both occur along the last segment, i.e., where f has slope -c_{1}, so that
g^{\prime}\left( \hat{y}_{i}\left( x_{0}\right) \right) = c_{1}. | (5.19) |
Then we have
\partial_{x}\left\{ tL\left( \frac{x-y_{i}\left( x\right) }{t}\right) +g\left( y_{i}\left( x\right) \right) \right\} = c_{1} | (5.20) |
so that all derivatives will be identical.
Next, for each y_{i} we have a y_{i}^{\varepsilon} such that \left\vert y_{i}^{\varepsilon}-y_{i}\right\vert < C\varepsilon, so one may write
\partial_{x}\min\limits_{y\in\mathbb{R}}\left\{ tL\left( \frac{x-y}{t}\right) +g\left( y\right) \right\} = c_{1} = \lim\limits_{\varepsilon\rightarrow0}g^{\prime}\left( y_{\varepsilon}^{\ast}\left( x, t\right) \right) | (5.21) |
where
y_{\varepsilon}^{\ast}: = \arg^{+}\min\limits_{y\in\mathbb{R}}\left\{ t\mathcal{L}\left( \frac{x-y}{t}\right) +g\left( y\right) \right\} \ \ (\text{the largest minimizer}). | (5.22) |
This leaves the one remaining case, in which there is one minimizer of each type, i.e., one Type (A) minimizer and one Type (B) minimizer. As in the above cases, one can assume that g^{\prime} has the same value at both of these minimizers, for if not, one of the two would cease to be relevant, so that the set of such x (for fixed t) where this occurs is of measure zero. In a similar fashion to the above cases, one has (5.13). Together with Lemma 5.4, this completes the proof.
In the preceding sections, we have shown that w\left(x, t\right) = g^{\prime}\left(y^{\ast}\left(x, t\right) \right) is a solution to the conservation law (1.1). In this section we establish a criterion under which it is the only solution, by characterizing w as the unique solution constructed from the limit of the functions w^{\varepsilon} as \varepsilon\downarrow0, which are unique provided g^{\prime} is continuous. This approach is reminiscent of the well-known vanishing viscosity limit for Burgers' equation.
Definition 6.1. Let H\in C^{0}\left(\mathbb{R}\right) and suppose further that H is differentiable a.e. We say w\left(x, t\right) is a limiting mollified solution to the initial value problem for the conservation law for the flux function H if
(i) There exist smooth \mathcal{H}_{j} that converge uniformly on compact sets to H.
(ii) The solutions w_{j} for the conservation law with flux \mathcal{H}_{j} converge, for each t>0, to w a.e. in x.
(iii) For any other sequence \mathcal{\tilde{H}}_{j} and solutions \tilde{w}_{j} satisfying (i) and (ii), we have
\lim\limits_{j\rightarrow\infty}\tilde{w}_{j} = \lim\limits_{j\rightarrow\infty}w_{j} = w. | (6.1) |
Remark 6.2. For the case of a polygonal flux H with break points \left\{ c_{i}\right\} _{i = 1}^{n}, clearly H\in C^{\infty}\left(\mathbb{R}\backslash\left\{ c_{i}\right\} _{i = 1}^{n}\right) and H is continuous on all of \mathbb{R}. Indeed, we can show rigorously that this case satisfies Definition 6.1.
Theorem 6.3. Let g^{\prime} be continuous, H a polygonal flux function, and w be the solution of the corresponding conservation law. Then
w\left( x, t\right) = g^{\prime}\left( y^{\ast}\left( x, t\right) \right) | (6.2) |
is the unique limiting mollified solution satisfying w\left(x, 0\right) = g^{\prime}\left(x\right) .
Proof. Since the w_{j} are weak solutions, and w_{j}\rightarrow w for any such sequence of smoothings (as shown in Section 5), the result follows.
In earlier sections, when considering the piecewise linear flux function H, we chose initial conditions that were smooth. An important version of this problem deals with initial conditions g^{\prime} that are not smooth but instead piecewise constant, as is treated in [13]. The values of these constants are taken as a subset of the break points \left\{ c_{i}\right\} _{i = 1}^{N} of H. Consequently, as one can see from Figure 2, the values of the initial condition match the slopes of the Legendre transform L of the flux function. Furthermore, direct computation verifies that the range of the solution w\left(x, t\right) will also be contained in \left\{ c_{i}\right\} _{i = 1}^{N}.
The analysis of the minimizers is similar to that of the previous sections, except that we have an additional type of minimizer, Type (C), in which a vertex of f\left(y;x, t\right) : = tL\left(\frac{x-y}{t}\right) coincides with a vertex of g. For such a minimum, the derivative g^{\prime}\left(\hat{y}\left(x, t\right) \right) does not exist, as in general the limits from the left and right do not agree. However, there are only finitely many vertices of L, and hence there are at most finitely many Type (C) minimizers for a fixed t.
For Type (A) and (B) minimizers, we proceed in the same way, including the smoothing. Although the Type (B) minimum now occurs at a vertex of g, the analysis of the x-derivative yields the same result. Note that in this case one also needs to mollify g^{\prime}, yielding the following results.
Theorem 7.1. Let L be polygonal convex and g^{\prime} be piecewise constant. Then
\begin{align} w\left( x, t\right) & = g^{\prime}\left( y^{\ast}\left( x, t\right) \right) \ \ \text{when the minimizer } y^{\ast}\text{ is at a vertex of } L\nonumber\\ & = L^{\prime}\left( \frac{x-y^{\ast}\left( x, t\right) }{t}\right) \ \ \text{when the minimizer } y^{\ast}\text{ is at a vertex of } g \end{align} | (7.1) |
a.e. in x (for fixed t>0) is a solution to (1.1). Note that for fixed t>0 and a.e. x, one of the cases in (7.1) occurs. Furthermore,
w^{\varepsilon}\left( x, t\right) = g_{\varepsilon}^{\prime}\left( y_{\varepsilon}^{\ast}\left( x, t\right) \right) \rightarrow w\left( x, t\right) \text{ a.e. in } x. | (7.2) |
Note that one can apply the limiting mollified uniqueness concept in the same manner as earlier.
Proof. To obtain the result w^{\varepsilon}\rightarrow w, we observe that if h: = g^{\prime} is piecewise constant, then g\left(y\right) : = \int_{0}^{y}h\left(s\right) ds is Lipschitz with Lip\left(g\right) = \max_{i}\left\{ \left\vert c_{i}\right\vert \right\} and is differentiable a.e. by Rademacher's theorem.
Note that the only subtlety is for Type (C), in which the minimizer of tL^{\varepsilon}\left(\frac{x-y}{t}\right) +g_{\varepsilon}\left(y\right) may be on one segment of g while the minimizer of tL\left(\frac{x-y}{t}\right) +g\left(y\right) is on the adjacent one. But this issue is confined to a set of measure zero in x for a given t.
When restricting the values of the initial conditions to the break points of H, we obtain the following more specific result.
Corollary 1. If L is polygonal convex with slopes \left\{ c_{i}\right\} _{i = 1}^{N} (the break points of H) and the range of g^{\prime} is contained in \left\{ c_{i}\right\}, then the solution w\left(x, t\right) takes on values only in \left\{ c_{i}\right\}.
Proof. The minimizers of tL\left(\frac{x-y}{t}\right) +g\left(y\right) will consist exclusively of the vertices of g and of tL\left(\frac{x-y}{t}\right). If \hat{y} is a Type (A) minimizer (i.e., a vertex of L but on the differentiable portion of g), then
\partial_{x}\left\{ tL\left( \frac{x-\hat{y}\left( x, t\right) }{t}\right) +g\left( \hat{y}\left( x, t\right) \right) \right\} = g^{\prime}\left( \hat{y}\left( x, t\right) \right) | (7.3) |
as before. If it is of Type (B), i.e., \hat{y} is at a vertex of g, then \partial_{x}\left\{...\right\} = L^{\prime}\left(\frac{x-\hat{y}}{t}\right) . In both cases, \partial_{x}\left\{...\right\} \in\left\{ c_{i}\right\} . Hence, this is an alternative proof of [13], p. 74.
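The corollary can also be probed numerically by differencing the Hopf-Lax value function in x. The sketch below continues the ones above (L_TAB, P_GRID, Y_GRID as before); the Riemann-type datum is our own choice, with values among the break points \left\{-1.0, 0.5\right\} of the hypothetical example flux, which are the slopes of its transform L.

```python
# Continues the sketch above (L_TAB, P_GRID, Y_GRID as before).
g_pc = lambda y: np.where(y < 0.0, 0.5 * y, -1.0 * y)   # g' takes values in {0.5, -1.0}

def value_u(x, t):
    """Hopf-Lax value function u(x,t) = min_y { t*L((x-y)/t) + g(y) }."""
    return (t * np.interp((x - Y_GRID) / t, P_GRID, L_TAB) + g_pc(Y_GRID)).min()

h = 0.05            # step well above the y-grid spacing, so grid noise stays small
for x in np.linspace(-1.5, 1.5, 7):
    w_x = (value_u(x + h, 1.0) - value_u(x - h, 1.0)) / (2 * h)
    print(x, w_x)   # in {0.5, -1.0} except at the stationary shock (x = 0),
                    # where the difference quotient straddles the jump
```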
In this paper, we have shown a number of important extensions of classical results. In the classical Lax-Oleinik theory, more restrictive assumptions such as C^{2} smoothness and uniform convexity of the flux function are required. In many of our results, we have proven rigorous theorems with only a C^{0}, (non-strictly) convex flux function H. This is particularly significant as it facilitates an understanding of the behavior introduced by sharp corners, i.e., points where the flux function fails to have a derivative in the classical (non-weak) sense and is nowhere strictly convex.
In fact, when the assumptions mentioned above are relaxed, the uniqueness of the minimizers does not, in general, persist. Indeed, there is the potential to have the minimum achieved at an infinite, even uncountable, number of points. However, we have shown that this difficulty can be addressed by considering the greatest of these minimizers, y^{\ast}\left(x, t\right), or the supremum in the case of infinitely many. We have shown that the solution is described by w\left(x, t\right) = g^{\prime}\left(y^{\ast}\left(x, t\right) \right); thus we have effectively replaced the requirement of uniqueness of the minimizer with the behavior of a specific, well-defined element of the set of minimizers, identified by analyzing the relative change of the Hopf-Lax functional at each of these points. The results have immediate application to conservation laws subject to stochastic processes. For example, if the initial condition g^{\prime}\left(x\right) is taken to be Brownian motion, then the solution at time t>0 is given by w\left(x, t\right) = g^{\prime}\left(y^{\ast}\left(x, t\right) \right) . In the case of Brownian motion [1,2,24] with fixed value 0 at x = 0, one obtains that the mean and variance at time t are 0 and y^{\ast}\left(x, t\right), respectively.
For each t>0, we know that y^{\ast}\left(x, t\right) is an increasing function of x from Theorem 4.5. Since the variance of Brownian motion also increases as \left\vert x\right\vert increases, we obtain the result that the increase in variance persists for all time.
This is an example of the application of these results to random initial conditions. The methodology can also provide a powerful computational tool. Computing solutions of shocks from conservation laws is a complicated task even when the initial data are regular. When one has random initial data, e.g. Brownian motion (or even less regular randomness), the difficulties are compounded.
The results we have obtained suggest a computational method that amounts to determining the minimum of the function tL\left(\frac{x-y}{t}\right) +g\left(y\right) . In this expression, the first term can be regarded as a deterministic slope, while the second is an integrated Brownian motion that can easily be approximated by a discrete stochastic process. In this way one can obtain the probabilistic features of the solution w\left(x, t\right) without tracking and maintaining the shock statistics. In a future paper, we plan to address in detail the application of these results to an array of stochastic processes.
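As a proof of concept, the following sketch (continuing the ones above, with the same hypothetical flux and grids) approximates g^{\prime} by a discrete two-sided Brownian path pinned at 0, forms g by cumulative summation, and reads off w\left(x, t\right) = g^{\prime}\left(y^{\ast}\left(x, t\right) \right) at the largest minimizer, so that the statistics of the solution can be estimated by Monte Carlo with no shock tracking. The random-walk discretization is a standard approximation, not the text's construction.

```python
rng = np.random.default_rng(0)
DY = Y_GRID[1] - Y_GRID[0]
I0 = np.searchsorted(Y_GRID, 0.0)        # index of y = 0

def sample_w(x, t):
    """One Monte Carlo sample of w(x,t) for Brownian initial data g' = B."""
    # Two-sided Brownian path on Y_GRID, pinned so that B(0) = 0.
    B = np.cumsum(rng.normal(0.0, np.sqrt(DY), Y_GRID.size))
    B -= B[I0]
    # g(y) = int_0^y B(s) ds, approximated by a cumulative sum with g(0) = 0.
    g = np.cumsum(B) * DY
    g -= g[I0]
    vals = t * np.interp((x - Y_GRID) / t, P_GRID, L_TAB) + g
    near = np.where(vals <= vals.min() + 1e-9)[0]
    return B[near[-1]]                   # w = g'(y*) at the largest minimizer

samples = [sample_w(0.8, 1.0) for _ in range(200)]
print(np.mean(samples), np.var(samples))  # empirical statistics of w(x,t)
```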
The author declares no conflict of interest in this paper.
[1] | G. H. Hardy, J. E. Littlewood, G. Pólya, Inequalities, 2 Eds., Cambridge: Cambridge University Press, 1967. |
[2] | D. S. Mitrinović, J. E. Pečarić, A. M. Fink, Inequalities involving functions and their integrals and derivatives, Dordrecht: Springer, 1991. https://doi.org/10.1007/978-94-011-3562-7 |
[3] | B. C. Yang, The norm of operator and Hilbert-type inequalities, (Chinese), Science Press, 2009. |
[4] | T. Batbold, M. Krnić, J. Pečarić, P. Vuković, Further development of Hilbert-type inequalities, Zagreb: Element, 2017. |
[5] | B. C. Yang, M. T. Rassias, On Hilbert-type and Hardy-type integral inequalities and applications, Switzerland: Springer, 2019. https://doi.org/10.1007/978-3-030-29268-3 |
[6] | Y. Hong, B. He, Theory and applications of Hilbert-type inequalities, (Chinese), Beijing: Science Press, 2023. |
[7] | Q. Liu, W. B. Sun, A Hilbert-type integral inequality with the mixed kernel of multi-parameters, C. R. Math., 351 (2013), 605–611. https://doi.org/10.1016/j.crma.2013.09.001 |
[8] | Q. Liu, A Hilbert-type integral inequality under configuring free power and its applications, J. Inequal. Appl., 2019 (2019), 91. https://doi.org/10.1186/s13660-019-2039-1 |
[9] | Q. Liu, On a mixed kernel Hilbert-type integral inequality and its operator expressions with norm, Math. Method. Appl. Sci., 44 (2021), 593–604. https://doi.org/10.1002/mma.6766 |
[10] | M. T. Rassias, B. C. Yang, A. Raigorodskii, On a more accurate reverse Hilbert-type inequality in the whole plane, J. Math. Inequal., 14 (2020), 1359–1374. https://doi.org/10.7153/jmi-2020-14-88 |
[11] | X. S. Huang, B. C. Yang, C. M. Huang, On a reverse Hardy-Hilbert-type integral inequality involving derivative function of higher order, J. Inequal. Appl., 2023 (2023), 60. https://doi.org/10.1186/s13660-023-02971-9 |
[12] | B. C. Yang, M. T. Rassias, A new Hardy-Hilbert-type integral inequality involving one multiple upper limit function and one derivative function of higher order, Axioms, 12 (2023), 499. https://doi.org/10.3390/axioms12050499 |
[13] | Y. Hong, Q. Chen, Equivalence condition for the best matching parameters of multiple integral operator with generalized homogeneous kernel and applications, (Chinese), Scientia Sinica Mathematica, 53 (2023), 717–728. https://doi.org/10.1360/SSM-2021-0149 |
[14] | Y. Hong, Y. R. Zong, B. C. Yang, A more accurate half-discrete multidimension Hilbert-type inequality involving one multiple upper limit function, Axioms, 12 (2023), 211. https://doi.org/10.3390/axioms12020211 |
[15] | X. Y. Huang, S. H. Wu, B. C. Yang, A Hardy-Hilbert-type inequality involving modified coefficients and partial sums, AIMS Math., 7 (2022), 6294–6310. https://doi.org/10.3934/math.2022350 |
[16] | X. J. Yang, Local fractional functional analysis and its applications, Hong Kong: Asian Academic Publisher Limited, 2011. |
[17] | X. J. Yang, Advanced local fractional calculus and its applications, New York: World Science Publisher, 2012. |
[18] | X. J. Yang, D. Baleanu, H. M. Srivastava, Local fractional integral transforms and their applications, New York: Academic Press, 2015. https://doi.org/10.1016/C2014-0-04768-5 |
[19] | G.-S. Chen, H. M. Srivastava, P. Wang, W. Wei, Some further generalizations of Hölder's inequality and related results on fractal space, Abstr. Appl. Anal., 2014 (2014), 832802. https://doi.org/10.1155/2014/832802 |
[20] | X. J. Yang, F. Gao, H. M. Srivastava, Exact traveling wave solutions for the local fractional two-dimensional Burgers-type equations, Comput. Math. Appl., 73 (2017), 203–210. https://doi.org/10.1016/j.camwa.2016.11.012 |
[21] | Y. Y. Feng, X. J. Yang, J. G. Liu, Z. Q. Chen, New perspective aimed at local fractional order memristor model on Cantor sets, Fractals, 29 (2021), 2150011. https://doi.org/10.1142/S0218348X21500110 |
[22] | X. J. Yang, L. L. Geng, Y. R. Fan, New special functions applied to represent the Weierstrass-Mandelbrot function, Fractals, 32 (2024), 2340113. https://doi.org/10.1142/S0218348X23401138 |
[23] | Q. Liu, W. B. Sun, A Hilbert-type fractal integral inequality and its applications, J. Inequal. Appl., 2017 (2017), 83. https://doi.org/10.1186/s13660-017-1360-9 |
[24] | Q. Liu, D. Z. Chen, A Hilbert-type integral inequality on the fractal space, Integr. Trans. Spec. Funct., 28 (2017), 772–780. https://doi.org/10.1080/10652469.2017.1359588 |
[25] | Y. D. Liu, Q. Liu, Generalization of Yang-Hardy-Hilbert's integral inequality on the fractal set \mathbb{R}_{+}^{\alpha}, Fractals, 30 (2022), 2250017. https://doi.org/10.1142/S0218348X22500177 |
[26] | Q. Liu, A Hilbert-type fractional integral inequality with the kernel of Mittag-Leffler function and its applications, Math. Inequal. Appl., 21 (2018), 729–737. https://doi.org/10.7153/mia-2018-21-52 |
[27] | Y. D. Liu, Q. Liu, A Hilbert-type local fractional integral inequality with the kernel of a hyperbolic cosecant function, Fractals, 32 (2024), 2440028. https://doi.org/10.1142/S0218348X24400280 |
[28] | Y. D. Liu, Q. Liu, The structural features of Hilbert-type local fractional integral inequalities with abstract homogeneous kernel and its applications, Fractals, 28 (2020), 2050111. https://doi.org/10.1142/S0218348X2050111X |
[29] | D. Baleanu, M. Krnić, P. Vuković, A class of fractal Hilbert-type inequalities obtained via Cantor-type spherical coordinates, Math. Method. Appl. Sci., 44 (2021), 6195–6208. https://doi.org/10.1002/mma.7180 |
[30] | P. Vuković, Some local fractional Hilbert-type inequalities, Fractal Fract., 7 (2023), 205. https://doi.org/10.3390/fractalfract7020205 |
[31] | T. Batbold, M. Krnić, P. Vuković, A unified approach to fractal Hilbert-type inequalities, J. Inequal. Appl., 2019 (2019), 117. https://doi.org/10.1186/s13660-019-2076-9 |
[32] | Z. S. Huang, D. R. Guo, An introduction to special function, (Chinese), Beijing: Beijing University Press, 2000. |
[33] | J. C. Kuang, Introduction to real analysis, (Chinese), Changsha: Hunan Education Press, 2000. |