A key issue in current federated learning research is how to improve the performance of federated learning algorithms by reducing communication overhead and computing costs while ensuring data privacy. This paper proposes an efficient wireless transmission scheme, termed the subsampling privacy-enabled Rényi differential privacy (RDP) wireless transmission system (SS-RDP-WTS), which not only reduces the communication and computing overhead of the learning process but also enhances the privacy protection of federated learning. We prove the convergence of our scheme, analyze its privacy guarantee, and demonstrate its performance on the Modified National Institute of Standards and Technology database (MNIST) and the Canadian Institute for Advanced Research 10-class dataset (CIFAR10).
Citation: Qingjie Tan, Xujun Che, Shuhui Wu, Yaguan Qian, Yuanhong Tao. Privacy amplification for wireless federated learning with Rényi differential privacy and subsampling[J]. Electronic Research Archive, 2023, 31(11): 7021-7039. doi: 10.3934/era.2023356
The classical isocapacitary inequality states that among sets which share the same amount of Lebesgue measure, balls minimize the electrostatic (Newtonian) capacity, that is, for any measurable set Ω with finite measure, the following scale invariant inequality holds true
|Ω|(2−n)/ncap(Ω)≥|B|(2−n)/ncap(B). | (1.1) |
Here |⋅| stands for the n−dimensional Lebesgue measure, n≥3, B is any ball in Rn and cap(⋅) is the standard electrostatic capacity in Rn, defined for compact sets as
cap(Ω):=inf{∫Rn|∇u|2dx:u∈C∞c(Rn),u≥1in Ω}. | (1.2) |
We observe that (1.1) can be rephrased in terms of the isocapacitary deficit, by saying that
dcap(Ω):=|Ω|(2−n)/ncap(Ω)|B|(2−n)/ncap(B)−1≥0. | (1.3) |
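For convenience, we also record the standard scaling computation behind this normalization (a short check, not part of the original text). If u is admissible for cap(Ω) and u_λ(x):=u(x/λ) with λ>0, then
\[
\int_{\mathbb{R}^n}|\nabla u_\lambda|^2\,dx=\lambda^{\,n-2}\int_{\mathbb{R}^n}|\nabla u|^2\,dx,
\qquad\text{hence}\qquad
\mathrm{cap}(\lambda\Omega)=\lambda^{\,n-2}\,\mathrm{cap}(\Omega),
\]
while |λΩ|=λ^n|Ω|, so the product |Ω|^{(2−n)/n}cap(Ω), and therefore the deficit dcap(Ω), is invariant under dilations (and, trivially, under translations).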
It is well known that the isocapacitary inequality is rigid, in the sense that dcap(Ω) vanishes if and only if Ω is equivalent to a ball up to a set of null Lebesgue measure. It is therefore natural to look for a quantitative stability version of (1.3). There are several geometric quantities that can properly measure the difference between a generic set and a ball with the same volume. The most natural one is the so-called Fraenkel asymmetry, first proposed by L. E. Fraenkel, given by
A(Ω)=inf{|ΩΔB||Ω|:B is a ball with |B|=|Ω|}. |
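As a purely illustrative numerical aside (not part of the original paper), the following Python sketch estimates an upper bound on A(Ω) for a planar ellipse by Monte Carlo, using the origin-centered disk of equal area as competitor; since the infimum in the definition runs over all ball centers, this only yields an upper bound on the asymmetry. The semi-axes a, b and the sample size are arbitrary choices made for the illustration.

import numpy as np

# Upper bound on the Fraenkel asymmetry of the ellipse {x^2/a^2 + y^2/b^2 <= 1} in R^2,
# using the origin-centered disk of equal area as a (not necessarily optimal) competitor.
rng = np.random.default_rng(0)
a, b = 2.0, 0.5                      # semi-axes of the ellipse (arbitrary)
r = np.sqrt(a * b)                   # radius of the disk with the same area
L = max(a, b, r)                     # half-side of a bounding box containing both sets
N = 1_000_000                        # number of Monte Carlo samples

pts = rng.uniform(-L, L, size=(N, 2))
in_ellipse = (pts[:, 0] / a) ** 2 + (pts[:, 1] / b) ** 2 <= 1.0
in_disk = pts[:, 0] ** 2 + pts[:, 1] ** 2 <= r ** 2

box_area = (2 * L) ** 2
sym_diff_area = np.mean(in_ellipse ^ in_disk) * box_area   # estimate of the symmetric difference area
omega_area = np.pi * a * b                                 # |Omega|, known exactly here

print("Monte Carlo upper bound on A(Omega):", sym_diff_area / omega_area)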
The first attempts in this direction were made in the '90s. In particular, in [23] stability inequalities of the form
dcap(Ω)≥CnA(Ω)n+1 |
were proved, restricting to the class of convex sets when n≥3*. Nevertheless, in [23] the optimal exponent was conjectured to be 2, that is
dcap(Ω)≥C′nA(Ω)2, | (1.4) |
*In the planar case the suitable capacity is the logarithmic capacity.
which is asymptotically sharp for small asymmetries. Inequality (1.4) was proved in the planar case in [25,Corollary 2] (see also [2] and [24] for related results with other notions of deficiencies). As far as higher dimensions are concerned, (1.4) was proved by Fraenkel in [18] for starshaped sets, while in [21] the authors provided the inequality (1.4) with a suboptimal exponent but for general sets, i.e.,
dcap(Ω)≥C″nA(Ω)4. | (1.5) |
The conjecture in its full generality was finally established in [29]. It is worth stressing that, to get this result, the authors need to exploit the suboptimal inequality (1.5). We finally mention [28], where the author treated the case of the p-capacity and proved the corresponding sharp inequality. We point out that the approach followed in [28,29], while leading to the sharp exponent 2, does not allow one to work out the explicit constant which multiplies the asymmetry. On the contrary, inequality (1.5) is not sharp for small values of A(Ω), but it comes with an explicit constant C″n>0.
In this work we tackle the problem of quantification of the isocapacitary inequality in the fractional framework.
Let s∈(0,1) and let n>2s. We consider the fractional generalization of the capacity, defined for compact sets as follows
caps(Ω)=inf{[u]2s:u∈C∞c(Rn),u≥1on Ω}, | (1.6) |
where
[u]s:=(∫R2n|u(x)−u(y)|2|x−y|n+2sdxdy)12 |
denotes the fractional Gagliardo seminorm of order s. The definition of fractional capacity of a general closed set Ω⊆Rn is given in Definition 2.1, which can be easily proved to be equivalent to (1.6) when Ω is compact. As a straightforward consequence of the fractional analogue of the Pólya-Szegö inequality (proved in [1,Theorem 9.2], see also Proposition 2.10 below for an "extended" version) one can easily derive the fractional isocapacitary inequality, stating that
|Ω|(2s−n)/ncaps(Ω)≥|B|(2s−n)/ncaps(B), | (1.7) |
for any closed Ω⊆Rn with finite measure and for any closed ball B. The aim of this work is to quantify the fractional isocapacitary deficit
dcaps(Ω):=|Ω|(2s−n)/ncaps(Ω)|B|(2s−n)/ncaps(B)−1 |
in terms of the asymmetry of Ω. We point out that, in view of the scaling properties of the fractional capacity, the term
|B|(2s−n)/ncaps(B) |
is a universal constant, not depending on the choice of the ball B. It is also worth remarking that caps(⋅) can be defined, through (1.6), on open sets O, and its value coincides with caps(¯O).
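For the reader's convenience, here is the scaling computation behind this observation (standard, and not spelled out in the text). For u∈C∞c(Rn) and λ>0, set u_λ(x):=u(x/λ); a change of variables gives
\[
[u_\lambda]_s^2=\lambda^{\,n-2s}\,[u]_s^2,
\qquad\text{so that}\qquad
\mathrm{cap}_s(\lambda\Omega)=\lambda^{\,n-2s}\,\mathrm{cap}_s(\Omega),
\]
while |λΩ|^{(2s−n)/n}=λ^{2s−n}|Ω|^{(2s−n)/n}. Hence |Ω|^{(2s−n)/n}caps(Ω) is invariant under dilations and translations, and since any two balls are related by a dilation and a translation, the quantity |B|^{(2s−n)/n}caps(B) is the same for every ball B.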
The fractional Pólya-Szegö inequality entails the rigidity of inequality (1.7), in the sense that the equality holds if and only if Ω is a ball in Rn, see [19,Theorem A.1]. We now present our main result, which amounts to a quantitative stability inequality for the fractional capacity.
Theorem 1.1. Let s∈(0,1) and n>2s. There exists a constant Cn,s>0, depending only on n and s, such that for any closed set Ω⊂Rn with finite measure, there holds
dcaps(Ω)=|Ω|(2s−n)/ncaps(Ω)|B|(2s−n)/ncaps(B)−1≥Cn,sA(Ω)^{3/s}. | (1.8) |
Moreover the constant Cn,s can be explicitly computed, see Remark 3.5.
Our second result investigates the asymptotic behaviour of the function s↦caps(Ω) when s→1−, for a compact set Ω⊆Rn. In particular, we obtain that a suitable normalization of caps behaves like the standard capacity as s→1− (see (4.1) for the precise definition of the classical notion of capacity).
Proposition 1.2. Let n≥3, then for every Ω⊂Rn compact set, we have
lim sups↗1(1−s)caps(Ω)≤ωn2cap(Ω), | (1.9) |
where ωn:=|B1| and B1 denotes the unit ball in Rn. If, in addition, Ω is the closure of an open bounded set with Lipschitz boundary, then
lims↗1(1−s)caps(Ω)=ωn2cap(Ω). | (1.10) |
We observe that the exponent 3/s appearing in (1.8) is likely not sharp†. Nevertheless, since the constant Cn,s in (1.8) can be explicitly computed (see Remark 3.5), together with its limit as s→1− (see Remark 4.1), our result entails an improvement of (1.5) asymptotically as s→1−. In particular, thanks also to Proposition 1.2 and Lemma 3.3, we are able to state the following.
†The optimal exponent was conjectured to be 2 in the fractional case as well.
Corollary 1.3. Let n≥3 and Ω⊆Rn be a closed set with finite measure. Then (1.8) is stable as s→1− and there holds
dcap(Ω)≥CnA(Ω)3, |
for some Cn>0 depending only on n, whose explicit value can be found in Remark 4.1.
Our proof is inspired by that in [3] where the authors deal with the (non-sharp) quantitative stability of the first eigenvalue of the fractional Laplacian with homogeneous Dirichlet exterior conditions. Such a result, in turn, relies on ideas established in [2,21]. Here, we provide a sketch of these arguments, starting with the classical case and then trying to emphasize the differences occurring in the fractional framework.
It is well known that, for any closed Ω⊆Rn with finite measure, there exists a unique function 0≤uΩ≤1, belonging to a suitable functional space, that achieves cap(Ω). Such a function is called the capacitary potential of Ω. First, by means of the coarea formula one gets
cap(Ω)∼∫Rn|∇uΩ|2dx∼∫10(∫{uΩ=t}|∇uΩ|dHn−1)dt. |
The right-hand side of the latter equality, after some manipulation can be written in terms of the perimeter of the superlevel sets {uΩ≥t}, i.e.,
cap(Ω)∼∫10P({uΩ≥t})2f1(t)dt,
where f1 is a suitably chosen real function depending only on the measure of {uΩ≥t}. The idea is then to exploit the sharp quantitative isoperimetric inequality [13,17,20]
dPer(E)∼|E|(n−1)/nP(E)−|B|(n−1)/nP(B)≳A(E)2, |
holding for any E⊆Rn in a suitable class and for any ball B⊆Rn. Plugging this into the previous estimate, after some further manipulation, one gets
dcap(Ω)≳∫10A({u≥t})2f2(t)dt, |
where f2 is an explicit positive integrable function depending only on the size of the superlevels of uΩ. Then we reason in the spirit of [2]:
heuristically, as long as t is close to ‖uΩ‖L∞(Rn)=1, we expect the set {uΩ≥t} to be close to Ω in L1 and A({uΩ≥t})∼A(Ω); the idea is then to look for a threshold T such that, at once,
● A({u≥t})∼A(Ω) as long as t∈(T,1) and
● the quantity ∫1Tf2(t)dt is proportional to a power of A(Ω).
The previous two properties lead directly to the sought inequality. A bit more precisely, if the threshold T is such that 1−T≳A(Ω), then the above strategy works, while if 1−T≲A(Ω), then the fact that the asymmetry is large (with respect to 1−T) allows us, by a simple comparison argument, to get an asymptotically stronger inequality of the form
dcap(Ω)≳A(Ω). |
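The coarea/Cauchy-Schwarz step behind the displays above can be sketched as follows (a standard computation, added here for convenience and not taken verbatim from the original text; we write μ(t):=|{uΩ>t}|). For a.e. t∈(0,1),
\[
P(\{u_\Omega>t\})^2\le\Big(\int_{\{u_\Omega=t\}}|\nabla u_\Omega|\,d\mathcal H^{n-1}\Big)\Big(\int_{\{u_\Omega=t\}}\frac{d\mathcal H^{n-1}}{|\nabla u_\Omega|}\Big)\le\Big(\int_{\{u_\Omega=t\}}|\nabla u_\Omega|\,d\mathcal H^{n-1}\Big)\big(-\mu'(t)\big),
\]
since −μ′(t) is at least the last level-set integral for a.e. t. Integrating in t and using the coarea formula once more,
\[
\mathrm{cap}(\Omega)=\int_{\mathbb R^n}|\nabla u_\Omega|^2\,dx=\int_0^1\!\!\int_{\{u_\Omega=t\}}|\nabla u_\Omega|\,d\mathcal H^{n-1}\,dt\ \ge\ \int_0^1\frac{P(\{u_\Omega>t\})^2}{-\mu'(t)}\,dt,
\]
which is the kind of bound from which the weight f1 (and, after further manipulation, f2) is extracted.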
In the fractional case the existence of a capacitary potential uΩ is guaranteed as well, see Remark 2.2. However, the arguments described above cannot be directly implemented in the fractional scenario, due to nonlocal effects. Indeed, the very first step fails, since a suitable coarea formula for non-integer Sobolev spaces is missing. A way to overcome these difficulties is provided by the so-called Caffarelli-Silvestre extension for functions in fractional Sobolev spaces. Loosely speaking, this tool allows us to interpret nonlocal energies of functions defined on Rn as local energies of functions depending on one more variable. Namely, one can prove a characterization of the s-capacity of the form
caps(Ω)∼inf{∫Rn+1+z1−2s|∇U(x,z)|2dxdz:U(x,0)=uΩ(x)}, | (1.11) |
where
Rn+1+:={(x,z):x∈Rn,z>0} |
and U varies in a suitable functional space on Rn+1+. Moreover, one can prove that the infimum in (1.11) is uniquely achieved by a function 0≤UΩ≤1. We refer to Section 2.2 for the precise setting and definitions. At this point, the above strategy may be applied on every horizontal slice {(x,z):x∈Rn} and with UΩ(⋅,z) in place of uΩ. This way, we end up with
dcaps(Ω)≳∫∞0z1−2s∫10A({U(⋅,z)≥t})2fz(t)dtdz |
where, again, fz is an explicit real-valued function depending on the measure of the superlevels of U(⋅,z). Here the extra difficulty due to the presence of the integral in the z−variable becomes evident. To get rid of this latter problem, we adapt ideas in [3] to show the existence of a good interval (0,z0), on which the (asymmetries of the) superlevels of UΩ(⋅,z) are close to (those of) the superlevels of uΩ, leading to
dcaps(Ω)≳∫z00z1−2s∫10A({UΩ(⋅,z)≥t})2fz(t)dtdz∼∫10A({uΩ≥t})2fz(t)dt |
and hence conclude similarly as in the classical local case.
In this section we introduce some prerequisites that are necessary in order to prove our main result.
First of all, we precisely define the functional setting we work in. For any open set O⊆Rn, we consider the homogeneous fractional Sobolev space Ds,2(O), defined as the completion of C∞c(O) with respect to the Gagliardo seminorm of order s
[u]s=(∫R2n|u(x)−u(y)|2|x−y|n+2sdxdy)12. |
The space Ds,2(O) is a Hilbert space, naturally endowed with the following scalar product
(u,v)Ds,2(Rn):=∫R2n(u(x)−u(y))(v(x)−v(y))|x−y|n+2sdxdy. |
Moreover, by trivial extension we have that Ds,2(O) is continuously embedded in Ds,2(Rn). We refer to [8] and [5] for more details concerning fractional homogeneous spaces and their characterizations. We limit ourselves to recalling the fractional Sobolev inequality, which reads as follows:
Sn,s‖u‖2L2∗s(Rn)≤[u]2sfor all u∈Ds,2(Rn), | (2.1) |
where
2∗s:=2nn−2s |
denotes the critical Sobolev exponent in the fractional framework and Sn,s>0 denotes the best constant in the inequality. In particular, this result ensures the continuity of the embedding
Ds,2(Rn)↪L2∗s(Rn) |
and it provides the following characterization
Ds,2(Rn)={u∈L2∗s(Rn):[u]s<∞}. | (2.2) |
We refer to [14,Theorem 1.1] (see also [30,Theorem 7] in the Appendix) and to [5,Theorem 3.1] for the proofs of (2.1) and (2.2), respectively.
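As a quick sanity check (a standard remark, added here for the reader's convenience), the value of 2∗s is the only exponent compatible with the scaling of (2.1): with u_λ(x):=u(x/λ) one has [u_λ]_s^2=λ^{n−2s}[u]_s^2 and ‖u_λ‖_{L^p(Rn)}^2=λ^{2n/p}‖u‖_{L^p(Rn)}^2, and the two sides of (2.1) scale in the same way if and only if 2n/p=n−2s, that is, p=2n/(n−2s)=2∗s.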
We now introduce the definition of fractional capacity of a closed subset of Rn.
Definition 2.1. Let Ω⊆Rn be closed and let ηΩ∈C∞c(Rn) be such that ηΩ=1 in an open neighbourhood of Ω. We define the fractional capacity of order s (or s-capacity) of the set Ω as follows:
caps(Ω):=inf{[u]2s:u∈Ds,2(Rn),u−ηΩ∈Ds,2(Rn∖Ω)}. |
First of all, we point out that the above definition does not depend on the choice of the cut-off function ηΩ. Indeed, if ˜ηΩ∈C∞c(Rn) satisfies ˜ηΩ=1 in a neighbourhood of Ω, then trivially
ηΩ+Ds,2(Rn∖Ω)=˜ηΩ+Ds,2(Rn∖Ω). |
We also observe that, if Ω⊆Rn is a compact set, then, by a simple regularization argument, one can easily prove that
caps(Ω)=inf{[u]2s:u∈C∞c(Rn),u≥1in Ω}. |
We also point out that it is not restrictive to assume that the admissible competitors u in the definition of caps(Ω) satisfy
0≤u≤1,a.e. in Rn, |
since
[u+∧1]s≤[u]sfor all u∈Ds,2(Rn), |
where u+ denotes the positive part of u and a∧b=min{a,b}. We refer to Lemmas 2.6 and 2.7 in [32] for the proofs.
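In fact, the quoted inequality [u+∧1]s≤[u]s can also be seen directly (a one-line check, added here for convenience and not contained in the original text): the map t↦t+∧1=min{max{t,0},1} is 1-Lipschitz, so that
\[
|(u(x))_+\wedge 1-(u(y))_+\wedge 1|\le|u(x)-u(y)|\qquad\text{for all }x,y\in\mathbb R^n,
\]
and integrating this pointwise bound against |x−y|^{−n−2s} over R2n gives the claim.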
Remark 2.2. By direct methods of the calculus of variations, it is easy to check that caps(Ω) is uniquely achieved (when caps(Ω)<∞) by a function u∈Ds,2(Rn) such that u−ηΩ∈Ds,2(Rn∖Ω). Hereafter, we denote such a function by uΩ and we call it the s-capacitary potential (or simply the capacitary potential) associated with Ω. Moreover, it is easy to observe that 0≤uΩ≤1 a.e. in Rn and that uΩ satisfies a variational equation, that is
(uΩ,φ)Ds,2(Rn)=0,for all φ∈Ds,2(Rn∖Ω). |
The notion of s-capacity of a set is in relation with the fractional Laplace operator of order s, which is defined, for u∈C∞c(Rn), as follows
(−Δ)su(x):=2P.V.∫Rnu(x)−u(y)|x−y|n+2sdy=2limr→0+∫{|x−y|>r}u(x)−u(y)|x−y|n+2sdy, |
where P.V. means that the integral has to be understood in the principal value sense. It is natural to extend the definition of the fractional Laplacian to any function in Ds,2(Rn) in a distributional sense. More precisely, given u∈Ds,2(Rn), we have that (−Δ)su∈(Ds,2(Rn))∗ (with (Ds,2(Rn))∗ denoting the dual of Ds,2(Rn)) and it acts as follows
(Ds,2(Rn))∗⟨(−Δ)su,v⟩Ds,2(Rn)=(u,v)Ds,2(Rn),for all v∈Ds,2(Rn). |
Therefore, in view of Remark 2.2, we can say that the capacitary potential uΩ∈Ds,2(Rn) weakly satisfies
{(−Δ)suΩ=0,in Rn∖Ω,uΩ=1,in Ω. |
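For smooth, compactly supported functions, the consistency between the pointwise definition of (−Δ)s above and the bilinear form used in its distributional extension can be checked directly; we record the standard computation here for convenience (it is not part of the original text). For u,v∈C∞c(Rn), interpreting the inner integral in the principal value sense,
\[
\int_{\mathbb R^n}(-\Delta)^s u\,v\,dx=2\int_{\mathbb R^{2n}}\frac{(u(x)-u(y))\,v(x)}{|x-y|^{n+2s}}\,dx\,dy=\int_{\mathbb R^{2n}}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}\,dx\,dy=(u,v)_{\mathcal D^{s,2}(\mathbb R^n)},
\]
where the second equality follows by averaging the middle expression with the one obtained by swapping the roles of x and y.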
The proof of our main result strongly relies on an extension procedure for functions in fractional Sobolev spaces, first established in [11], which, in some sense, allows one to avoid some nonlocal issues and recover a local framework. Such a procedure is commonly called the Caffarelli-Silvestre extension. In this paragraph, we introduce the functional spaces emerging in the extended formulation and we discuss some of their properties, also in relation to the s-capacity of a set. We remark that, since the Caffarelli-Silvestre extension is by now a classical tool, the results we present here can be regarded as folklore. Still, to the best of our knowledge, no explicit proofs are available in the literature, hence we decided to report them here.
For any closed set K⊆∂Rn+1+≃Rn, we define the space D1,2(¯Rn+1+∖K;z1−2s) as the completion of C∞c(¯Rn+1+∖K) with respect to the norm
‖U‖D1,2(¯Rn+1+∖K;z1−2s):=(∫Rn+1+z1−2s|∇U|2dxdz)12. |
However hereafter we simply write D1,2z(¯Rn+1+∖K) in place of D1,2(¯Rn+1+∖K;z1−2s).
We have that D1,2z(¯Rn+1+∖K) is a Hilbert space with respect to the scalar product
(U,V)D1,2z(¯Rn+1+∖K):=∫Rn+1+z1−2s∇U⋅∇Vdxdz. |
First of all we show that the space D1,2z(¯Rn+1+∖K) is a well-defined functional space, which is not obvious. In order to prove this, it is sufficient to prove it for D1,2z(¯Rn+1+), in view of the continuous embedding
D1,2z(¯Rn+1+∖K)↪D1,2z(¯Rn+1+). |
To show this, we first recall from [15,Proposition 3.3] the following weighted Sobolev inequality
(∫Rn+1+z1−2s|U|2γdxdz)12γ≤S′n,s(∫Rn+1+z1−2s|∇U|2dxdz)12for all U∈D1,2z(¯Rn+1+), | (2.3) |
where S′n,s is a positive constant and γ:=1+2n−2s.
In particular this inequality yields the following continuous embedding
D1,2z(¯Rn+1+)↪L2γ(Rn+1+;z1−2s), |
where
L2γ(Rn+1+;z1−2s):={U∈L1loc(Rn+1+):∫Rn+1+z1−2s|U|2γdxdz<∞}. |
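The specific value of γ can be recovered by a scaling argument (a standard check, included here for convenience and not contained in the original text). With U_λ(x,z):=U(x/λ,z/λ), a change of variables gives
\[
\int_{\mathbb R^{n+1}_+}z^{1-2s}|U_\lambda|^{2\gamma}\,dx\,dz=\lambda^{\,n+2-2s}\int_{\mathbb R^{n+1}_+}z^{1-2s}|U|^{2\gamma}\,dx\,dz,
\qquad
\int_{\mathbb R^{n+1}_+}z^{1-2s}|\nabla U_\lambda|^{2}\,dx\,dz=\lambda^{\,n-2s}\int_{\mathbb R^{n+1}_+}z^{1-2s}|\nabla U|^{2}\,dx\,dz,
\]
so that the two sides of (2.3) scale in the same way if and only if (n+2−2s)/(2γ)=(n−2s)/2, that is, γ=(n+2−2s)/(n−2s)=1+2/(n−2s).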
We now provide a characterization of D1,2z(¯Rn+1+) as a concrete functional space.
Proposition 2.3. The space D1,2z(¯Rn+1+) is a functional space. In particular there holds
D1,2z(¯Rn+1+)={U∈L2γ(Rn+1+;z1−2s):‖U‖D1,2z(¯Rn+1+)<+∞}. |
Proof. The fact that
D1,2z(¯Rn+1+)⊆{U∈L2γ(Rn+1+;z1−2s):‖U‖D1,2z(¯Rn+1+)<+∞} |
immediately follows from (2.3). We now prove the reverse inclusion. Namely, we show that any function
U∈L2γ(Rn+1+;z1−2s) | (2.4) |
such that
‖U‖D1,2z(¯Rn+1+)<+∞ | (2.5) |
can be approximated by functions in C∞c(¯Rn+1+) in the topology induced by the norm ‖⋅‖D1,2z(¯Rn+1+). First, suppose that U is compactly supported in ¯Rn+1+ and let
˜U(x,z):={U(x,z),if z>0,U(x,−z),if z<0. |
Moreover, we let {ρε}ε>0 be a family of mollifiers‡ in Rn+1 and we set
Uε=˜U⋆ρε|¯Rn+1+. |
‡We call mollifier a smooth, symmetric decreasing, positive and compactly supported function, which converges in distribution to a centered Dirac measure δ, as ε→0, and such that ‖ρε‖L1(Rn+1)=1.
Clearly Uε converges pointwise to U in ¯Rn+1, as ε→0. Moreover, it is equibounded in D1,2z(¯Rn+1+). Thus we easily conclude by means of the dominated convergence theorem. We now consider the general case. Fix ε>0 and let UR=UηR, where ηR is the restriction to ¯Rn+1+ of a radial, smooth cut-off function defined on Rn+1 such that ηR=1 on BR, ηR=0 on Rn+1∖B2R and supRn+1|∇ηR|≤4R−1. Since UR∈D1,2z(¯Rn+1+), by the previous step there exists VR∈C∞c(¯Rn+1+) such that ‖UR−VR‖D1,2z(¯Rn+1+)≤ε/2, so that, by the triangle inequality
‖U−VR‖D1,2z(¯Rn+1+)≤‖U−UR‖D1,2z(¯Rn+1+)+ε2. |
We are left to show that UR→U in D1,2z(¯Rn+1+), as R→∞. In view of the properties of ηR, we have that
‖U−UR‖2D1,2z(¯Rn+1+)=∫Rn+1+z1−2s|∇U−∇(UηR)|2dxdz≤2∫Rn+1+z1−2s|∇U−ηR∇U|2dxdz+2∫Rn+1+z1−2s|U∇ηR|2dxdz≤2∫Rn+1+∖B+Rz1−2s|∇U|2dxdz+32R2∫B+2R∖B+Rz1−2s|U|2dxdz, |
where B+r:=Br∩Rn+1+. Thanks to (2.5), the first term on the right-hand side of the above inequalities vanishes as R tends to infinity, so it can be made smaller than ε/4. As for the second term, by the Hölder inequality we obtain that
∫B+2R∖B+Rz1−2s|U|2dxdz≤(∫B+2R∖B+Rz1−2sdxdz)(γ−1)/γ(∫B+2R∖B+Rz1−2s|U|2γdxdz)1/γ. |
By (2.4) we have that
∫B+2R∖B+Rz1−2s|U|2γdxdz→0,as R→∞, |
while, by an explicit computation, also recalling that γ=1+2/(n−2s), one gets that
supR≥11R2(∫B+2R∖B+Rz1−2sdxdz)(γ−1)/γ<+∞. |
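The explicit computation behind the last bound can be spelled out as follows (included here for the reader's convenience). Since B+2R∖B+R is contained in {|x|<2R, 0<z<2R},
\[
\int_{B^+_{2R}\setminus B^+_R}z^{1-2s}\,dx\,dz\ \le\ \int_{\{|x|<2R\}}dx\int_0^{2R}z^{1-2s}\,dz\ =\ \omega_n(2R)^n\,\frac{(2R)^{2-2s}}{2-2s}\ =\ c_{n,s}\,R^{\,n+2-2s}
\]
for a constant c_{n,s}>0 depending only on n and s; since (γ−1)/γ=2/(n+2−2s), raising this bound to the power (γ−1)/γ yields a quantity bounded by a constant times R^2, which gives the claimed uniform bound after division by R^2.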
Hence we can choose R large enough so that
16R2∫B+2R∖B+Rz1−2sU2dxdz≤ε/4, |
and conclude that
‖U−VR‖D1,2z(¯Rn+1+)≤ε. |
Another fundamental fact that relates the space D1,2z(¯Rn+1+) with Ds,2(Rn) is the existence of a trace map from the former to the latter. Before stating the precise result, we recall the following classical Hardy inequality, whose proof can be found e.g., in [26].
Lemma 2.4. Let p∈(1,∞) and a<1. Then there exists a constant C(p,a)>0 such that
∫∞0ρa|1ρ∫ρ0f(t)dt|pdρ≤C(p,a)∫∞0ρa|f(ρ)|pdρ, |
for all f∈C∞c([0,∞)).
Proposition 2.5. There exists a linear and continuous trace operator
Tr:D1,2z(¯Rn+1+)→Ds,2(Rn) |
such that Tr(U)(x)=U(x,0) for every U∈C∞c(¯Rn+1+).
Proof. Throughout the proof, we assume the space Ds,2(Rn) to be endowed with the following norm
[u]s,#:=(n∑i=1[u]2s,i)12. |
where
with denoting the unit vector in the positive variable. The norm is equivalent to , as proved in [3,Proposition B.1]. By density, it is sufficient to prove that there exists such that
(2.6) |
For , and , we rewrite
Therefore
If we integrate in the variable we obtain
where we used the fact that
for the first term and a change of variable for the second. By integration with respect to in and thanks to Lemma 2.4 (choosing and ) we infer
for some constant depending only on . If we now sum for and take the square root, we obtain (2.6), thus concluding the proof.
The next step is to prove that the trace map introduced in the proposition above is onto. We first introduce the Poisson kernel of the upper half-space , defined as
where
(2.7) |
is given in such a way that
see [22,Remark 2.2]. Essentially, a convolution with this kernel allows one to extend to the upper half-space functions that are defined on . This is done first for functions in and then extended by density to the whole . Namely, we have the following.
Proposition 2.6. Let . Then the function
(2.8) |
belongs to and
(2.9) |
where
(2.10) |
In particular the trace operator established in Proposition 2.5 is surjective.
Proof. For any , we let
It is easy to check that
(2.11) |
For the computation of the explicit constant see e.g., [10,Remark 3.11]. Moreover, by the weighted Sobolev inequality (2.3)
Hence, by Proposition 2.3, we have that . Therefore the map
is linear and continuous, thus it can be uniquely extended in the whole and (2.11) still holds. We now prove that for any . Let and be such that in as . Thanks to (2.1), we have that
which, by definition, implies that
On the other hand in as ; hence, up to a subsequence
Therefore and the proof is complete.
The following corollary, in view of the previous results, tells us that there is an isometry between the space and the subspace of containing the (unique) minimizers of a certain functional.
Corollary 2.7. Let . Then the minimization problem
admits a unique solution, which coincides with as in . Moreover
in a weak sense, that is
where is as in .
Proof. The proof is standard. For instance see [3,Proposition 2.6] for the first part and [11] for the second.
Remark 2.8. In view of the extension procedure described above, we can relate the notion of -capacity with another notion of (weighted) capacity of sets in . More precisely, with a slight abuse of notation, we call the completion of with respect to the norm
and, for any closed , we let
where is equal to in a neighborhood of . Thanks to (2.9), after an even reflection in the variable, one can see that
for any closed . Moreover the unique function achieving coincides with the even-in- reflection of , with denoting the capacitary potential of , as in Remark 2.2.
Hereafter we denote by
the restriction to the upper half-space of the potential associated to . Hence, satisfies a.e. in and weakly solves
in the sense that and
Thanks to [31,Theorem 1.1], we can observe that . Hence, in particular .
Remark 2.9. We emphasize a technical difference with respect to [3]: the functional setting we adopt here is tailored to the problem under investigation. Indeed, since the -capacity is obtained by minimizing the (nonlocal) energy, it is natural to expect the minimizer to have only a finite seminorm , together with the possibility of approximating it by smooth and compactly supported functions, at least for compact . Therefore, any further integrability assumption on the capacitary potential appears artificial. Observe that, unlike our Proposition 2.6 and Corollary 2.7, in which the trace function only needs to belong to , the analogous extension result, Proposition 2.6 of [3], requires an additional integrability assumption on .
This paragraph is devoted to the isocapacitary inequality (1.7). The classical and simplest proof of the (standard) isocapacitary inequality,
(2.12) |
is by rearrangement: given any function , its symmetric decreasing rearrangement is the radial decreasing function such that . As a consequence of the Pólya-Szegö inequality
applied to the capacitary potential of a closed one can easily see that as long as , from which, by scaling (2.12) holds as well. Indeed, the symmetric rearrangement of the capacitary potential of coincides with the potential of a ball with the same volume as . Following this path, one can prove the fractional isocapacitary inequality using symmetric rearrangements for the extended problem in .
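For the reader's convenience, we spell out this classical rearrangement argument in slightly more detail (a standard sketch, glossing over routine approximation issues; it is not taken verbatim from the original text). If u∈C∞c(Rn) is admissible for cap(Ω), i.e., u≥1 on Ω, and u^* denotes its symmetric decreasing rearrangement, then {u^*≥1} is a ball B′ with |B′|=|{u≥1}|≥|Ω|, and u^* is (up to a standard approximation) admissible for cap(B′). Hence, by the Pólya-Szegö inequality and the scale invariance recalled after (1.3),
\[
\int_{\mathbb R^n}|\nabla u|^2\,dx\ \ge\ \int_{\mathbb R^n}|\nabla u^*|^2\,dx\ \ge\ \mathrm{cap}(B')\ =\ |B'|^{(n-2)/n}\,|B|^{(2-n)/n}\mathrm{cap}(B)\ \ge\ |\Omega|^{(n-2)/n}\,|B|^{(2-n)/n}\mathrm{cap}(B),
\]
since (n−2)/n>0 and |B′|≥|Ω|; taking the infimum over all admissible u yields (2.12).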
As in [3,22], we define in the partial Schwarz symmetrization of a nonnegative function . By construction, the function is obtained by taking, for almost every , the dimensional Schwarz symmetrization of the map
More precisely: for almost every fixed , the function is defined to be the unique radially symmetric decreasing function on such that for all
Proposition 2.10. Let be a nonnegative function and let as in . Then
and the following Pólya-Szegö type inequalities hold true
(2.13) |
Moreover, we have . In particular, we get
Proof. By density, it is enough to show the result for . In that case, the proof follows directly by [3,Proposition 3.2].
Now, a proof of the fractional isocapacitary inequality (1.7) can be obtained as a direct consequence of the previous result, applied to , and Remark 2.8.
In this section we give the proof of our main result. The idea consists in introducing quantitative elements in the proof of the isocapacitary inequality established in the previous section. As already explained in the Introduction, the major inconvenience when working with the extended problem in consists in the fact that we need to transfer information on the superlevel sets of the extension of the capacitary potential for fixed , to information on the superlevel sets of its trace in . This was done in Section 4 of [3] for a problem concerning the stability of the first eigenvalue of the Dirichlet fractional Laplacian.
We start by recalling the following technical result, whose proof can be found in [3,Lemma 4.1].
Lemma 3.1. Let be two measurable subsets of of finite measure and such that
for some . Then
where
(3.1) |
The following lemma corresponds to Proposition 2.6 of [3]. Observe that, here, the extension of a function does not belong to for fixed ; nevertheless, with the same computations as in [3], we can show that it is close enough (depending on ) in to its trace . We report a proof of the lemma for the sake of completeness.
Lemma 3.2. For any , denoting by its extension, there holds
where is given in .
Proof. We first prove the following preliminary fact
(3.2) |
where . Indeed, multiplying and dividing by and applying the Cauchy-Schwarz inequality yields
where, in the last step, we used the fact that
The proof of (3.2) ends by observing that
Let us now consider . By definition of and Minkowski's inequality we have that
for any , where denotes the characteristic function of the ball . We conclude the proof applying Fatou's Lemma for and combining the resulting inequality with (3.2).
The following result allows us to focus on compact sets without loss of generality. Therefore hereafter in this section we always assume to be a compact set.
Lemma 3.3 (Reduction to compact sets). Let be a closed set with finite measure. Then
where .
Proof. The convergence of the capacity follows from the fact that, for any sequence of closed sets such that , there holds
which, in turn, can be proved by following step by step the proof of [16,Theorem 4.15,Point ]. The rest of the proof is easy and can be omitted.
Hereafter in this section, for and , we let
We notice that, by the continuity of , it follows that , as . Moreover, we set, respectively
We immediately observe that is left-continuous and non-increasing in and that
We now let
(3.3) |
The constant is chosen in and will be settled later on. Notice that if then . In addition, in view of the left-continuity of , we know that
(3.4) |
The following proposition, which will be crucial in the proof of our main result, allows us to bound from below the asymmetry of the superlevel sets of in terms of the asymmetry of (for certain levels and for small enough).
Proposition 3.4. Let be such that , , and let as in . Set also . Then, letting be as in , if
we have
(3.5) |
and
(3.6) |
where and is as in .
Proof. Let . First of all, we observe that, by triangle inequality
(3.7) |
where, in the last inequality we used the definition of and the facts that and . For we have that
which shows that . Hence, using Chebyshev's inequality and Lemma 3.2 with , it holds that
as long as . Similarly, one can check that
as long as . This, together with (3.7) entails that
which proves (3.5). Finally, applying Lemma 3.1 (with and chosen to be in ), we deduce (3.6).
We can now give the proof of our main result.
Proof of Theorem 1.1. By scaling invariance, we may assume . We also fix throughout the proof. We have to prove that
(3.8) |
for some , where is a ball with unit volume.
We start by observing that if , then, since , we easily deduce that
which proves (3.8) with .
Thus, we can just consider the case in which
We recall that in (3.3), we have defined the level as
Notice that by Lemma 3.1 as long as it holds
for some independent of and .
We now distinguish between two cases, in terms of the relation of with . More precisely, we let
(3.9) |
We consider the ranges
. In this case we argue as in the proof of Proposition 4.4 and of Theorem 1.3 (case ) in [3]. The idea consists in introducing quantitative elements in the proof of the Pólya-Szegö inequality by applying the quantitative isoperimetric inequality on each (horizontal) level set of the function .
First, we recall that
As for the -derivative, from (2.13) we know that
For the -derivative, we argue as in the local case. By the coarea formula, we have
(3.10) |
where denotes the perimeter of the set , and in the last step we have used Jensen's inequality. Using the quantitative isoperimetric inequality, one can prove that
(3.11) |
where
and
The proof of (3.11) can be easily carried out by following [4,Lemma 2.9], see also the proof of Proposition 4.4 in [3]. By definition of symmetric rearrangement, we have that
and, from Lemma 3.2 and inequality in [12] (see also in [20]), we know that
Therefore, combining (3.10) and (3.11) with this last inequality, we can estimate the -norm of the -gradient as follows
Moreover, we have seen in the proof of Proposition 2.10 that
Collecting all together, we obtain
We now use Proposition 3.4 to pass from to . Let be as in Proposition 3.4. Set also . We have
where, in the last inequality, we used (3.6). It remains to estimate the term
(3.12) |
First, we observe that, since , using (3.5) and the fact that , we have
Hence, in order to estimate (3.12), it is enough to control from below the quantity
By Jensen's inequality and recalling the definition of , we have
Using again (3.5), we deduce that
We can now estimate (3.12) as follows
where depends only on and . This, in turn, yields the following estimate for the capacity variation
with depending only on and . Finally, recalling the definition of (in Proposition 3.4) and that we are in the ranges and , we obtain
with depending on and . In particular, given it is possible to see that
(3.13) |
Therefore (3.8) holds with
In this case we get the quantitative inequality by means of a suitable test function for the definition of . Let us define
It is possible to see that is an admissible competitor for . Indeed, let , where denotes a standard mollifier in and is defined as follows
It is clear that and that
Moreover, if we let , one can see that and, thanks to the continuity of , that in an open , thus implying that , being such that in a neighbourhood of . Finally, it is easy to check that in as , which means that . Therefore, we have
From the previous inequality, the isocapacitary inequality (1.7) and (3.4) we obtain that
(3.14) |
By convexity, we know that
(3.15) |
and that
(3.16) |
with as in (3.9). By the definition of and , as in (3.9), we derive that
(3.17) |
with
Putting together (3.14) with (3.15), (3.16) and (3.17), we obtain (3.8) with , thus concluding the proof.
Remark 3.5. By carefully scanning the proof of Theorem 1.1, one can explicitly find the constant appearing in (1.8), which amounts to
where is as in (3.9), and depend only on (indeed their dependence on stated in the proof of Theorem 1.1 is actually pointless, being universally fixed in , see (3.13) for the explicit value of ). The constants and are as in (2.10) and (2.7), respectively, and they are uniformly bounded away from and as , see [3,Remark 2.7]. Moreover, it can be easily checked that this fact holds true for the constant as well, by its definition, when .
In this section we prove Proposition 1.2. We recall the definition of the standard (Newtonian) capacity of a closed for , which is equivalent to (1.2) when is a compact set
(4.1) |
where is such that in an open neighbourhood of and, for any open , the space is defined as the completion of with respect to the norm
We also recall that, when is bounded, then the space coincides with the usual Sobolev space , thanks to the validity of the Poincaré inequality.
Proof of Proposition 1.2. Proof of (1.9). The first part of the statement follows easily from a celebrated result of Bourgain-Brezis-Mironescu, stating that lims↗1(1−s)[u]2s=(ωn/2)∫Rn|∇u|2dx for every u∈C∞c(Rn).
Indeed, by taking a function satisfying , we deduce that
Finally, (1.9) follows by taking the infimum over all admissible .
Proof of (1.10). Let us fix and let us denote by the -capacitary potential of the set . Since is compact, there exists a ball which contains . Hence, we have that
This can be easily proved by taking the Kelvin transform of the above functions and applying the maximum principle as in [9,Theorem 3.3.2]. We can now take advantage of the following precise decay rate of , established in [6,Proposition 3.6]:
(4.2) |
Let us define an almost optimal function, given by a suitable truncation of . For any fixed , we set
We claim that and , with being such that in an open neighbourhood of . Indeed, since , there exists a sequence such that in as . If we now let we have that and in an open . Now, if we consider the function
we have that and . Moreover, in as , thus proving the claim. We also observe that the family satisfies the following properties:
there exists , depending only on , such that
This follows by the upper bound (4.2): we choose with being such that .
In particular, this implies that , where, for any open we denote
which, in case is bounded and has Lipschitz boundary, coincides with the space , see [7,Proposition B.1];
there holds
(4.3) |
for any and , with independent of and . This is a direct consequence of (1.9).
Hence we can apply [7,Proposition 3.6] to the family to deduce that there exists an increasing sequence converging to and a function such that
Analogously, being a Lipschitz domain, we know that and that
for all and , with independent of and . Therefore, we can apply [7,Proposition 3.6] to the family as well, and this entails the existence of a (not relabeled) subsequence and of a function such that
where the functions are trivially extended in . As a consequence, we obtain that
which, in turn, implies that the trivial extension of to the whole is an admissible competitor for . Hence we have
where in the second inequality we have used the -convergence result by Brasco, Parini, Squassina (more precisely, Proposition 3.11 in [7]) and in the last one (4.3). Finally, we conclude by letting .
Remark 4.1. Thanks to Proposition 1.2 it is possible to explicitly compute the limit as of the constant as in Theorem 1.1, which coincides with the constant appearing in Corollary 1.3. Indeed, from the definitions of and , given in (2.10) and (2.7) respectively, and the properties of the Gamma function, it is easy to see that
Moreover, if denotes the unit ball in , in view of Proposition 1.2 we have that
see e.g., [27,Theorem 2.8,point ] for the explicit value of . Thanks to these facts, Remark 3.5 and basic calculus, it is easy to see that
where
and as in (3.13).
E. Cinti is partially supported by MINECO grant MTM2017-84214-C2-1-P, by Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM), and is part of the Catalan research group 2017 SGR 1392. B. Ruffini acknowledges partial support from the ANR-18-CE40-0013 SHAPO project financed by the French Agence Nationale de la Recherche (ANR). R. Ognibene acknowledges support from the MIUR-PRIN project No. 2017TEXA3H. Special thanks go to L. Brasco for several fruitful discussions and suggestions. Finally, the authors thank the Italian national football team for providing a joyful atmosphere during the last stages of the drafting process of this work.
The authors declare no conflict of interest.
[1] | B. McMahan, E. Moore, D. Ramage, S. Hampson, B. Arcas, Communication-efficient learning of deep networks from decentralized data, in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, (2017), 1273–1282. |
[2] | W. Chang, R. Tandon, Communication efficient federated learning over multiple access channels, arXiv preprint, (2020), arXiv: 2001.08737. https://doi.org/10.48550/arXiv.2001.08737 |
[3] | M. Seif, R. Tandon, M. Li, Wireless federated learning with local differential privacy, in 2020 IEEE International Symposium on Information Theory (ISIT), (2020), 2604–2609. |
[4] | X. Zhang, M. Fang, J. Liu, Z. Zhu, Private and communication-efficient edge learning: a sparse differential Gaussian-masking distributed SGD approach, in MOBIHOC Mobile and Ad Hoc Networking and Computing, (2020), 261–270. https://doi.org/10.1145/3397166.3409123 |
[5] | J. Ding, G. Liang, J. Bi, M. Pan, Differentially private and communication efficient collaborative learning, in Proceedings of the AAAI Conference on Artificial Intelligence, 35 (2021), 7219–7227. https://doi.org/10.1609/aaai.v35i8.16887 |
[6] | L. Melis, C. Song, E. D. Cristofaro, V. Shmatikov, Exploiting unintended feature leakage in collaborative learning, in 2019 IEEE Symposium on Security and Privacy (SP), (2019), 691–706. https://doi.org/10.1109/SP.2019.00029 |
[7] | L. Zhu, Z. Liu, S. Han, Deep leakage from gradients, arXiv preprint, (2019), arXiv: 1906.08935. https://doi.org/10.48550/arXiv.1906.08935 |
[8] | J. Geiping, H. Bauermeister, H. Dröge, M. Moeller, Inverting gradients–how easy is it to break privacy in federated learning, in NIPS'20: Proceedings of the 34th International Conference on Neural Information Processing Systems, (2020), 16937–16947. |
[9] | C. Dwork, A. Roth, The algorithmic foundations of differential privacy, Found. Trends Theor. Comput. Sci., 9 (2014), 211–407. |
[10] | D. Liu, O. Simeone, Privacy for free: wireless federated learning via uncoded transmission with adaptive power control, IEEE J. Sel. Areas Commun., 39 (2021), 170–185. |
[11] | A. Islam, S. Shin, A digital twin-based drone-assisted secure data aggregation scheme with federated learning in artificial Intelligence of Things, IEEE Network, 37 (2023), 278–285. https://doi.org/10.1109/MNET.001.2200484 |
[12] | J. Ma, S. Naas, S. Sigg, X. Lyu, Privacy-preserving federated learning based on multi-key homomorphic encryption, arXiv preprint, (2021), arXiv: 2104.06824. https://doi.org/10.48550/arXiv.2104.06824 |
[13] | D. Byrd, A. Polychroniadou, Differentially private secure multi-party computation for federated learning in financial applications, in Proceedings of the First ACM International Conference on AI in Finance, (2020), 1–9. https://doi.org/10.1145/3383455.3422562 |
[14] | A. Islam, A. Amin, S. Shin, FBI: A federated learning-based blockchain-embedded data accumulation scheme using drones for Internet of Things, IEEE Wireless Commun. Lett., 11 (2022), 972–976. |
[15] | Y. Wang, B. Balle, S. Kasiviswanathan, Subsampled Rényi differential privacy and analytical moments accountant, in The 22nd International Conference on Artificial Intelligence and Statistics, PMLR, (2019), 1226–1235. |
[16] | M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, et al., Deep learning with differential privacy, in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, (2016), 308–318. https://doi.org/10.1145/2976749.2978318 |
[17] | N. Agarwal, A. T. Suresh, F. Yu, S. Kumar, H. Mcmahan, cpSGD: Communication-efficient and differentially-private distributed SGD, arXiv preprint, (2018), arXiv: 1805.10559, https://doi.org/10.48550/arXiv.1805.10559 |
[18] | I. Mironov, Rényi differential privacy, arXiv preprint, (2017), arXiv: 1702.07476. https://doi.org/10.48550/arXiv.1702.07476 |
[19] | I. Mironov, K. Talwar, L. Zhang, Rényi differential privacy of the sampled Gaussian mechanism, arXiv preprint, (2019), arXiv: 1908.10530. https://doi.org/10.48550/arXiv.1908.10530 |
[20] | L. Wang, R. Jia, D. Song, D2P-fed: Differentially private federated learning with efficient communication, arXiv preprint, (2021), arXiv: 2006.13039. https://doi.org/10.48550/arXiv.2006.13039 |
[21] | R. Geyer, T. Klein, M. Nabi, Differentially private federated learning: a client level perspective, arXiv preprint, (2018), arXiv: 1712.07557. https://doi.org/10.48550/arXiv.1712.07557 |
[22] | M. Du, K. Wang, Z. Xia, Y. Zhang, Differential privacy preserving of training model in wireless big data with edge computing, IEEE Trans. Big Data, 6 (2018), 283–295. |
[23] | M. Seif, R. Tandon, M. Li, Wireless federated learning with local differential privacy, in 2020 IEEE International Symposium on Information Theory (ISIT), (2020), 2604–2609. |
[24] | D. Liu, O. Simeone, Privacy for free: Wireless federated learning via uncoded transmission with adaptive power control, IEEE J. Sel. Areas Commun., 39 (2020), 170–185. |
[25] | J. Ding, G. Liang, J. Bi, M. Pan, Differentially private and communication efficient collaborative learning, in Proceedings of the AAAI Conference on Artificial Intelligence, 35 (2021), 7219–7227. https://doi.org/10.1609/aaai.v35i8.16887 |
[26] | K. Wei, J. Li, C. Ma, M. Ding, C. Chen, S. Jin, et al., Low-latency federated learning over wireless channels with differential privacy, IEEE J. Sel. Areas Commun., 40 (2021), 290–307. |
[27] | C. Dwork, F. McSherry, K. Nissim, A. Smith, Calibrating noise to sensitivity in private data analysis, in Theory of Cryptography, Springer, (2006), 265–284. https://doi.org/10.1007/11681878_14 |
[28] | C. Dwork, K. Kenthapadi, F. Mcsherry, I. Mironov, M. Naor, Our data, ourselves: Privacy via distributed noise generation, in Advances in Cryptology-EUROCRYPT 2006, Springer, (2006), 486–503. |
[29] | S. P. Kasiviswanathan, H. K. Lee, K. Nissim, S. Raskhodnikova, A. Smith, What can we learn privately, arXiv preprint, (2010), arXiv: 0803.0924. https://doi.org/10.48550/arXiv.0803.0924 |
[30] | B. Balle, G. Barthe, M. Gaboardi, J. Hsu, T. Sato, Hypothesis testing interpretations and any differential privacy, arXiv preprint, (2019), arXiv: 1905.09982. https://doi.org/10.48550/arXiv.1905.09982 |
[31] | C. L. Canonne, G. Kamath, T. Steinke, The discrete Gaussian for differential privacy, arXiv preprint, (2021), arXiv: 2004.00010. https://doi.org/10.48550/arXiv.2004.00010 |
[32] | B. Balle, G. Barthe, M. Gaboardi, Privacy amplification by subsampling: Tight analyses via couplings and divergences, arXiv preprint, (2018), arXiv: 1807.01647. https://doi.org/10.48550/arXiv.1807.01647 |
[33] | M. S. E. Mohamed, W. T. Chang, R. Tandon, Privacy amplification for federated learning via user sampling and wireless aggregation, IEEE J. Sel. Areas Commun., 39 (2021), 3821–3835. https://doi.org/10.1109/JSAC.2021.3118408 |
[34] | A. Rakhlin, O. Shamir, K. Sridharan, Making gradient descent optimal for strongly convex stochastic optimization, arXiv preprint, (2012), arXiv: 1109.5647. https://doi.org/10.48550/arXiv.1109.5647 |
[35] | D. Basu, D. Data, C. Karakus, S. Diggavi, Qsparse-Local-SGD: distributed SGD with quantization, sparsification, and local computations, arXiv preprint, (2019), arXiv: 1906.02367. https://doi.org/10.48550/arXiv.1906.02367 |