Research article

Privacy amplification for wireless federated learning with Rényi differential privacy and subsampling

  • A key issue in current federated learning research is how to improve the performance of federated learning algorithms by reducing communication overhead and computing costs while ensuring data privacy. This paper proposes an efficient wireless transmission scheme termed the subsampling privacy-enabled RDP wireless transmission system (SS-RDP-WTS), which not only reduces the communication and computing overhead in the learning process but also enhances the privacy protection ability of federated learning. We prove our scheme's convergence and analyze its privacy guarantee, as well as demonstrate the performance of our scheme on the Modified National Institute of Standards and Technology database (MNIST) and the Canadian Institute for Advanced Research, 10 classes dataset (CIFAR10).

    Citation: Qingjie Tan, Xujun Che, Shuhui Wu, Yaguan Qian, Yuanhong Tao. Privacy amplification for wireless federated learning with Rényi differential privacy and subsampling[J]. Electronic Research Archive, 2023, 31(11): 7021-7039. doi: 10.3934/era.2023356




The proliferation of advanced technologies, including artificial intelligence (AI), the internet of things (IoT) and cloud computing, has led to the emergence of federated learning (FL) as a promising distributed machine learning approach [1]. Specifically, using the edge network could benefit FL by enhancing the efficiency of learning updates and responses. Meanwhile, FL encounters the challenges of costly communication and time-sensitive training when deployed in the edge network. Hence, wireless multi-access channels were introduced in reference [2] as a means to transmit parameters or gradient information, thereby mitigating communication overhead. Furthermore, multi-channel communication in intricate wireless communication networks can conserve resources, provided that the communication information is effectively distributed across the various channels [3]. A natural approach, therefore, is to incorporate a gradient compression mechanism into FL in order to enhance communication efficiency. Gradient compression strategies typically encompass methods such as sparsification and quantization. The sparsification technique is employed to compress the local model before transmission to the parameter server (PS), significantly reducing communication costs [4]. Nevertheless, sparsification could potentially impede the rate of convergence. Quantization can decrease the amount of data in a given communication round prior to transmitting gradients to the PS, potentially expediting the convergence process [5].

    Although FL addresses certain challenges related to safeguarding data privacy, it has vulnerabilities in terms of security [6,7,8]. One of the primary factors contributing to this issue is the need for more consideration for safeguarding the model parameters in FL. In scenarios where servers and clients are semi-honest, the potential for the inference of sensitive information exists, thus leading to privacy breaches. In this context, current approaches integrate FL with privacy-preserving technologies to augment privacy. These technologies encompass differential privacy (DP) [9,10,11], homomorphic encryption [12], secure multi-party computation [13] and Blockchain [14], among others. DP stands out among the various methods due to its rigorous mathematical description and efficient computational requirements [15].

    Additionally, DP can precisely measure data privacy and loss [16,17]. Therefore, it is essential to highlight that a prominent area of focus in the field of DP pertains to exploring various forms of divergence for developing differential privacy variations. In this regard, Rényi differential privacy (RDP) has demonstrated the capability to yield robust privacy outcomes by employing a sequence of random mechanisms when accessing device datasets [18,19,20]. The implementation of DP has the potential to enhance local data privacy. However, it is essential to note that employing encryption procedures with DP may result in increased computational overhead and a decrease in training accuracy, as stated in reference [21]. Therefore, optimizing FL's performance by minimizing communication overhead and computing costs while simultaneously upholding data privacy and security emerges as a pivotal concern.

While DP can effectively safeguard private data by introducing noise, it also introduces potential challenges, such as increased computing overhead and a potential decrease in training accuracy [21]. Motivated by these challenges, we present a novel framework, SS-RDP-WTS. The proposed scheme utilizes multi-access channels within a wireless communication network to ensure data privacy protection and efficient communication in the context of FL. Gradient compression is implemented by employing a multi-level stochastic quantization approach within a federated learning framework that incorporates wireless multiple access channels. This approach effectively reduces the burden of communication and processing expenses through selective sampling and reduced randomness. Furthermore, considering the favorable composition properties of RDP, we propose augmenting the privacy protection capability of the enhanced method. The model parameters are safeguarded with discrete Gaussian noise during client updates, enhancing their security. Additionally, clients' privacy is further ensured by leveraging the privacy amplification effect of uniform subsampling. Our paper provides a theoretical formulation for a comprehensive distributed mean estimation problem. The empirical findings demonstrate that the subsampling private wireless transmission technique efficiently ensures a balance between model utility and communication while offering enhanced privacy protection.

The remainder of this paper is organized as follows. Section 2 describes the relevant prerequisites, including basic definitions and theorems related to DP. Section 3 introduces the proposed system model with a subsampling private wireless transmission scheme. Section 4 presents the privacy and convergence analysis of the system model, Section 5 evaluates its performance, and Section 6 presents the conclusions.

    In this section, we present the system model and give RDP and its related definitions. The commonly used notations are listed in Table 1.

    Table 1.  Summary of the main notations.
    Symbol | Description
    $\varepsilon$ | Privacy budget
    $\delta$ | Upper bound on the probability of differential privacy failure
    $\alpha$ | Order of the ratio of two probability distributions
    $\sigma_{k,t}$ | Noise level of client $k$ in iteration $t$
    $w_t$ | Parameter vector in iteration $t$
    $\ell(w)$ | Loss function
    $\Delta f$ | Sensitivity of a function
    $k$ | Client index
    $u_i^{(k)}$ | Data point $i$ of client $k$
    $v_i^{(k)}$ | Label corresponding to data point $i$ of client $k$
    $D_k$ | Local dataset of client $k$
    $N$ | Total number of clients
    $T$ | Total number of iterations
    $g_t$ | Gradient vector in iteration $t$
    $\hat g_t$ | Noisy version of the perturbed local gradients in iteration $t$
    $b$ | Number of quantization levels
    $q_k^t$ | Quantized gradient vector of client $k$ in iteration $t$
    $x_k^t$ | Model parameters after the encoding operation
    $P$ | Sampling probability
    $K_t$ | Set of clients randomly selected by subsampling
    $d$ | Feature dimension
    $C$ | Capacity of the wireless multiple access channel
    $n_{k,t}$ | Artificial noise of client $k$ in iteration $t$


Wireless FL is a distributed intelligent computing paradigm that aims to protect clients' data privacy while effectively utilizing the decentralized data resources and computing power of large-scale devices through local model training on edge devices. In the wireless communication network environment, given the rapid change of data and the complexity of edge devices, wireless FL can not only update the model in real time but also improve the learning effect. Moreover, it enables FL to train on a broader range of data with more powerful computing resources, improving the accuracy and performance of the model.

One potential approach to addressing the FL objective while safeguarding client privacy involves training the model using a modified and distorted representation of the data, which does not expose the actual data directly during the training process. To solve this privacy issue, Du et al. [22] implemented a machine learning strategy for smart edges using DP. Seif et al. [23] studied the problem of FL over a wireless channel, modeled by a Gaussian multiple access channel (MAC) and subject to local differential privacy constraints. Liu et al. [24] demonstrated that as long as the privacy constraint level, measured via DP, is below a threshold that decreases with the signal-to-noise ratio, uncoded transmission achieves privacy "for free", i.e., without affecting the learning performance. During the learning process, model updates using local private samples and large-scale parameter exchanges among agents impose severe privacy concerns and communication bottlenecks. To address these problems, Ding et al. [25] proposed two DP and communication-efficient algorithms. To this end, Wei et al. [26] minimized FL training delay over wireless channels, constrained by overall training performance as well as each client's DP requirement.

It is known that DP has put private data analysis on a firm theoretical foundation [27,28]. We present the definitions of DP below.

Definition 1. ($\varepsilon$-DP [29]). For $\varepsilon>0$, a randomized algorithm $M$ is said to be $\varepsilon$-differentially private if, and only if, for any two adjacent sets $D_1,D_2$ and any $S\subseteq\mathrm{Range}(M)$, it holds that

$$ \Pr[M(D_1)\in S]\le \exp(\varepsilon)\times \Pr[M(D_2)\in S], \tag{2.1} $$

where $\varepsilon$ is the privacy budget.

    In order to attain anonymity, it is imperative that the algorithm M has a randomized component. A reduced privacy budget implies that the adversary has less capacity to discern the existence of any data based on the output.

Definition 2. ($(\varepsilon,\delta)$-DP [9]). For $\varepsilon,\delta>0$, a randomized algorithm $M$ is said to be $(\varepsilon,\delta)$-differentially private if, and only if, for any two adjacent sets $D_1,D_2$ and any $S\subseteq\mathrm{Range}(M)$, it satisfies that

$$ \Pr[M(D_1)\in S]\le \exp(\varepsilon)\times \Pr[M(D_2)\in S]+\delta. \tag{2.2} $$

Definition 3. (RDP [19]). For $\varepsilon>0$ and $\alpha>1$, a randomized mechanism $M$ satisfies $(\alpha,\varepsilon)$-RDP if, and only if, for any two adjacent sets $D_1,D_2$, it holds that

$$ D_\alpha\big(M(D_1)\,\|\,M(D_2)\big)=\frac{1}{\alpha-1}\log\Big(\mathbb{E}_{\theta\sim M(D_2)}\Big[\Big(\frac{M(D_1)(\theta)}{M(D_2)(\theta)}\Big)^{\alpha}\Big]\Big)\le\varepsilon. \tag{2.3} $$

    It is evident that RDP is strictly stronger than (ε,δ)-DP for δ>0, and it enables more precise bounds for the composition of the Gaussian mechanism. The subsequent result is employed to transform an RDP into a central DP.

Lemma 1. (From RDP to DP [30]). Suppose that for any $\alpha>1$, a mechanism $M$ is $(\alpha,\varepsilon)$-RDP; then, the mechanism $M$ is $(\varepsilon+\log(1/\delta)/(\alpha-1),\delta)$-DP for all $0<\delta<1$.
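As an illustration, the conversion in Lemma 1 can be applied numerically: given RDP guarantees $\varepsilon(\alpha)$ over a grid of orders, one searches for the order $\alpha$ that minimizes the resulting DP $\varepsilon$. The Python sketch below (the function name and interface are ours, not from the paper) does this, feeding in the Gaussian-mechanism curve $\varepsilon(\alpha)=\alpha/(2\sigma^2)$ for sensitivity 1 as a sample input.

```python
import math

def rdp_to_dp(rdp_eps, alphas, delta):
    """Convert a family of (alpha, eps(alpha)) RDP guarantees into a single
    (eps, delta)-DP guarantee via Lemma 1, taking the tightest order alpha.

    rdp_eps: dict mapping order alpha (> 1) to the RDP epsilon at that order.
    """
    assert 0.0 < delta < 1.0
    return min(rdp_eps[a] + math.log(1.0 / delta) / (a - 1.0) for a in alphas)

# Example input: a Gaussian mechanism with sensitivity 1 and scale sigma has
# eps(alpha) = alpha / (2 * sigma**2).
sigma = 4.0
alphas = list(range(2, 65))
rdp = {a: a / (2 * sigma**2) for a in alphas}
eps = rdp_to_dp(rdp, alphas, delta=1e-5)
```

Scanning over $\alpha$ matters: a single fixed order is generally loose, whereas the minimum over a grid (here $\alpha=2,\dots,64$) gives the tightest DP guarantee the RDP curve supports.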

DP employs the concept of sensitivity to quantify the level of data privacy exposure. Sensitivity refers to the utmost extent to which the outcomes of a complete database query are influenced by the alteration of a single individual's data. Moreover, RDP offers enhanced flexibility and a wider range of choices through the incorporation of the order parameter $\alpha$ for measurement.

To begin, RDP enables the selection of suitable orders based on diverse application scenarios, hence improving the adaptability of privacy protection to a range of data processing requirements. For instance, RDP is equal to DP at $\alpha=1$, whereas for $\alpha>1$ it is appropriate for instances with higher privacy requirements and may offer stronger privacy protection. In contrast, when $\alpha<1$, RDP leads to a notable improvement in query accuracy.

Furthermore, when compared to DP, RDP demonstrates superior efficacy in safeguarding privacy under certain attack models. In certain scenarios, such as when facing a reconstruction attack or when numerous queries are allowed, it has been observed that DP may not offer adequate protection for privacy. DP composition can be achieved in RDP by strategically selecting suitable $\alpha$ values, resulting in enhanced privacy protection performance.

    Furthermore, RDP exhibits superior composability. This feature enables the integration of diverse privacy strategies in order to enhance privacy protection, while still upholding privacy performance. The inherent composability of this feature holds significant importance in facilitating advanced data analysis and ensuring the secure exchange of data.

    We state the definitions of the discrete Gaussian mechanism and establish relevant privacy guarantees. We first introduce discrete Gaussian distribution.

Definition 4. (Discrete Gaussian distribution [31]). Let $\sigma\in\mathbb{R}$ with $\sigma>0$. The discrete Gaussian distribution is a probability distribution on a discrete additive subgroup $\mathbb{L}$. The discrete Gaussian distribution with scale $\sigma$ is denoted as $N_{\mathbb{L}}(\sigma)$; for $x\in\mathbb{L}$, the probability mass on $x$ is proportional to $e^{-x^2/2\sigma^2}$.
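For intuition, the distribution of Definition 4 (taking $\mathbb{L}=\mathbb{Z}$) can be materialized numerically by normalizing the unnormalized masses $e^{-x^2/2\sigma^2}$ over a truncated support; exact rejection samplers exist in the literature [31], so the Python sketch below (helper names are ours) is only illustrative.

```python
import math, random

def discrete_gaussian_pmf(sigma, tail=None):
    """Probability mass function of N_Z(sigma) on a truncated support.

    The mass at integer x is proportional to exp(-x^2 / (2 sigma^2)); truncating
    at ~10 sigma discards only negligible mass, so normalizing the truncated
    weights gives a close numerical approximation of the exact distribution.
    """
    tail = tail if tail is not None else int(math.ceil(10 * sigma))
    support = range(-tail, tail + 1)
    weights = [math.exp(-x * x / (2 * sigma * sigma)) for x in support]
    z = sum(weights)
    return {x: w / z for x, w in zip(support, weights)}

def sample_discrete_gaussian(sigma, rng=random):
    """Draw one integer sample from the (truncated) discrete Gaussian."""
    pmf = discrete_gaussian_pmf(sigma)
    return rng.choices(list(pmf), weights=list(pmf.values()), k=1)[0]
```

For $\sigma\ge 1$ the resulting distribution is symmetric with mean zero and variance very close to $\sigma^2$, mirroring its continuous counterpart while keeping all outputs integer-valued.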

In order to ascertain the magnitude of the introduced noise, it is necessary to impose limitations on the $\ell_2$-sensitivity of the gradient aggregation. Within the context of DP, the process of calibrating the amount of noise to be added is influenced by the sensitivity of the function, as explicitly specified as follows.

Definition 5. ($\ell_2$-sensitivity [32]). Suppose that a function $f:\mathcal{D}\to\mathbb{R}^d$ and two adjacent datasets $D_1$ and $D_2$ are given. The $\ell_2$-sensitivity of $f$ is defined as $\Delta f\triangleq\max_{D_1,D_2\in\mathcal{D}}\|f(D_1)-f(D_2)\|_2$.

Lemma 2. (RDP for discrete Gaussian mechanism [18]). A mechanism which alters the output of another algorithm with $\mathrm{range}(f)\subseteq\mathbb{L}$ by adding discrete Gaussian noise, i.e., $f(\cdot)+N_{\mathbb{L}}(\sigma)$, satisfies $(\alpha,\varepsilon)$-RDP with $\varepsilon=\alpha(\Delta f)^2/2\sigma^2$.

    The discrete Gaussian mechanism is realized by adding noise with discrete Gaussian distribution to the output function evaluated on a sensitive dataset. Canonne et al. (2020) demonstrated concentrated DP for the discrete Gaussian mechanism and provided a thorough analysis of the privacy and utility properties of the discrete Gaussian [30]. For FL, Wang et al. (2021) proposed a discrete Gaussian-based RDP with an analytical moments accountant-based RDP [20].

It is important to exploit the randomness in subsampling. Wang et al. (2018) remarked that the discrete Gaussian exhibits a tight privacy amplification bound via subsampling and indicated that RDP enables the discrete Gaussian mechanism to be composed tightly with an analytical moments accountant, which saves the privacy budget in multi-round FL [33].

    This section presents our proposed SS-RDP-WTS, as shown in Figure 1.

    Figure 1.  Illustration of SS-RDP-WTS.

The proposed system performs an iterative process in which the following steps are repeated: 1) The PS broadcasts the initialization model to a randomly selected subset of local clients. Each client then performs local stochastic gradient descent using its local data to obtain an updated local gradient. 2) To tackle the communication bottleneck, the local gradient is compressed using multi-level stochastic quantization for the clients that are selected through subsampling. 3) To further enhance the algorithm's privacy protection capability, discrete Gaussian noise is applied to the quantized, discretized gradient. 4) The model parameters of the noisy version are transmitted through wireless global mobile access communication (GMAC) to improve the communication efficiency of the model. 5) The PS aggregates the uploaded model parameters in an average manner to obtain a new global model.

SS-RDP-WTS: We study an FL system with a central server, $N$ clients and a wireless communication technique. The clients establish communication with the PS through wireless GMAC technology. The pseudocode of this algorithm is shown in Algorithm 1.

    Algorithm 1: SS-RDP-WTS
 Input: $N$ is the total number of clients; $\eta_t$ is the learning rate; $n_{k,t}$ is the noise of client $k$; $m_t$ is the channel noise; $b_k^t$ is the quantization level.
1 for $t\in[T]$ do

Each client $k$ has a local dataset $D_k$ consisting of data points $u_i^{(k)}$, where each data point $i$ within client $k$'s dataset is associated with a label $v_i^{(k)}$. Its size is denoted as $|D_k|$, so $D_k=\{(u_i^{(k)},v_i^{(k)})\}_{i=1}^{|D_k|}$. The client establishes communication with the PS through GMAC to train the model. This training process involves minimizing the loss function $\ell(w)$:

$$ w^*=\arg\min_{w}\ell(w)\triangleq\frac{1}{|D_{\mathrm{total}}|}\sum_{k=1}^{K_t}\sum_{i=1}^{|D_k|}\ell_k\big((u_i^{(k)},v_i^{(k)});w\big), \tag{3.1} $$

where $w\in\mathbb{R}^d$ is the parameter vector, $\ell_k(\cdot)$ is the loss function for client $k$, and $D_{\mathrm{total}}=\bigcup_{k=1}^{K_t}D_k$ is the total dataset participating in training. $\ell(w)$ is minimized iteratively by the stochastic gradient descent (SGD) algorithm. In training iteration $t$, the PS broadcasts the global parameter vector $w_t$ to the clients. Each client $k$ calculates its local gradient on the local dataset:

$$ g_k(w_t)=\frac{1}{|D_k|}\sum_{i=1}^{|D_k|}\nabla\ell_k\big((u_i^{(k)},v_i^{(k)});w_t\big). \tag{3.2} $$

Then, the clients stochastically quantize the gradient values into a discrete domain and implement the discrete Gaussian technique to ensure privacy. During the quantization process, the quantization parameters are optimized considering the capacity level of the wireless GMAC. To streamline the presentation, we omit the iteration index $t$ where possible. During each iteration, each client quantizes its local vector $g_k^t$ by dividing it into $b_k^t$ discrete levels, ensuring that the quantized value lies within the range $[-g_k^{\max},g_k^{\max}]$. Let $g_k^{\max}$ be the clipping bound $L$. For every integer $\varphi$ in the range $[0,b_k^t)$,

$$ B_k(\varphi)\triangleq -g_k^{\max}+\frac{\varphi s_k}{b_k^t-1}, \tag{3.3} $$

where $b_k^t\ge 2$ represents the quantization level for client $k$. It is usual to choose $s_k$ as $2g_k^{\max}$, and $B_k(\varphi)$ is the $\varphi$-th quantization level. If $g_{k,j}\in[B_k(\varphi),B_k(\varphi+1))$ holds for client $k$, it follows that

$$ q_{k,j}=\begin{cases} B_k(\varphi+1), & \text{w.p. } \dfrac{g_{k,j}-B_k(\varphi)}{B_k(\varphi+1)-B_k(\varphi)},\\[4pt] B_k(\varphi), & \text{otherwise.} \end{cases} \tag{3.4} $$

So, $q_{k,j}$ is an unbiased estimator of $g_{k,j}$, meaning that $\mathbb{E}[q_{k,j}^t]=g_{k,j}^t$. Additionally, the variance can be bounded:

$$ \mathrm{Var}[q_{k,j}]\le \frac{(s_k)^2}{4(b_k^t-1)^2}=\frac{(2g_k^{\max})^2}{4(b_k^t-1)^2}=\frac{L^2}{(b_k^t-1)^2}. \tag{3.5} $$
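Concretely, the quantizer of Eqs (3.3)-(3.5) can be sketched in a few lines of Python (the function name and interface are ours): each coordinate is rounded up to the next level with probability proportional to its offset within the cell, which is exactly what makes the estimator unbiased.

```python
import random

def stochastic_quantize(g, g_max, b, rng=random):
    """Multi-level stochastic quantization of Eqs (3.3)-(3.4).

    Each coordinate g_j in [-g_max, g_max] is mapped to one of b evenly spaced
    levels B(phi) = -g_max + phi * 2*g_max/(b-1); the upper neighbour is chosen
    with probability (g_j - B(phi)) / step, so that E[q_j] = g_j.
    """
    assert b >= 2
    step = 2.0 * g_max / (b - 1)
    q = []
    for g_j in g:
        g_j = max(-g_max, min(g_max, g_j))   # clip into the quantization range
        phi = int((g_j + g_max) / step)      # index of the lower level
        phi = min(phi, b - 2)                # keep the upper neighbour in range
        lo = -g_max + phi * step
        p_up = (g_j - lo) / step             # probability of rounding upward
        q.append(lo + step if rng.random() < p_up else lo)
    return q
```

Averaging many quantizations of the same vector recovers it, and the per-coordinate variance stays within the $L^2/(b-1)^2$ bound of Eq (3.5).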

After quantizing the complete gradient vector, noise is subsequently added. The continuous Gaussian mechanism is not suited for transmitting quantized local gradients, so one alternative method for ensuring privacy is the incorporation of discrete Gaussian noise. If the noisy local value falls outside the quantization range, preprocessing techniques are employed to substitute it.

Subsequently, client $k$ uses a preprocessing function to acquire the code word $x_k^t$, denoted as $x_k^t=f_k^t(q_k^t)$, and transmits it to the PS. In our paper, we examine the random set of participants $K_t$, obtained using uniform subsampling. Each client $k$ participates in the training process during iteration $t$ with probability $P$. When $|K_t|=N$, all clients are engaged in the training. The signal generated by client $k$ during iteration $t$ is

$$ x_k^t=\begin{cases} q_k^t+n_k^t, & \text{w.p. } P=\dfrac{K_t}{N},\\[4pt] 0, & \text{otherwise,} \end{cases} \tag{3.6} $$

where $n_{k,t}\sim N_{\mathbb{L}}(0,\sigma_{k,t}^2 I_d)$.

In addition, when clients transmit $x_k^t$ to the PS through the wireless GMAC, a total of $d\log_2 b$ bits are required to transmit the quantized gradient vector, where the transmission rate of client $k$ is $r_k^t=d\log_2 b_k^t$, which adheres to the GMAC capacity region [30] given by the inequality

$$ \sum_{k=1}^{K_t} r_k^t\le C_{K_t},\quad \forall K_t\subseteq[N],\ |K_t|=1,\dots,N, \tag{3.7} $$

where $C_{K_t}$ is the combined capacity of the wireless GMAC on subset $K_t$. Suppose that the channel inputs during each iteration adhere to the average power constraint, denoted as $\|x_k^t\|_2^2\le NP_k,\ \forall k$. Furthermore, $C_{K_t}=0.5\log\big(1+\sum_{k=1}^{K_t}p_k/N_0\big)$.
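The feasibility check implied by Eq (3.7) is straightforward to express in code. The sketch below (helper name is ours) compares the sum rate $\sum_k d\log_2 b_k$ against the sum capacity $0.5\log(1+\sum_k p_k/N_0)$; we interpret the logarithm in the capacity formula as base 2 (i.e., capacity in bits), an assumption the paper does not state explicitly.

```python
import math

def feasible_rates(b_levels, d, powers, n0):
    """Check the sum-rate constraint of Eq (3.7) for a selected client subset.

    b_levels: quantization level b_k of each selected client.
    d:        gradient dimension, so client k sends d*log2(b_k) bits.
    powers:   transmit power p_k of each selected client.
    n0:       channel noise power N_0.
    Returns (feasible, sum_rate, capacity).
    """
    sum_rate = sum(d * math.log2(b) for b in b_levels)
    capacity = 0.5 * math.log2(1.0 + sum(powers) / n0)
    return sum_rate <= capacity, sum_rate, capacity
```

Such a check is what forces the trade-off exploited later in Eq (4.13): finer quantization (larger $b_k$) improves accuracy but consumes more of the shared capacity.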

    The input-output relationship over GMAC can be expressed as follows:

$$ y_t=\sum_{k=1}^{K_t}x_k^t+m_t=\sum_{k=1}^{K_t}q_k^t+\underbrace{\sum_{k=1}^{K_t}n_k^t+m_t}_{Z_t}. \tag{3.8} $$

Here, $x_k^t\in\mathbb{R}^d$ is the signal transmitted by client $k$ at iteration $t$ and $m_t\sim N_{\mathbb{L}}(0,N_0)$ is the independent and identically distributed (IID) channel noise.

After the PS receives the signal $y_t$, the objective of the PS is to acquire the average value of the accurate gradient by utilizing a post-processing function $h_t(\cdot)$. This average value can be represented by $\hat g_t=\big[\sum_{k=1}^{K_t}y_k\big]/\mu_{|K_t|}$. Nevertheless, due to the implementation of preprocessing and post-processing techniques, as well as the GMAC system capabilities, the PS can only decode and restore the perturbed local gradients.
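Putting the transmission and post-processing steps together, a toy aggregation round can be simulated as below. This is a hedged sketch, not the paper's implementation: the quantizer is omitted and continuous Gaussian noise stands in for the discrete Gaussian, purely to illustrate that dividing the superposed signal by $\mu_{|K_t|}=NP$ yields an unbiased estimate of the average gradient.

```python
import random

def simulate_round(gradients, p, noise_sigma, channel_n0, rng):
    """One aggregation round over the GMAC (Eqs (3.6)-(3.8), simplified).

    Each client transmits g_k + n_k with probability p and stays silent
    otherwise; the channel superposes the signals and adds m_t; the PS divides
    the received sum by mu = N*p (the post-processing step), which makes the
    estimate of the average gradient unbiased.
    """
    n, d = len(gradients), len(gradients[0])
    # channel noise m_t with per-coordinate variance N_0
    y = [rng.gauss(0.0, channel_n0 ** 0.5) for _ in range(d)]
    for g in gradients:
        if rng.random() < p:                 # uniform subsampling w.p. p
            for j in range(d):
                y[j] += g[j] + rng.gauss(0.0, noise_sigma)
    mu = n * p
    return [y_j / mu for y_j in y]
```

With no noise and full participation the output is exactly the average gradient; with subsampling and noise it matches the average in expectation, with variance as analyzed in Section 4.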

In this section, we present the primary results of our proposed approach, analyzing the RDP features of the subsampling private wireless transmission system. Specifically, we discuss the privacy guarantee outlined in Theorem 1 and introduce the privacy-convergence trade-off of the scheme, as shown in Theorem 2.

The subsequent theorem elucidates the RDP guarantee associated with the discrete Gaussian mechanism within the subsampling privacy model. Initially, a subset of $K_t\le N$ clients is selected from the full set of clients. This subsampling is performed independently and uniformly, with each client having an equal likelihood of being selected; the probability of selection is denoted by $P=K_t/N$. Subsequently, within the chosen subset of clients, each client $k$ quantizes its respective local gradient and then applies the discrete Gaussian mechanism.

Theorem 1. Let the clipping bound be $L$, the noise scale be $\sigma$ and the quantization level be $b_k$ with $b_k\ge b_{\min}$. The proposed scheme meets $(\alpha,\varepsilon(\alpha))$-RDP with

$$ \varepsilon(\alpha)\le\frac{1}{\alpha-1}\log\Bigg(1+P^2\binom{\alpha}{2}\min\Big\{2e^{\frac{4L^2[(b_{\min}-1)+\sqrt d]^2}{\sigma^2(b_{\min}-1)^2}},\,4\Big(e^{\frac{4L^2[(b_{\min}-1)+\sqrt d]^2}{\sigma^2(b_{\min}-1)^2}}-1\Big)\Big\}+\sum_{x=3}^{\alpha}2P^x\binom{\alpha}{x}e^{(x-1)\frac{2xL^2[(b_{\min}-1)+\sqrt d]^2}{\sigma^2(b_{\min}-1)^2}}\Bigg). \tag{4.1} $$

Proof. Note that $\frac{\sum_{x\in\mathbb{L}}\exp\big(-(x-(1-\alpha)\mu)^2/2\sigma^2\big)}{\sum_{x\in\mathbb{L}}\exp\big(-(x-\mu)^2/2\sigma^2\big)}\le 1$ and $\mathrm{range}(f)\subseteq\mathbb{L}$. Thus

$$ \begin{aligned} D_\alpha\big(N_{\mathbb{L}}(0,\sigma^2)\,\|\,N_{\mathbb{L}}(\mu,\sigma^2)\big) &=\frac{1}{\alpha-1}\log\frac{\sum_{x\in\mathbb{L}}\exp\big(-\frac{\alpha x^2}{2\sigma^2}\big)\exp\big(-\frac{(1-\alpha)(x-\mu)^2}{2\sigma^2}\big)}{\sum_{x\in\mathbb{L}}\exp\big(-\frac{(x-\mu)^2}{2\sigma^2}\big)}\\ &=\frac{1}{\alpha-1}\log\frac{\sum_{x\in\mathbb{L}}\exp\big(-\frac{x^2-2(1-\alpha)\mu x+(1-\alpha)\mu^2}{2\sigma^2}\big)}{\sum_{x\in\mathbb{L}}\exp\big(-\frac{(x-\mu)^2}{2\sigma^2}\big)}\\ &\le\frac{1}{\alpha-1}\log\Big\{\exp\frac{(\alpha^2-\alpha)\mu^2}{2\sigma^2}\Big\}=\frac{\alpha\mu^2}{2\sigma^2}. \end{aligned} \tag{4.2} $$

Hence, the discrete Gaussian mechanism adheres to $(\alpha,\alpha(\Delta f)^2/2\sigma^2)$-RDP, as stated in Lemma 2.

Next, we prove the bound on the sensitivity for client $k$. Note that $y_t=\sum_{k=1}^{K_t}g_k^t+Z_t$, the variance of the effective Gaussian noise $Z_t$ is $\sigma^2=\sum_{k=1}^{K_t}\sigma_{k,t}^2+N_0$, $\Delta f\triangleq\max_{D_1,D_2\in\mathcal{D}}\|f(D_1)-f(D_2)\|_2$ and $\|g_k^t\|_2\le L,\ \forall k$, where $L$ is the Lipschitz parameter. Thus,

$$ \Delta f_k^t=\max_{D_k,D_k'}\|y_t-y_t'\|_2=\max_{D_k,D_k'}\|g_k^t-g_k'^t\|_2\le\max_{D_k,D_k'}\big[\|g_k^t\|_2+\|g_k'^t\|_2\big]\le 2L. \tag{4.3} $$

According to [20], the upper bound on the quantized $\ell_2$-sensitivity is

$$ \Delta f=2\Big(L+\frac{\sqrt d\,L}{b_{\min}-1}\Big). \tag{4.4} $$

Thus, it is sufficient to show that the composition of quantization and the discrete Gaussian mechanism is $(\alpha,\alpha(\Delta f)^2/2\sigma^2)$-RDP if we have

$$ \varepsilon=\frac{\alpha\Big(2\big(L+\frac{\sqrt d\,L}{b_{\min}-1}\big)\Big)^2}{2\sigma^2}=\frac{2\alpha L^2[(b_{\min}-1)+\sqrt d]^2}{\sigma^2(b_{\min}-1)^2}. \tag{4.5} $$

According to Theorem 9 in [15], we obtain

$$ \begin{aligned} \varepsilon(\alpha)&\le\frac{1}{\alpha-1}\log\Big(1+P^2\binom{\alpha}{2}\min\big\{2e^{\varepsilon(2)},4\big(e^{\varepsilon(2)}-1\big)\big\}+\sum_{x=3}^{\alpha}2P^x\binom{\alpha}{x}e^{(x-1)\varepsilon(x)}\Big)\\ &=\frac{1}{\alpha-1}\log\Bigg(1+P^2\binom{\alpha}{2}\min\Big\{2e^{\frac{4L^2[(b_{\min}-1)+\sqrt d]^2}{\sigma^2(b_{\min}-1)^2}},\,4\Big(e^{\frac{4L^2[(b_{\min}-1)+\sqrt d]^2}{\sigma^2(b_{\min}-1)^2}}-1\Big)\Big\}+\sum_{x=3}^{\alpha}2P^x\binom{\alpha}{x}e^{(x-1)\frac{2xL^2[(b_{\min}-1)+\sqrt d]^2}{\sigma^2(b_{\min}-1)^2}}\Bigg). \end{aligned} \tag{4.6} $$

Our scheme thus meets $(\alpha,\varepsilon(\alpha))$-RDP. This completes the proof of Theorem 1.
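For concreteness, the bound of Eq (4.6) can be evaluated numerically. The sketch below is our own transcription of the theorem into Python (helper names are ours); it also makes the privacy-amplification effect visible: shrinking the subsampling probability $P$ shrinks $\varepsilon(\alpha)$.

```python
import math

def rdp_eps_order(x, L, b_min, d, sigma):
    """eps(x) = 2 x L^2 [(b_min - 1) + sqrt(d)]^2 / (sigma^2 (b_min - 1)^2),
    the per-order RDP of the quantized discrete Gaussian mechanism, Eq (4.5)."""
    return 2.0 * x * L**2 * ((b_min - 1) + math.sqrt(d))**2 \
        / (sigma**2 * (b_min - 1)**2)

def subsampled_rdp_eps(alpha, p, L, b_min, d, sigma):
    """Subsampled RDP bound of Theorem 1 / Eq (4.6), for integer alpha >= 2."""
    e2 = rdp_eps_order(2, L, b_min, d, sigma)
    total = 1.0 + p**2 * math.comb(alpha, 2) * min(2 * math.exp(e2),
                                                   4 * (math.exp(e2) - 1))
    for x in range(3, alpha + 1):
        ex = rdp_eps_order(x, L, b_min, d, sigma)
        total += 2 * p**x * math.comb(alpha, x) * math.exp((x - 1) * ex)
    return math.log(total) / (alpha - 1)
```

Evaluating this over a grid of orders and taking the minimum after applying Lemma 1 reproduces the kind of accounting used in the experiments of Section 5.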

Remark: Our proposed system offers a more robust privacy guarantee than prior research. This can be primarily illustrated by considering the following factors:

1) The proposed scheme adheres to RDP principles, which offer enhanced privacy measures and a tighter composition, as shown in [19].

    2) As a result of implementing privacy amplification through subsampling and the addition of discrete Gaussian noise, our scheme maintains a reduced noise level.

Theorem 2. Assuming the loss function $\ell(\cdot)$ is $\lambda$-strongly convex and $\mu$-smooth with $L$-Lipschitz gradients, and the learning rate is $\eta_t=1/\lambda t$, the convergence rate satisfies

$$ \mathbb{E}[\ell(w_T)]-\ell(w^*)\le\frac{2\mu}{\lambda^2T^2}\sum_{t=1}^{T}\Bigg((1+\rho)L^2+(1+\rho^{-1})\bigg[\frac{d}{N^2P^2}\sum_{k=1}^{K_t}\frac{L^2}{(b_k^t-1)^2}+\frac{d}{N^2P^2}\big[N_0+NP\max_k\sigma_{k,t}^2\big]\bigg]\Bigg). \tag{4.7} $$

Proof. Note that

$$ \mathbb{E}[\ell(w_T)-\ell(w^*)]\le\frac{2\mu\sum_{t=1}^{T}G_t^2}{T\lambda^2 T}=\frac{2\mu\sum_{t=1}^{T}G_t^2}{\lambda^2T^2}. \tag{4.8} $$

In iteration $t$, the PS averages the received $K_t$ gradients, that is,

$$ \hat g_t=\frac{1}{\mu_{|K_t|}}\sum_{k\in K_t}q_k^t+\frac{1}{\mu_{|K_t|}}Z_t. \tag{4.9} $$

After post-processing, we get $\tilde g_t=\tilde f(\hat g_t)$. Note that $|K_t|$ is a binomial random variable, and the probability of each client being selected is the subsampling probability $P=K_t/N$, so $\mu_{|K_t|}=NP$ and $\sigma^2_{|K_t|}=NP(1-P)$.

    Then we obtain the bounds on the second moment of the gradient

$$ \begin{aligned} \mathbb{E}\big[\|\tilde g_t\|_2^2\big]&=\mathbb{E}\big[\|g_t+(\tilde g_t-g_t)\|_2^2\big]\\ &=\mathbb{E}\big[\|g_t\|_2^2\big]+\mathbb{E}\big[\|\tilde g_t-g_t\|_2^2\big]+2\,\mathbb{E}\big[\langle \tilde g_t-g_t,\,g_t\rangle\big]\\ &\overset{(a)}{\le}(1+\rho)\mathbb{E}\big[\|g_t\|_2^2\big]+(1+\rho^{-1})\mathbb{E}\big[\|\tilde g_t-g_t\|_2^2\big]\\ &\overset{(b)}{\le}(1+\rho^{-1})\mathbb{E}\big[\|\tilde g_t-g_t\|_2^2\big]+(1+\rho)L^2\triangleq G_t^2, \end{aligned} \tag{4.10} $$

where $(a)$ follows from the fact that for any two vectors $m,n\in\mathbb{R}^d$ and $\rho>0$, $2\langle m,n\rangle\le\rho\|m\|_2^2+\rho^{-1}\|n\|_2^2$, and $(b)$ follows because the local gradients satisfying the Lipschitz condition are bounded by $\|g_k^t\|_2\le L,\ \forall k$.

According to the variance expression of the gradient estimate and the properties of expectation, we have

$$ \begin{aligned} \mathbb{E}\big[\|\tilde g_t-g_t\|_2^2\big]&=\mathbb{E}\bigg[\Big\|\frac{1}{\mu_{|K_t|}}\sum_{k\in K_t}(q_k^t-g_k^t)+\frac{1}{\mu_{|K_t|}}Z_t\Big\|_2^2\bigg]\\ &=\frac{1}{\mu^2_{|K_t|}}\mathbb{E}\bigg[\Big\|\sum_{k\in K_t}(q_k^t-g_k^t)\Big\|_2^2\bigg]+\frac{1}{\mu^2_{|K_t|}}\mathbb{E}\big[\|Z_t\|_2^2\big]\\ &=\frac{1}{\mu^2_{|K_t|}}\sum_{k\in K_t}\sum_{j=1}^{d}\mathbb{E}\big[(q_{k,j}^t-g_{k,j}^t)^2\big]+\frac{1}{\mu^2_{|K_t|}}\mathbb{E}\big[\|Z_t\|_2^2\big]\\ &\overset{(c)}{\le}\frac{d}{N^2P^2}\sum_{k=1}^{K_t}\frac{L^2}{(b_k^t-1)^2}+\frac{d}{N^2P^2}\big[N_0+NP\max_k\sigma_{k,t}^2\big], \end{aligned} \tag{4.11} $$

where $(c)$ follows from $\mathbb{E}[(q_{k,j}^t-g_{k,j}^t)^2]=\mathrm{Var}[q_{k,j}^t]\le L^2/(b_k^t-1)^2$, $Z_t=\sum_{k\in K_t}n_{k,t}+m_t\sim N_{\mathbb{L}}(0,\sigma_{Z_t}^2 I_d)$ and $n_{k,t}\sim N_{\mathbb{L}}(0,\sigma_{k,t}^2 I_d)$. Since $m_t\sim N_{\mathbb{L}}(0,N_0)$, we have

$$ \mathbb{E}\big[\|Z_t\|_2^2\big]=d\sigma_{Z_t}^2=d\big[N_0+\mathbb{E}[|K_t|]\,\sigma_{k,t}^2\big]\le d\big[N_0+\mathbb{E}[|K_t|]\max_k\sigma_{k,t}^2\big]=d\big[N_0+\mu_{|K_t|}\max_k\sigma_{k,t}^2\big]. \tag{4.12} $$

    We complete the proof of Theorem 2.

Remark: The convergence rate of our scheme is determined by applying the established convergence results of SGD, as in references [34,35]. We subsequently compute the parameters necessary to ensure convergence. Furthermore, we demonstrate the potential for enhancing the convergence rate of our methodology by adjusting the quantization level, subsampling probabilities and noise parameters within the constraints of a specific privacy budget and communication set. Given a specific noise level, the per-round convergence rate of SS-RDP-WTS can be improved thanks to the tighter structure of subsampled RDP, in accordance with the convergence rate bound. Simultaneously, discrete Gaussian noise offers more robust privacy assurances at equivalent noise levels.

To characterize the convergence of our scheme, the optimal values of the quantization levels and noise parameters maximizing the convergence rate in Theorem 2 are the solutions to an optimization problem; however, this problem is an instance of constrained integer nonlinear programming (INLP). To simplify the calculation, we fix the noise parameters. The optimization problem can then be written as

$$ \min_{b_k^t}\ \sum_{k=1}^{K_t}\frac{L^2}{(b_k^t-1)^2},\quad \text{s.t. }\sum_{k=1}^{K_t}r_k^t\le C_{K_t},\ \forall K_t\subseteq[N],\ |K_t|=1,\dots,N,\quad b_k^t\ge b_{\min},\ b_k^t\in\mathbb{Z}^+,\ \forall k. \tag{4.13} $$
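A simple greedy heuristic for this INLP is sketched below (the allocation strategy is our illustration, not the paper's algorithm): starting from $b_{\min}$, repeatedly grant one extra quantization level to the client whose objective term shrinks the most, as long as the sum rate stays within the channel capacity (interpreted here in bits).

```python
import math

def allocate_quant_levels(k, d, capacity_bits, b_min):
    """Greedy heuristic for the INLP of Eq (4.13).

    Start every client at b_min and repeatedly increment the level of the
    client with the largest marginal reduction in L^2/(b_k - 1)^2 (the L^2
    factor cancels), while sum_k d*log2(b_k) <= capacity_bits.
    """
    b = [b_min] * k

    def rate(levels):
        return sum(d * math.log2(x) for x in levels)

    if rate(b) > capacity_bits:
        return None                          # even b_min is infeasible
    improved = True
    while improved:
        improved = False
        # clients ordered by marginal gain of b_k -> b_k + 1
        gains = sorted(range(k),
                       key=lambda i: 1 / (b[i] - 1) ** 2 - 1 / b[i] ** 2,
                       reverse=True)
        for i in gains:
            b[i] += 1
            if rate(b) <= capacity_bits:
                improved = True
                break
            b[i] -= 1                        # revert: would exceed capacity
    return b
```

Because the marginal gain $1/(b-1)^2-1/b^2$ is decreasing in $b$, the heuristic keeps the levels balanced across clients, which matches the intuition that the variance bound in Theorem 2 is dominated by the coarsest quantizer.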

    Computation cost analysis of SS-RDP-WTS: Our proposed approach involves calculating the gradient of the clients, which is influenced by factors such as the dataset size of each client and the complexity of the model. Typically, the computation of the gradient on the client side is considered a reasonably low-cost process, as it only entails navigating through local data and making adjustments to model parameters. The discretization operation is a technique that partitions the gradient values into distinct regions, enhancing the privacy preservation of the gradient. The discretization procedure is generally cost-effective because it mainly focuses on aligning gradient values with discrete regions. To safeguard individual privacy, noise is introduced as an additional measure. This is accomplished by perturbing the discretized gradient, a commonly employed and cost-effective procedure. Our proposed scheme achieves this objective by directly injecting noise into the gradient.

Communication cost analysis of SS-RDP-WTS: The initial step involves selecting a smaller portion of the clients: each client's participation in training is determined by the probability $P$, and the client's decision to engage in training is then communicated to the server. The communication overhead associated with this process is minimal, as it entails transmitting a single binary decision. The second component pertains to gradient transmission, wherein each selected client communicates its discretized, noise-injected gradient to the server side using the wireless GMAC. The communication overhead here is contingent upon the dimensionality and precision of the gradient. Lastly, gradient reception refers to the process in which the server side receives gradients from all clients and subsequently computes the global average gradient. This procedure entails obtaining gradient data from each client, which depends on the number of clients and the quantity of gradient data.

In this section, we evaluate the efficacy of our proposed strategy. We examine two learning tasks utilizing convolutional neural networks trained on the CIFAR10 and MNIST datasets, employing a cross-entropy loss function. The model's dimensionality on CIFAR10 and MNIST is $d=62{,}006$ and $d=44{,}426$, respectively. In our training process with the CIFAR10 dataset, we utilize IID partitions corresponding to 10 clients, whereas in the MNIST experiment we employ non-IID partitions corresponding to 10 clients.

    Furthermore, a subsampling probability must be fixed. In this context, let δ = 10⁻⁵ for the different subsampling probabilities P; in particular, P = 1 means that all clients participate in training. Let σ_{k,t} = 1 for all t ∈ [T] be the scale of the Gaussian noise. Each client applies a clipping operation to its local gradient, using an empirically determined Lipschitz constant L = 1. RDP accounting is performed with Google's DP library. In the experiments, we tested the order α over the range from 2 to 64, together with the values 128, 256 and 512, and selected the value that minimizes ε. The minimum value of the quantization level is b_min = 101 in Figure 2, while b_min = 64 in Figures 3 and 4. To facilitate the comparison of different bounds, the RDP bounds are converted into the (ε, δ)-differential privacy framework.
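As a simplified sketch of this accounting step, the standard RDP-to-(ε, δ) conversion minimizes over the α grid described above. For brevity we use the RDP curve of the plain sensitivity-1 Gaussian mechanism and omit the subsampling amplification that the paper's accountant (Google's DP library) applies, so the constants here are illustrative only:

```python
import math

# alpha grid used in the experiments: 2..64 plus 128, 256, 512
ALPHAS = list(range(2, 65)) + [128, 256, 512]

def gaussian_rdp(alpha, sigma, steps):
    """RDP of the sensitivity-1 Gaussian mechanism, composed over `steps` rounds."""
    return steps * alpha / (2.0 * sigma ** 2)

def rdp_to_dp(sigma, steps, delta=1e-5):
    """Convert the composed RDP curve to (eps, delta)-DP by minimizing the
    classical conversion bound eps = rdp(alpha) + log(1/delta)/(alpha - 1)
    over the alpha grid."""
    return min(gaussian_rdp(a, sigma, steps) + math.log(1.0 / delta) / (a - 1)
               for a in ALPHAS)
```

Tighter RDP-to-DP conversions exist (e.g., Canonne, Kamath and Steinke); the bound above is the classical one from Mironov's RDP paper.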

    Figure 2.  Privacy measured by ε for different subsampling probabilities across iterations.
    Figure 3.  Model accuracy measured for different noise levels across iterations.
    Figure 4.  Model accuracy measured for different privacy budgets across iterations.

    In Figure 2, it can be seen that the values of ε corresponding to P = 1 and P = 0.1 are relatively high, indicating a weaker degree of privacy protection. Reducing the subsampling probability improves the privacy guarantee. In a real wireless network environment, a large number of clients participate in the training process, so it is reasonable to opt for a small subsampling probability.

    Figure 3 examines how model accuracy varies with the noise level while the sampling probability is held constant. Here we select two clients to participate in the training process, with average transmit power limits of p1 = 320 and p2 = 80 for the respective datasets.

    The number of channels in the MAC is set to s = 5d for each iteration, and the minimum value of the quantization level is b_min = 64. To preserve the quantized structure, the private local model is truncated whenever it exceeds the truncation threshold q_max + 3σ_{k,t}, where q_max denotes the largest value of the quantized gradient. The findings illustrate that the magnitude of the discrete Gaussian noise affects the convergence rate outlined in Theorem 2 once the accuracy of the model has converged; at the same time, accuracy increases as the noise magnitude decreases.
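The truncation step just described admits a one-line sketch (q_max and σ as defined in the text; the function name is our own):

```python
import numpy as np

def truncate_model(model, q_max, sigma):
    """Clip the noisy quantized model to [-(q_max + 3*sigma), q_max + 3*sigma]
    so that it stays within the representable quantized range."""
    threshold = q_max + 3.0 * sigma
    return np.clip(model, -threshold, threshold)
```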

    Figure 4 illustrates the model's accuracy under various privacy budgets, again with σ_{k,t} = 1 for all t ∈ [T]. As the subsampled private wireless transmission scheme converges, its accuracy gradually approaches that of the non-private scheme for the various privacy budgets and the same subsampling probability. Furthermore, the post-convergence accuracy improvement of our method is larger on the MNIST dataset than on the CIFAR10 dataset, and the model converges with a smaller privacy budget on CIFAR10.

    In this paper, we presented a novel SS-RDP-WTS that aims to minimize communication costs and enhance the privacy of FL. In the FL framework with a MAC, the model gradient is compressed using multi-level stochastic gradient quantization to address the constraint of limited communication resources; combined with subsampling, this compression reduces communication costs and improves the algorithm's communication efficiency. Additionally, the scheme injects discrete Gaussian noise and leverages the privacy amplification effect of subsampling to strengthen privacy protection, exploiting the tight composition properties of RDP. Our theoretical analysis covered the convergence and the privacy bound of the subsampled private wireless transmission scheme. The empirical findings showed that the scheme can enhance the efficacy of the FL algorithm while mitigating communication burdens and safeguarding data confidentiality.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by the Major Research Plan of the National Natural Science Foundation of China (92167203), Key Program of Zhejiang Provincial Natural Science Foundation of China (LZ22F020007) and Zhejiang University of Science and Technology Postgraduate Research and Innovation Fund (2022yjskc24).

    The authors declare that they have no conflict of interest.



    This article has been cited by:

    1. K. Hu, S. Gong, Q. Zhang, C. Seng, M. Xia, S. Jiang, An overview of implementing security and privacy in federated learning, Artif. Intell. Rev., 57 (2024). https://doi.org/10.1007/s10462-024-10846-8
    © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).