Research article

Privacy-preserving distributed optimization algorithm for directed networks via state decomposition and external input

  • Received: 27 December 2024 Revised: 11 February 2025 Accepted: 04 March 2025 Published: 13 March 2025
  • In this paper, we study the privacy-preserving distributed optimization problem on directed graphs, aiming to minimize the sum of all agents' cost functions while protecting sensitive information. In distributed optimization over directed graphs, agents need to exchange information with their neighbors to obtain the optimal solution, which may leak private information. By using the state decomposition method, the algorithm ensures that an agent's sensitive information cannot be obtained by attackers. Before the iterations begin, each agent decomposes its initial state into two sub-states: one sub-state is used for normal information exchange with other agents, while the other is known only to the agent itself and is invisible to the outside world. Unlike traditional optimization algorithms for directed graphs, which rely on the push-sum algorithm, we introduce an external input, which reduces the number of communications between agents and saves communication resources. We prove that the algorithm still converges to the optimal solution of the distributed optimization problem. Finally, a numerical simulation is conducted to illustrate the effectiveness of the proposed method.

    Citation: Mengjie Xu, Nuerken Saireke, Jimin Wang. Privacy-preserving distributed optimization algorithm for directed networks via state decomposition and external input[J]. Electronic Research Archive, 2025, 33(3): 1429-1445. doi: 10.3934/era.2025067




    Distributed optimization, as a core technology for solving large-scale problems in networked systems, has received widespread attention in recent years. It decomposes the global problem into local sub-problems for each agent and reaches the overall optimal solution through coordination; algorithm designs include gradient descent, dual decomposition, and the alternating direction method of multipliers (ADMM). These methods have been successfully applied in fields such as smart grids, communication networks, and machine learning due to their excellent parallelism and convergence [1,2,3]. In addition, to adapt to dynamic environments, researchers have proposed dynamic consensus optimization methods to cope with changes in network topology and data distribution [4,5]. At the same time, the scalability and security of distributed optimization in large-scale computing have also become a focus of research [6,7].

    Distributed optimization is widely applied, especially in fields such as energy, machine learning, healthcare, and communications. In smart grids, it optimizes energy management through distributed resource allocation and load scheduling, improving the system's efficiency and resilience [8,9,10,11]. In machine learning, distributed optimization provides parallelization support for large-scale model training, significantly improves computing efficiency, and shows particular promise in federated learning [12,13,14]. In addition, medical imaging and diagnostics have also benefited from distributed optimization techniques that not only improve data processing capabilities but also secure patients' data with privacy-preserving algorithms [15,16]. In the field of the Internet of Things (IoT), distributed optimization is widely used in resource allocation and dynamic routing decisions, providing strong support for efficient communication [17,18]. The study of [19] alleviates the network congestion and bandwidth utilization issues caused by the quality-of-service queuing mechanism and denial-of-service attacks by introducing new compression rules, and develops an intelligent trigger controller supervised by a mini-batch machine learning algorithm. These application examples demonstrate that distributed optimization has wide applicability and strong theoretical support.

    However, the communication characteristics of distributed optimization bring significant privacy risks, especially in scenarios involving sensitive data such as medical, financial, and user behavior modeling. Differential privacy effectively improves the anonymity of data by injecting noise during the optimization process while minimizing the impact on the model's performance [20,21]. Existing research has shown that using differential privacy mechanisms in distributed optimization can significantly reduce the risk of data leakage while maintaining computational efficiency [22,23]. The authors of [24] designed a distributed algorithm based on direction and state perturbation. This algorithm achieves differential privacy by perturbing the state variables and directions with attenuated Laplacian noise. The authors of [25] proposed a new algorithm for decentralized non-convex optimization that simultaneously achieves strict differential privacy and the avoidance of saddle points/maxima. Some studies also combine local and global differential privacy methods to further improve the privacy protection effect [26,27,28]. However, the added noise may prevent the agents from converging to an exact solution. Homomorphic encryption ensures the security of the entire optimization process by allowing calculations to be performed directly on encrypted data [29,30]. The authors of [31] proposed a privacy-preserving decentralized optimization method based on the alternating direction method of multipliers and partially homomorphic encryption. The authors of [32] proposed a new algorithm based on homomorphic encryption that achieves secure multi-party computation with complete correctness. Compared with differential privacy, homomorphic encryption has unique advantages in high-security scenarios, but suffers from higher computational cost.

    In addition to the two methods above, [33] developed a new privacy protection method, namely state decomposition. The main idea of the state decomposition method is to decompose the state of each agent into an actual state and a virtual state. The actual state interacts with other agents normally to exchange information, while the virtual state is unknown to the outside world, thereby protecting the real data of the agent from being accessed. This method has been widely used in the field of privacy protection; for example, [34] proposed a state decomposition method that can privately achieve average consensus without any trustworthy neighbor agents. Moreover, [35] proposed a privacy-preserving push-sum method for directed networks. However, when using the push-sum algorithm, each agent needs to maintain two state values, which will increase the communication overhead. In this paper, on the basis of the state decomposition method, we study the privacy-preserving distributed optimization problem on a directed graph. The contributions of this paper are summarized as follows.

    1) A privacy-preserving algorithm based on state decomposition is proposed for the distributed optimization problem on a directed graph. In this algorithm, the initial state of each agent is decomposed into two sub-states: the actual state and the virtual state. The actual state is used for information exchange between agents, while the virtual state is only used for internal information exchange and is unknown to the outside world.

    2) Many existing distributed optimization algorithms for directed graphs use the push-sum algorithm: each agent maintains two iterates and finally takes their ratio as the true value, which increases the communication burden on the system. Instead of using the push-sum algorithm, we introduce an external input during the iteration process.

    3) We also demonstrate that the algorithm can protect the private information of agents from being obtained by malicious external eavesdroppers or honest but curious neighbors.

    The remaining part of this paper is structured as follows. Section 2 presents some preliminaries. Section 3 is our main results, where the algorithm is proposed and the proof is given. In Section 4, a numerical example is illustrated. Finally, a brief conclusion is given in Section 5.

    Notations: Let $\mathbf{1}_d$ and $\mathbf{0}_d$ be the $d$-dimensional all-one vector and all-zero vector, respectively. $\mathbb{R}^d$ is the set of $d$-dimensional real vectors. $I_d$ and $O_d$ denote the $d\times d$ identity matrix and all-zero matrix, respectively. Given a matrix $X$, the element located at the $i$-th row and $j$-th column is denoted $X_{ij}$. We use $\rho(A)$ to stand for the spectral radius of a matrix $A$. A matrix $A$ is referred to as row-stochastic or column-stochastic when all its entries are non-negative and the sum of the elements in each row or each column, respectively, equals 1. The symbol $\otimes$ represents the Kronecker product, while $\|\cdot\|_2$ represents the 2-norm.

    We consider a directed graph $G=(V,E,A)$ which comprises $n$ agents. Here, $V=\{1,2,\ldots,n\}$ stands for the set of agents and $E\subseteq V\times V$ represents the set of edges. The adjacency matrix $A=[a_{ij}]$ indicates the coupling weights, where $a_{ij}>0$ if there is an edge from $j$ to $i$, that is, $(j,i)\in E$, and $a_{ij}=0$ otherwise. An edge $(j,i)\in E$ means there is a communication link enabling Agent $j$ to transmit information to Agent $i$. The agents capable of directly sending information to Agent $i$ are referred to as the in-neighbors of Agent $i$, and the collection of such agents is denoted $N_i^-=\{j\in V\mid (j,i)\in E\}$. Likewise, the agents that can directly receive messages from Agent $i$ are called the out-neighbors of Agent $i$, and the set of these agents is denoted $N_i^+=\{j\in V\mid (i,j)\in E\}$.

    Consider an optimization problem within a multiagent system consisting of $n$ agents. Each agent possesses a private cost function $f_i$, known solely to Agent $i$. The common objective of all agents is to minimize the global objective function

    $\min_{x\in\mathbb{R}^d}\sum_{i=1}^{n}f_i(x),$ (2.1)

    where x is the global decision variable.

    The subsequent assumptions concerning the objective function and the graph are provided, which will be utilized for deriving the main results.

    Assumption 1. (Connectivity): The directed graph G is strongly connected.

    Assumption 2. (Strong convexity): The global objective function $f$ is $\mu$-strongly convex, i.e.,

    $f(x)\geq f(y)+\nabla f(y)^{T}(x-y)+\frac{\mu}{2}\|x-y\|_2^2,$

    for any $x,y\in\mathbb{R}^d$, where $\mu>0$.

    Given Assumption 2, Problem (2.1) possesses a unique optimal solution $x^{\ast}\in\mathbb{R}^d$.

    Assumption 3. (Smoothness): The gradient of each $f_i$ is $L$-Lipschitz continuous, i.e., for any $x,y\in\mathbb{R}^d$,

    $\|\nabla f_i(x)-\nabla f_i(y)\|_2\leq L\|x-y\|_2.$

    The state decomposition method was initially proposed in [33]. The core concept of this method is to decompose the initial state $x_{i,0}$ of each agent into two sub-states, denoted $x^{\alpha}_{i,0}$ and $x^{\beta}_{i,0}$ respectively, as shown in Figure 1. The values of the sub-states $x^{\alpha}_{i,0}$ and $x^{\beta}_{i,0}$ can be any real numbers, yet they must satisfy the equation $x^{\alpha}_{i,0}+x^{\beta}_{i,0}=2x_{i,0}$. After decomposition, the sub-state $x^{\alpha}_{i,0}$ assumes the role that the original state $x_{i,0}$ played in the interactions among agents. The other sub-state $x^{\beta}_{i,0}$ does not engage in the information exchange among neighboring agents. Instead, it only communicates with $x^{\alpha}_{i,0}$. For instance, consider Agent $i$ in Figure 1(b). In the interactions between agents, $x^{\alpha}_i$ behaves exactly like $x_i$, while $x^{\beta}_i$ is hidden from all agents except Agent $i$, even though it has an impact on the evolution of $x^{\alpha}_i$.

    Figure 1.  State decomposition method.
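As a quick illustration (our own sketch, not from the paper), the decomposition step takes only a few lines: the shared sub-state may be drawn arbitrarily, and the hidden sub-state is set so that the constraint $x^{\alpha}_{i,0}+x^{\beta}_{i,0}=2x_{i,0}$ holds. The initial values and random seed below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x0 = np.array([10.0, 20.0, 30.0])        # private initial states (assumed values)
x_alpha = rng.uniform(-100.0, 100.0, 3)  # shared sub-state: any real numbers work
x_beta = 2.0 * x0 - x_alpha              # hidden sub-state enforces x_alpha + x_beta = 2*x0

# The shared sub-state alone carries no information about x0:
# every value of x0 is consistent with some choice of x_beta.
print(x_alpha + x_beta)  # -> [20. 40. 60.], i.e., 2*x0
```

Since $x^{\alpha}_{i,0}$ can be any real number, observing it reveals nothing about $x_{i,0}$ without also knowing the hidden $x^{\beta}_{i,0}$.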

    Within this part, we put forward a privacy-preserving algorithm for directed graphs via state decomposition and external input. In addition, we also provide the convergence analysis and privacy analysis.

    After the state decomposition, the update equation becomes

    $\begin{cases} x^{\alpha}_{i,k+1}=a_{ii}x^{\alpha}_{i,k}+\sum_{j\in N_i^-}a_{ij}x^{\alpha}_{j,k}+\alpha_i x^{\beta}_{i,k},\\ x^{\beta}_{i,k+1}=p_{ii}x^{\beta}_{i,k}+\beta_i x^{\alpha}_{i,k}, \end{cases}$ (3.1)

    where the coupling weights between the two sub-states $x^{\alpha}_{i,k}$ and $x^{\beta}_{i,k}$ are not symmetric; they are denoted $\alpha_i$ and $\beta_i$, respectively. The update weight for the sub-state $x^{\beta}_{i,k}$ is designated $p_{ii}$, and the weight of the link from Agent $j$ to Agent $i$ is represented by $a_{ij}$.

    Definition 1. [36] (Minimal polynomial of a matrix): For a matrix $P$, its minimal polynomial, denoted $Q(t)=t^{D+1}+\sum_{i=0}^{D}\omega_i t^{i}$, is the monic polynomial of minimum degree $D+1$ satisfying $Q(P)=O_n$, where the $\omega_i$ are the coefficients.

    Definition 2. [36] (Minimal polynomial of a matrix pair): For the pair $[P,e_i^{T}]$, there is an associated minimal polynomial, denoted $Q_i(t)=t^{D_i+1}+\sum_{j=0}^{D_i}\omega_{i,j}t^{j}$, where the coefficients $\omega_{i,j}$ belong to $\mathbb{R}$. This $Q_i(t)$ is the monic polynomial of minimum degree $D_i+1$ satisfying $e_i^{T}Q_i(P)=\mathbf{0}$.

    On the basis of the analysis of [36], we let $x^{\alpha}_{2k}=[x^{\alpha}_{i,1},x^{\alpha}_{i,2},\ldots,x^{\alpha}_{i,2k+1}]^{T}$. In addition, we define the Hankel matrix and the difference vector as follows:

    $\Gamma\{(x^{\alpha}_{2k})^{T}\}\triangleq\begin{bmatrix} x^{\alpha}_{i,1} & x^{\alpha}_{i,2} & \cdots & x^{\alpha}_{i,k+1}\\ x^{\alpha}_{i,2} & x^{\alpha}_{i,3} & \cdots & x^{\alpha}_{i,k+2}\\ \vdots & \vdots & \ddots & \vdots\\ x^{\alpha}_{i,k+1} & x^{\alpha}_{i,k+2} & \cdots & x^{\alpha}_{i,2k+1} \end{bmatrix},$
    $\bar{x}^{\alpha}_{2k}\triangleq[x^{\alpha}_{i,2}-x^{\alpha}_{i,1},\ldots,x^{\alpha}_{i,2k+2}-x^{\alpha}_{i,2k+1}]^{T}.$
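To make the role of the Hankel matrix concrete, the following sketch (our own illustration, with an assumed 3-agent ring digraph and assumed weights) builds such a matrix from the differences of one agent's trajectory under a linear consensus iteration. Because the trajectory obeys a low-degree minimal polynomial, the Hankel matrix loses rank once its size exceeds that degree.

```python
import numpy as np

# Assumed 2n x 2n iteration matrix M (n = 3 ring digraph; weights are our choice).
n = 3
M = np.zeros((2 * n, 2 * n))
for i in range(n):
    M[i, i], M[i, (i - 1) % n], M[i, n + i] = 0.4, 0.3, 0.3  # alpha-row of (3.1)
    M[n + i, i], M[n + i, n + i] = 0.5, 0.5                  # beta-row of (3.1)

rng = np.random.default_rng(0)
z = rng.uniform(0.0, 1.0, 2 * n)
traj = [z[0]]                  # agent 1's observable sub-state x^alpha over time
for _ in range(14):
    z = M @ z
    traj.append(z[0])

d = np.diff(traj)              # difference sequence, as in the definition above
k = 6                          # build a (k+1) x (k+1) Hankel matrix from d
H = np.array([[d[a + b] for b in range(k + 1)] for a in range(k + 1)])
# Differencing removes the eigenvalue-1 mode, so at most 2n - 1 = 5 modes remain
# and the 7 x 7 Hankel matrix must be rank-deficient.
print(np.linalg.matrix_rank(H, tol=1e-10), "of", H.shape[0])
```

Detecting this first rank-deficient size is exactly how each agent learns the degree $D_i+1$ of its minimal polynomial in the termination mechanism described next.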

    In [36], a distributed termination mechanism was put forward. This mechanism enables all agents to reach an agreement on when to end their iterations, provided that they have all finished computing the average. The specific steps are as follows.

    1) When the iteration (3.1) starts, each Agent $i$ simultaneously starts two counters, $c_i$ and $r_i$, both initialized to 0. The counter $c_i$ increases by one at each time step, i.e., $c_{i,k+1}=c_{i,k}+1$. The subsequent text elaborates on how the counters are updated.

    2) Simultaneously with the iteration (3.1), a max-consensus algorithm is also launched, which is expressed as

    $\theta_{i,k+1}=\max_{j\in N_i^-\cup\{i\}}\{\max\{\theta_{j,k},c_{j,k}\}\},$ (3.2)

    with $\theta_{i,0}=0$. The update rule for $r_i$ is then

    $r_{i,k+1}=\begin{cases}0, & \text{if } \theta_{i,k+1}\neq\theta_{i,k},\\ r_{i,k}+1, & \text{otherwise}.\end{cases}$ (3.3)

    3) Once the Hankel matrix $\Gamma\{(\bar{x}^{\alpha}_{D_i})^{T}\}$ loses rank, Agent $i$ records the value of the counter $c_i$ at that particular time step, denoted $k^0_i$, and names it $c^0_i$. In other words, $c^0_i=c_i[k^0_i]$. Subsequently, Agent $i$ halts the increment of the counter, i.e., for all $k\geq k^0_i$, we have $c_i[k]=c_i[k^0_i]=c^0_i$. It should be noted that $c^0_i=2(D_i+1)+1$.

    4) Agent $i$ is able to terminate the iteration (3.1) as soon as $r_i$ reaches the value $c^0_i$.
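The max-consensus step (3.2) is what lets the counters agree across the network. A minimal sketch (assumed 5-agent directed ring with assumed frozen counter values, our own illustration) shows that every $\theta_i$ reaches the global maximum after at most $n-1$ steps, the diameter of the ring:

```python
n = 5
in_nbrs = [[(i - 1) % n] for i in range(n)]   # directed ring: agent i receives from i-1
c = [1.0, 2.0, 3.0, 4.0, 5.0]                 # frozen counter values c_i (assumed)
theta = [0.0] * n                              # theta_{i,0} = 0

for _ in range(n - 1):                         # diameter of the ring suffices
    # iteration (3.2): max over the in-neighborhood including the agent itself
    theta = [max(theta[i], c[i],
                 *[max(theta[j], c[j]) for j in in_nbrs[i]])
             for i in range(n)]
print(theta)  # -> [5.0, 5.0, 5.0, 5.0, 5.0]
```

Once every $\theta_i$ stops changing, the counters $r_i$ of rule (3.3) start accumulating, which is what triggers termination in step 4.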

    We now design Algorithms 1 and 2.

    Algorithm 1 A privacy-preserving finite-time algorithm via state decomposition and external input
    1: Initialization: Agent $i\in V$ initializes the weights of the state variable $x_i$, which need to satisfy $\sum_{j\in N_i^-\cup\{i\}}a_{ij}+\alpha_i=1$ and $p_{ii}+\beta_i=1$.
    2: if $k=0$ then
    3: Run the following iteration together with (3.2) ($u^{\alpha}_{i,k}$ and $u^{\beta}_{i,k}$ are the added external inputs, whose specific values are given later in the text), store the vector $(\bar{x}^{\alpha}_{D_i})^{T}$, increase the value of the counter $c_{i,k}$, and determine the value of the counter $r_{i,k}$ via (3.3):
          $\begin{cases} x^{\alpha}_{i,k+1}=a_{ii}(x^{\alpha}_{i,k}+u^{\alpha}_{i,k})+\sum_{j\in N_i^-}a_{ij}(x^{\alpha}_{j,k}+u^{\alpha}_{j,k})+\alpha_i(x^{\beta}_{i,k}+u^{\beta}_{i,k}),\\ x^{\beta}_{i,k+1}=p_{ii}(x^{\beta}_{i,k}+u^{\beta}_{i,k})+\beta_i(x^{\alpha}_{i,k}+u^{\alpha}_{i,k}). \end{cases}$              (3.4)
    4: Expand the dimension of the Hankel matrix $\Gamma\{(\bar{x}^{\alpha}_{D_i})^{T}\}$ continuously until, at step $k^0_i$, it becomes rank-deficient. Once this happens, store the value $c^0_i=2(D_i+1)+1$.
    5: Keep performing iteration (3.4) until the iteration reaches step $k_{i,t}$, at which point $r_i(k_{i,t})=c^0_i$. Then save the value $D_{\max}=(k_{i,t}-2D_i-2)/2-1$.
    6: else
    7: Run (3.4) for $k_{\max}=D_{\max}+2$ steps.
    8: Calculate the average value as $\hat{x}^{ave}_i=\frac{\sum_{i=1}^{n}x^{\alpha}_i+\sum_{i=1}^{n}x^{\beta}_i}{2n}$.
    9: output: Agent $i\in V$ outputs $\hat{x}^{ave}_i$.
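To see iteration (3.4) and the external inputs at work together, the sketch below simulates a toy instance of our own (a 3-agent ring with weights, initial values, and random seed chosen by us; the inputs follow the $F$-construction given later in this section, with the diagonal of $M^k$ standing in for $s_{ii,k}$). Every sub-state is driven to the average of the true initial values:

```python
import numpy as np

n = 3
M = np.zeros((2 * n, 2 * n))              # stacked (alpha, beta) dynamics of (3.4)
for i in range(n):
    M[i, i], M[i, (i - 1) % n], M[i, n + i] = 0.4, 0.3, 0.3   # a_ii + a_ij + alpha_i = 1
    M[n + i, i], M[n + i, n + i] = 0.5, 0.5                   # beta_i + p_ii = 1

x0 = np.array([10.0, 20.0, 30.0])         # private initial values; their average is 20
rng = np.random.default_rng(0)
xa0 = rng.uniform(-5.0, 5.0, n)           # arbitrary decomposition ...
xb0 = 2.0 * x0 - xa0                      # ... with x^alpha + x^beta = 2 x
sub0 = np.concatenate([xa0, xb0])

x = sub0.copy()
Mk = np.eye(2 * n)                        # M^k; its diagonal plays the role of s_{ii,k}
F_prev = np.zeros(2 * n)
for _ in range(400):
    s = np.diag(Mk)
    F = sub0 * (1.0 - 2 * n * s) / (2 * n * s)
    x = M @ (x + (F - F_prev))            # x_{k+1} = M (x_k + u_k), u_k = F_k - F_{k-1}
    F_prev = F
    Mk = Mk @ M

print(np.round(x, 6))  # all six sub-states approach mean(x0) = 20
```

In the algorithm proper, the agents do not run this many iterations: the finite-time mechanism of the previous subsection tells them when the average can be read off exactly.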

    Algorithm 2 A privacy-preserving finite-time based gradient descent (GD) algorithm
    1: Initialization: step size $\eta$, maximum number of optimization iterations $T$; Agent $i\in V$ initializes the value $x_{i,0}$, sets $y_{i,0}=\nabla f_i(x_{i,0})$, $t=0$.
    2: for $t\leq T$ do
    3: Take $\nabla f_i(x_{i,t})$ as the input to Algorithm 1 and obtain the output $\hat{x}^{ave}_i$. Then define $y_{i,t}=\hat{x}^{ave}_i$.
    4: Calculate $x_{i,t+1}$ with $y_{i,t}$ as follows:
            $x_{i,t+1}=\bar{a}_{ii}x_{i,t}+\sum_{j\in N_i^-}\bar{a}_{ij}x_{j,t}-\eta y_{i,t},$              (3.5)
      where $\bar{A}=[\bar{a}_{ij}]\in\mathbb{R}^{n\times n}$ is row-stochastic.
    5: output: Agent $i\in V$ obtains the solution $x^{\ast}$.
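A stripped-down sketch of the outer loop (3.5), under assumptions of our own: Algorithm 1's output is replaced by the exact average gradient it converges to, the costs are quadratic, $f_i(x)=\frac{1}{2}(x-p_i)^2$, and the row-stochastic mixing matrix is our choice. The iterates then approach the minimizer $x^{\ast}=\frac{1}{n}\sum_i p_i$:

```python
import numpy as np

n = 5
p = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # private targets: f_i(x) = 0.5 (x - p_i)^2
Abar = np.full((n, n), 1.0 / n)               # any row-stochastic mixing matrix (assumed)
eta = 0.3                                      # here mu = L = 1, so eta < 1/(mu + L) = 0.5
x = np.zeros(n)
for _ in range(200):
    ybar = np.mean(x - p)      # average gradient, the value Algorithm 1 would return
    x = Abar @ x - eta * ybar  # iteration (3.5) in vector form
print(np.round(x, 4))  # -> every agent approaches x* = mean(p) = 30
```

The step-size condition $\eta<1/(\mu+L)$ used here is exactly the one required by Theorem 1 below.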

    In this part, we present the proof regarding the convergence and accuracy of Algorithms 1 and 2.

    Let $x^{\alpha}_k=[x^{\alpha}_{1,k},x^{\alpha}_{2,k},\ldots,x^{\alpha}_{n,k}]^{T}$ be the state vector of sub-agent $\alpha$, $x^{\beta}_k=[x^{\beta}_{1,k},x^{\beta}_{2,k},\ldots,x^{\beta}_{n,k}]^{T}$ be the state vector of sub-agent $\beta$, and $x_k=[x^{\alpha}_{1,k},\ldots,x^{\alpha}_{n,k},x^{\beta}_{1,k},\ldots,x^{\beta}_{n,k}]^{T}\in\mathbb{R}^{2n}$ be the overall state vector.

    Then each Agent $i$ sets two new auxiliary vectors, $s_{i,k}=[s_{i1,k},s_{i2,k},\ldots,s_{i2n,k}]^{T}\in\mathbb{R}^{2n}$ and $s_{n+i,k}=[s_{(n+i)1,k},s_{(n+i)2,k},\ldots,s_{(n+i)2n,k}]^{T}\in\mathbb{R}^{2n}$, with initial components $s_{ij,0}=1$ if $i=j$ and $0$ otherwise. We choose the same update rule as Eq (3.1), in which case the evolutions of these two variables can be represented as

    $\begin{cases} s_{i,k+1}=a_{ii}s_{i,k}+\sum_{j\in N_i^-}a_{ij}s_{j,k}+\alpha_i s_{n+i,k},\\ s_{n+i,k+1}=p_{ii}s_{n+i,k}+\beta_i s_{i,k}. \end{cases}$ (3.6)

    The external input of Agent $i$ is calculated as

    $\begin{cases} u^{\alpha}_{i,k}=F^{\alpha}_{i,k}-F^{\alpha}_{i,k-1},\\ u^{\beta}_{i,k}=F^{\beta}_{i,k}-F^{\beta}_{i,k-1}, \end{cases}$

    with

    $\begin{cases} F^{\alpha}_{i,k}=x^{\alpha}_{i,0}\left(\dfrac{1-2n\,s_{ii,k}}{2n\,s_{ii,k}}\right),\\ F^{\beta}_{i,k}=x^{\beta}_{i,0}\left(\dfrac{1-2n\,s_{(n+i)(n+i),k}}{2n\,s_{(n+i)(n+i),k}}\right), \end{cases}$

    and $F^{\alpha}_{i,-1}=0$, $F^{\beta}_{i,-1}=0$.

    We now use a lemma to establish the convergence of $s_{i,k}$ and the specific limit value.

    Lemma 1. [37] Assume that the initial state variable $s_{i,0}=[s_{i1,0},s_{i2,0},\ldots,s_{i2n,0}]^{T}\in\mathbb{R}^{2n}$ with $s_{ij,0}=1$ if $i=j$ and $0$ otherwise. After updating by Eq (3.6), $\lim_{k\to\infty}s_{i,k}=\psi=[\psi_1,\psi_2,\ldots,\psi_{2n}]^{T}$, where $\psi$ is the normalized left eigenvector (associated with the eigenvalue 1) of the matrix $M=\begin{bmatrix}M_1 & M_2\\ M_3 & M_4\end{bmatrix}$, with $M_1=[a_{ij}]\in\mathbb{R}^{n\times n}$, $M_2=\mathrm{diag}(\alpha_1,\alpha_2,\ldots,\alpha_n)\in\mathbb{R}^{n\times n}$, $M_3=\mathrm{diag}(\beta_1,\beta_2,\ldots,\beta_n)\in\mathbb{R}^{n\times n}$, and $M_4=\mathrm{diag}(p_{11},p_{22},\ldots,p_{nn})\in\mathbb{R}^{n\times n}$. Moreover, since $\rho(M)=1$, we have $\lim_{k\to\infty}M^{k}=\mathbf{1}_{2n}\psi^{T}$.
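Lemma 1 can be checked numerically on a toy matrix $M$ (the same assumed 3-agent ring and weights as in our earlier sketches): the normalized left eigenvector of $M$ at eigenvalue 1 gives $\psi$, and powers of $M$ converge to $\mathbf{1}_{2n}\psi^{T}$.

```python
import numpy as np

n = 3
M = np.zeros((2 * n, 2 * n))              # M = [[M1, M2], [M3, M4]] with assumed weights
for i in range(n):
    M[i, i], M[i, (i - 1) % n], M[i, n + i] = 0.4, 0.3, 0.3
    M[n + i, i], M[n + i, n + i] = 0.5, 0.5

w, V = np.linalg.eig(M.T)                 # left eigenvectors of M
psi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
psi /= psi.sum()                          # normalize so that psi^T 1 = 1
gap = np.abs(np.linalg.matrix_power(M, 200) - np.outer(np.ones(2 * n), psi)).max()
print(gap)  # essentially zero: M^k -> 1 psi^T
```

Because $M$ is row-stochastic with positive diagonal and the underlying digraph is strongly connected, the eigenvalue 1 is simple and $\psi$ is entrywise positive, which is what makes the $F$-construction above well defined.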

    We now use a lemma to illustrate the properties of $u^{\alpha}_{i,k}$ and $u^{\beta}_{i,k}$.

    Lemma 2. [37] For the external inputs $u^{\alpha}_{i,k},u^{\beta}_{i,k}$ in the system, we have

    $\lim_{k\to\infty}\sum_{t=0}^{k}u^{\alpha}_{i,t}=\lim_{k\to\infty}\sum_{t=0}^{k}\left(F^{\alpha}_{i,t}-F^{\alpha}_{i,t-1}\right)=\lim_{k\to\infty}F^{\alpha}_{i,k}=\frac{x^{\alpha}_{i,0}}{2n\psi_{i}}-x^{\alpha}_{i,0},$
    $\lim_{k\to\infty}\sum_{t=0}^{k}u^{\beta}_{i,t}=\lim_{k\to\infty}\sum_{t=0}^{k}\left(F^{\beta}_{i,t}-F^{\beta}_{i,t-1}\right)=\lim_{k\to\infty}F^{\beta}_{i,k}=\frac{x^{\beta}_{i,0}}{2n\psi_{n+i}}-x^{\beta}_{i,0}.$

    We write the iteration (3.4) in matrix-vector form:

    $x_{k+1}=M(x_k+u_k),$ (3.7)

    where $u_k=[u^{\alpha}_{1,k},u^{\alpha}_{2,k},\ldots,u^{\alpha}_{n,k},u^{\beta}_{1,k},u^{\beta}_{2,k},\ldots,u^{\beta}_{n,k}]^{T}$.

    Moreover, Eq (3.5) can be rewritten as

    $x_{t+1}=\bar{A}x_t-\eta y_t,$ (3.8)

    where $x_t=[x_{1,t},x_{2,t},\ldots,x_{n,t}]^{T}$ and $y_t=[y_{1,t},y_{2,t},\ldots,y_{n,t}]^{T}$. Next, we give some necessary lemmas.

    Lemma 3. [38] Given Assumption 1, the matrix $\bar{A}$ possesses a unique non-negative left eigenvector $v$ (corresponding to the eigenvalue 1) such that $v^{T}\mathbf{1}_n=n$.

    Lemma 4. [38] Given Assumptions 1–3, there exists a matrix norm $\|\cdot\|_A$ such that $\sigma_A=\left\|\bar{A}-\frac{\mathbf{1}_n v^{T}}{n}\right\|_A<1$, and $\sigma_A$ can be made arbitrarily close to the spectral radius $\rho\left(\bar{A}-\frac{\mathbf{1}_n v^{T}}{n}\right)<1$.

    Now, we give Theorem 1.

    Theorem 1. Given Assumptions 1–3, for each Agent $i\in V$:

    1) The output of Algorithm 1 is precisely the average of the initial values of all agents. That is, for every $i\in V$, $\hat{x}^{ave}_i=\frac{1}{n}\sum_{i=1}^{n}x_{i,0}$.

    2) When $0<\eta<\frac{1}{\mu+L}$, where $\mu$ and $L$ are as defined in Assumptions 2 and 3, respectively, Algorithm 2 converges linearly with respect to the number of optimization iterations to the global optimum, i.e., $\|x_t-\mathbf{1}_n x^{\ast}\|_2$ converges to 0 in a linear fashion.

    Proof. (1) Expanding Eq (3.7), we have

    $x_{k+1}=M(x_k+u_k)=M(M(x_{k-1}+u_{k-1}))+Mu_k=\cdots=M^{k+1}x_0+M^{k+1}u_0+M^{k}u_1+\cdots+M^{2}u_{k-1}+Mu_k=M^{k+1}x_0+\sum_{t=0}^{k}M^{k+1-t}u_t.$ (3.9)

    Let $F_k=[F^{\alpha}_{1,k},F^{\alpha}_{2,k},\ldots,F^{\alpha}_{n,k},F^{\beta}_{1,k},F^{\beta}_{2,k},\ldots,F^{\beta}_{n,k}]^{T}$. According to Lemmas 1 and 2, we have

    $\lim_{k\to\infty}M^{k+1}=\mathbf{1}_{2n}\psi^{T}=[\psi,\psi,\ldots,\psi]^{T}\in\mathbb{R}^{2n\times 2n},$ (3.10)

    and

    $\lim_{k\to\infty}\sum_{t=0}^{k}M^{k+1-t}u_t=\lim_{k\to\infty}\sum_{t=0}^{k}\mathbf{1}_{2n}\psi^{T}u_t=\mathbf{1}_{2n}\psi^{T}\lim_{k\to\infty}F_k=\mathbf{1}_{2n}\psi^{T}\left[\frac{x^{\alpha}_{1,0}}{2n\psi_1}-x^{\alpha}_{1,0},\ldots,\frac{x^{\alpha}_{n,0}}{2n\psi_n}-x^{\alpha}_{n,0},\frac{x^{\beta}_{1,0}}{2n\psi_{n+1}}-x^{\beta}_{1,0},\ldots,\frac{x^{\beta}_{n,0}}{2n\psi_{2n}}-x^{\beta}_{n,0}\right]^{T}.$ (3.11)

    On the basis of Eq (3.11), we know that as $k\to\infty$, $\sum_{t=0}^{k}M^{k+1-t}u_t$ tends to a constant vector, which we denote by $C$, and thus we have

    $\lim_{k\to\infty}x_{k+1}=\lim_{k\to\infty}M^{k+1}x_0+\lim_{k\to\infty}\sum_{t=0}^{k}M^{k+1-t}u_t=\mathbf{1}_{2n}\psi^{T}x_0+C,$ i.e.,
    $\lim_{k\to\infty}\begin{bmatrix}x^{\alpha}_{1,k+1}\\ \vdots\\ x^{\alpha}_{n,k+1}\\ x^{\beta}_{1,k+1}\\ \vdots\\ x^{\beta}_{n,k+1}\end{bmatrix}=\begin{bmatrix}\psi_1 & \cdots & \psi_{2n}\\ \vdots & & \vdots\\ \psi_1 & \cdots & \psi_{2n}\end{bmatrix}\begin{bmatrix}x^{\alpha}_{1,0}\\ \vdots\\ x^{\alpha}_{n,0}\\ x^{\beta}_{1,0}\\ \vdots\\ x^{\beta}_{n,0}\end{bmatrix}+C,$

    which shows that $\lim_{k\to\infty}x^{\alpha}_{i,k+1}=\lim_{k\to\infty}x^{\beta}_{i,k+1}$.

    Therefore, we have

    $\lim_{k\to\infty}x^{\alpha}_{i,k+1}=\lim_{k\to\infty}x^{\beta}_{i,k+1}=\psi^{T}x_0+\psi^{T}\lim_{k\to\infty}F_k=\frac{x^{\alpha}_{1,0}}{2n}+\cdots+\frac{x^{\alpha}_{n,0}}{2n}+\frac{x^{\beta}_{1,0}}{2n}+\cdots+\frac{x^{\beta}_{n,0}}{2n}=\frac{2(x_{1,0}+\cdots+x_{n,0})}{2n}=\frac{1}{n}\sum_{i=1}^{n}x_{i,0}.$ (3.12)

    We thus have

    $\hat{x}^{ave}_i=\frac{\sum_{i=1}^{n}x^{\alpha}_i+\sum_{i=1}^{n}x^{\beta}_i}{2n}=\frac{1}{n}\sum_{i=1}^{n}x_{i,0}.$ (3.13)

    (2) We write $\bar{x}_t=v^{T}x_t/n$, $\bar{y}_t=\frac{1}{n}\sum_{i=1}^{n}\nabla f_i(x_{i,t})$, and $g_t=\frac{1}{n}\sum_{i=1}^{n}\nabla f_i(\bar{x}_t)$. From the analysis above, we know that at optimization iteration $t$, each agent can obtain the average gradient via Algorithm 1, i.e., $y_{i,t}=\bar{y}_t$ and $y_t=\mathbf{1}_n\bar{y}_t$. Hence, from the iteration (3.8), we can obtain

    $\bar{x}_{t+1}-x^{\ast}=\bar{x}_t-\eta\bar{y}_t-x^{\ast}=\bar{x}_t-\eta g_t-x^{\ast}-\eta(\bar{y}_t-g_t),$
    $x_{t+1}-\mathbf{1}_n\bar{x}_{t+1}=\bar{A}x_t-\eta y_t-\mathbf{1}_n\bar{x}_t+\eta\mathbf{1}_n\bar{y}_t=\left(\bar{A}-\mathbf{1}_n v^{T}/n\right)(x_t-\mathbf{1}_n\bar{x}_t).$

    If $\eta<1/(\mu+L)$, we have

    $\|\bar{x}_{t+1}-x^{\ast}\|_2\leq(1-\eta\mu)\|\bar{x}_t-x^{\ast}\|_2+\eta\|\bar{y}_t-g_t\|_2\leq(1-\eta\mu)\|\bar{x}_t-x^{\ast}\|_2+\frac{\eta L}{\sqrt{n}}\|x_t-\mathbf{1}_n\bar{x}_t\|_2.$ (3.14)

    Moreover, from the result of Lemma 4, we can obtain

    $\|\bar{x}_{t+1}-x^{\ast}\|_2\leq(1-\eta\mu)\|\bar{x}_t-x^{\ast}\|_2+\frac{\eta qL}{\sqrt{n}}\|x_t-\mathbf{1}_n\bar{x}_t\|_A,$

    where $q>0$ is the equivalence constant between the norms $\|\cdot\|_2$ and $\|\cdot\|_A$.

    Taking $V_t=\left[\|\bar{x}_t-x^{\ast}\|_2,\ \|x_t-\mathbf{1}_n\bar{x}_t\|_A\right]^{T}$, we have

    $V_{t+1}\leq PV_t,$ (3.15)

    where the transition matrix $P=\begin{pmatrix}1-\eta\mu & \eta qL/\sqrt{n}\\ 0 & \sigma_A\end{pmatrix}$.

    Since $0<1-\eta\mu<1$ and $0<\sigma_A<1$, the spectral radius of $P$ is strictly smaller than 1, and therefore $\|x_t-\mathbf{1}_n x^{\ast}\|_2$ converges to zero linearly.

    Remark 1. The algorithm we proposed achieves the optimal solution of the distributed optimization problem. Unlike the push-sum algorithm, in which each agent maintains two states and therefore exchanges more information with its neighboring agents, our algorithm requires less information to be exchanged, which reduces the communication cost.

    In this part, we provide a detailed privacy analysis of Algorithm 1 in terms of how it resists "attacks" from curious agents (inside the network) and eavesdroppers (outside the network). First, we give detailed descriptions of the honest-but-curious and eavesdropping attack models [25].

    1) Honest-but-curious attacks refer to situations where one or more participating agents (whether they collude or not) accurately adhere to every step of the protocol. Nevertheless, they are inquisitive and gather all the received intermediate data with the intention of uncovering sensitive information about other participating agents.

    2) Eavesdropping attacks are carried out by an adversary who manages to gain access to the information shared with other agents by infiltrating all the communication channels. Consequently, eavesdroppers are aware of the network's topology and all the data shared within the network. However, the state variables that are not shared will not become known to them.

    Similar to the analysis in [36], we define $\Delta_{I_A}(x_{p,i})=\{\bar{x}_{p,i}\mid I_A(0:K)\}$; this set encompasses all the possible states associated with $x_{p,i}$ when the information set accessible to the adversary $A$ is $I_A(0:K)$.

    The diameter of $\Delta_{I_A}(x_{p,i})$ is defined as

    $\mathrm{Diam}\{\Delta_{I_A}(x_{p,i})\}=\sup_{\bar{x}_{p,i},\,\bar{x}'_{p,i}\in\Delta_{I_A}(x_{p,i})}|\bar{x}_{p,i}-\bar{x}'_{p,i}|,$

    where $\bar{x}_{p,i}$ and $\bar{x}'_{p,i}$ are two disparate states belonging to the set $\Delta_{I_A}(x_{p,i})$.

    Now, we present Theorem 2.

    Theorem 2. Given Assumption 1, for every Agent iV, the privacy of Agent i can be preserved under Algorithm 1:

    (1) For a set of honest-but-curious agents $N$, as long as at least one neighbor of Agent $i$ is not in the set $N$, the private information of Agent $i$ is protected from being obtained by the set $N$.

    (2) For an eavesdropper $R$, if there is an edge $\varepsilon_{mi}$ or $\varepsilon_{im}$ that the eavesdropper $R$ cannot eavesdrop on, then the private information of Agent $i$ is protected from being obtained by the eavesdropper $R$.

    Proof. Since, under Algorithm 1, the maximum number of communication rounds equals $k-1$, the information sets $I_N(0:k-1)$ and $I_R(0:k-1)$ denote all the information accessible to the adversary. From Algorithm 2, it can be seen that the private information $\nabla f_i(x_{i,t})$, $t\geq 0$, is regarded as the input of Algorithm 1. Thus, it is enough to demonstrate that the privacy of the initial value $x_{i,0}$ of Agent $i$ can be preserved under Algorithm 1.

    (1) We use a method similar to that in [33] to prove that honest-but-curious neighbors cannot distinguish any change in a neighbor's initial value. More specifically, consider the following initial condition:

    $\hat{x}^{\alpha}_{i,0}=x^{\alpha}_{i,0},\quad \hat{x}^{\beta}_{i,0}=x^{\beta}_{i,0}+2\Delta,\quad \hat{x}^{\alpha}_{m,0}=x^{\alpha}_{m,0},\quad \hat{x}^{\beta}_{m,0}=x^{\beta}_{m,0}-2\Delta,\quad \hat{x}^{\alpha}_{r,0}=x^{\alpha}_{r,0},\quad \hat{x}^{\beta}_{r,0}=x^{\beta}_{r,0}.$ (3.16)

    We set the following weights:

    $\hat{a}_{ii}=a_{ii},\ \hat{a}_{ij}=a_{ij},\ \hat{a}_{im}=a_{im},\ \hat{\alpha}_i=\dfrac{\alpha_i(x^{\beta}_{i,0}+u^{\beta}_{i,0})}{x^{\beta}_{i,0}+u^{\beta}_{i,0}+2\Delta},$
    $\hat{a}_{mm}=a_{mm},\ \hat{a}_{mi}=a_{mi},\ \hat{a}_{mj}=a_{mj},\ \hat{\alpha}_m=\dfrac{\alpha_m(x^{\beta}_{m,0}+u^{\beta}_{m,0})}{x^{\beta}_{m,0}+u^{\beta}_{m,0}-2\Delta},$
    $\hat{p}_{ii}=\dfrac{p_{ii}(x^{\beta}_{i,0}+u^{\beta}_{i,0})}{x^{\beta}_{i,0}+u^{\beta}_{i,0}+2\Delta},\ \hat{p}_{mm}=\dfrac{p_{mm}(x^{\beta}_{m,0}+u^{\beta}_{m,0})}{x^{\beta}_{m,0}+u^{\beta}_{m,0}-2\Delta},$
    $\hat{\beta}_i=\beta_i,\ \hat{\beta}_m=\beta_m,\ \hat{a}_{rr}=a_{rr},\ \hat{a}_{rq}=a_{rq},\ \hat{\alpha}_r=\alpha_r,\ \hat{p}_{rr}=p_{rr},\ \hat{\beta}_r=\beta_r,$

    where $r\in V\setminus\{i,m\}$, $q\in V$, $\Delta$ is an arbitrary real number, and "$\setminus$" represents set subtraction. Since the external inputs $u^{l}_{i,k}$, $l=\alpha,\beta$, are unchanged for $i\in V$, it can easily be verified that $\hat{x}^{\alpha}_{i,1}=x^{\alpha}_{i,1}$ and $\hat{x}^{\alpha}_{m,1}=x^{\alpha}_{m,1}$. Moreover, since the weight values remain identical for $k\geq 1$, the updates of all agents are the same, namely $\hat{x}^{\alpha}_{i,k}=x^{\alpha}_{i,k}$, $i\in V$. Thus, the information received by Agent $j$ remains consistent even when the real initial state of Agent $i$ changes. We now have

    $\mathrm{Diam}(\Delta_{I_N(0:k-1)}(x_{i,0}))\geq\sup_{\Delta\in\mathbb{R}}|x_{i,0}-(x_{i,0}+\Delta)|=\infty.$

    From the definition in [36], we have proved the first statement.

    (2) For the second statement, because of topological limitations, the eavesdropper $R$ is unable to eavesdrop on $\varepsilon_{mi}$ or $\varepsilon_{im}$, where $m\in N_i^+$ or $m\in N_i^-$, i.e., $a_{mi}$ or $a_{im}$ is inaccessible to $R$. Moreover, since the self-weights $a_{ii},a_{mm}$ are not transmitted over the communication network, the eavesdropper $R$ cannot obtain any information about $a_{im},a_{mm}$ or $a_{mi},a_{ii}$. Subsequently, in a similar vein to the proof of the first statement, we have $\mathrm{Diam}(\Delta_{I_R(0:k-1)}(x_{i,0}))=\infty$, i.e., the privacy of $x_{i,0}$ is preserved.
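The weight construction in the proof of the first statement can be checked mechanically. In the sketch below (our own toy instance: a 3-agent ring, assumed weights, Agents 1 and 2 playing the roles of $i$ and $m$ in 0-based indexing, an arbitrary $\Delta$, and arbitrary external inputs held fixed across both runs), the modified initial values and weights produce exactly the same states from the first step on, so a curious neighbor's observations are unchanged.

```python
import numpy as np

n, D = 3, 7.0                        # D plays the role of Delta (arbitrary nonzero shift)
i0, m = 0, 1                         # protected agent and the cooperating neighbor (0-based)
a_self, a_nb = 0.4, 0.3              # assumed ring weights: a_self + a_nb + alpha_i = 1
alpha = np.full(n, 0.3); beta = np.full(n, 0.5); p = np.full(n, 0.5)

rng = np.random.default_rng(2)
xa = rng.uniform(-5, 5, n); xb = rng.uniform(-5, 5, n)  # initial sub-states
ua = rng.uniform(-1, 1, n); ub = rng.uniform(-1, 1, n)  # external inputs at k = 0 (held fixed)

def step(xa, xb, alpha_w, p_w):
    """One step of (3.4) on a ring digraph where agent i receives from agent i-1."""
    xa1 = np.array([a_self * (xa[i] + ua[i]) + a_nb * (xa[(i - 1) % n] + ua[(i - 1) % n])
                    + alpha_w[i] * (xb[i] + ub[i]) for i in range(n)])
    xb1 = p_w * (xb + ub) + beta * (xa + ua)
    return xa1, xb1

xa_o, xb_o = step(xa, xb, alpha, p)                     # original run, k = 0

xb_h = xb.copy(); xb_h[i0] += 2 * D; xb_h[m] -= 2 * D   # shifted hidden sub-states (3.16)
alpha_h, p_h = alpha.copy(), p.copy()                   # rescaled weights from the proof
alpha_h[i0] = alpha[i0] * (xb[i0] + ub[i0]) / (xb[i0] + ub[i0] + 2 * D)
p_h[i0] = p[i0] * (xb[i0] + ub[i0]) / (xb[i0] + ub[i0] + 2 * D)
alpha_h[m] = alpha[m] * (xb[m] + ub[m]) / (xb[m] + ub[m] - 2 * D)
p_h[m] = p[m] * (xb[m] + ub[m]) / (xb[m] + ub[m] - 2 * D)
xa_h, xb_h1 = step(xa, xb_h, alpha_h, p_h)              # hatted run, k = 0

# All states coincide at k = 1; with the original weights thereafter, both runs stay
# identical, so the observable x^alpha sequence reveals nothing about the Delta shift.
print(np.allclose(xa_o, xa_h), np.allclose(xb_o, xb_h1))  # -> True True
```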

    Remark 2. The state decomposition method we proposed enhances the privacy protection ability of distributed optimization on directed graphs, but it may also fail under certain circumstances. One potential vulnerability arises when adversaries possess highly correlated auxiliary data, which could enable inference attacks. Additionally, if an attacker continuously observes multiple decomposed states over time, they may reconstruct sensitive information through statistical analysis. Of course, the probability of this situation occurring is extremely low. To mitigate these risks, incorporating differential privacy mechanisms or applying obfuscation techniques to the decomposed states could enhance robustness. Future research will explore these strategies to further strengthen privacy guarantees in adversarial environments.

    In this part, we validate the convergence and privacy of the proposed algorithm by exemplary simulations. Consider a directed graph consisting of five agents, that is, $n=5$, as shown in Figure 2. The task of each agent is to find the nearest gathering point $x$ by cooperating with the other agents without exposing its initial position $p_i$, $i\in V$. The objective function of Agent $i$ is described by $f_i(x)=\frac{1}{2}\|x-p_i\|_2^2$, i.e.,

    $\min f(x)=\frac{1}{n}\sum_{i=1}^{n}f_i(x)=\frac{1}{5}\sum_{i=1}^{5}\frac{1}{2}\|x-p_i\|_2^2.$
    Figure 2.  Digraph of five agents.

    We suppose that the initial state values of the agents are $x_{1,0}=10$, $x_{2,0}=20$, $x_{3,0}=30$, $x_{4,0}=40$, and $x_{5,0}=50$, and select suitable weight values between the agents.
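Under the quadratic costs above, the optimum has a closed form: setting $\sum_i(x-p_i)=0$ gives $x^{\ast}=\frac{1}{n}\sum_i p_i$. A two-line check (assuming, as the simulation suggests, that the positions $p_i$ equal the initial values 10 through 50):

```python
import numpy as np

p = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # assumed positions p_i
x_star = p.mean()                             # minimizer of sum_i 0.5*(x - p_i)^2
print(x_star, np.sum(x_star - p))             # 30.0 and zero total gradient
```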

    The variation in the variable $x_k=[x_{1,k},x_{2,k},\ldots,x_{5,k}]^{T}$ with respect to the iteration $k$ is shown in Figure 3. It can be observed that, for these five agents, by applying Algorithms 1 and 2, $x_k$ successfully converges to the optimal solution $x^{\ast}$.

    Figure 3.  The variation in $\|x_k-\mathbf{1}_n x^{\ast}\|_2$ with respect to the iteration $k$.

    In addition, we show that our algorithm saves communication resources compared with algorithms such as the push-sum-based one in [36]. We record the amount of data that Agent 1 needs to send to its neighbors throughout the iteration process. As shown in Figure 4, the amount of data our algorithm sends during the iterations is lower; the same applies to the other agents. This demonstrates that our algorithm successfully saves communication resources.

    Figure 4.  Record of the amount of data.

    In the end, we assume that there is an eavesdropper that wants to obtain the information of Agent 1, where $z_{1,k}$ represents the information of Agent 1 obtained by the eavesdropper. When the privacy-preserving algorithm is not used, the result is shown in Figure 5(a). Although convergence is eventually achieved, the eavesdropper successfully obtains the information of Agent 1. After using Algorithm 1, the convergence behavior is shown in Figure 5(b). Not only does the algorithm converge, but the eavesdropper also cannot steal the initial value of Agent 1, and thus the privacy of Agent 1 is protected.

    Figure 5.  The observation results of the external eavesdropper.
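    The decomposition step that hides the initial value can be illustrated in a few lines. This is a simplified sketch in the spirit of the state decomposition of [33], not the paper's exact Algorithm 1: each agent replaces its state by two sub-states whose mean equals the original value, transmits only the first, and keeps the second to itself.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# True initial states x_{i,0} (the secrets to be protected).
x0 = np.array([10.0, 20.0, 30.0, 40.0, 50.0])

# Decomposition: pick the visible sub-state x^a uniformly at random, and
# set the hidden sub-state x^b so that (x^a + x^b)/2 = x_{i,0}. An
# eavesdropper who sees x^a learns nothing about x_{i,0}, since x^a is
# independent of it.
x_a = rng.uniform(-100.0, 100.0, size=x0.shape)
x_b = 2.0 * x0 - x_a

# The network-wide average over the 2n sub-states is unchanged, so an
# average-consensus routine run over the sub-states still reaches the
# true average of the original states.
print(np.concatenate([x_a, x_b]).mean())  # equals x0.mean() = 30.0
```

The design choice is that privacy comes from randomizing what is exposed while preserving the quantity the network actually needs (here, the average), rather than from perturbing the states with persistent noise.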

    In this paper, we proposed a novel privacy-preserving distributed optimization algorithm for directed graphs. By leveraging a state decomposition method, our approach ensures the protection of sensitive information while enabling agents to collaboratively minimize the sum of cost functions. Unlike traditional methods, the introduction of external inputs effectively reduces the communication overhead, enhancing efficiency without compromising convergence to the optimal solution. Numerical simulations demonstrate the effectiveness and practicality of the proposed method, confirming its potential for addressing privacy concerns in distributed optimization problems on directed graphs. Future work involves enhancing the optimization accuracy and addressing constrained optimization problems.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The work was supported by the National Natural Science Foundation of China under Grants 62203045 and 62433020. The material in this paper was not presented at any conference.

    The authors declare there are no conflicts of interest.



    [1] A. Priolo, A. Gasparri, E. Montijano, C. Sagues, A distributed algorithm for average consensus on strongly connected weighted digraphs, Automatica, 50 (2014), 946–951. https://doi.org/10.1016/j.automatica.2013.12.026 doi: 10.1016/j.automatica.2013.12.026
    [2] B. Houska, J. Frasch, M. Diehl, An augmented Lagrangian based algorithm for distributed nonconvex optimization, SIAM J. Optim., 26 (2016), 1101–1127. https://doi.org/10.1137/140975991 doi: 10.1137/140975991
    [3] R. Mohebifard, A. Hajbabaie, Distributed optimization and coordination algorithms for dynamic traffic metering in urban street networks, IEEE Trans. Intell. Transp. Syst., 20 (2019), 1930–1941. https://doi.org/10.1109/TITS.2018.2848246 doi: 10.1109/TITS.2018.2848246
    [4] G. Chen, Q. Yang, Distributed constrained optimization for multiagent networks with nonsmooth objective functions, Syst. Control Lett., 124 (2019), 60–67. https://doi.org/10.1016/j.sysconle.2018.12.005 doi: 10.1016/j.sysconle.2018.12.005
    [5] M. Tajalli, A. Hajbabaie, Distributed optimization and coordination algorithms for dynamic speed optimization of connected and autonomous vehicles in urban street networks, Transp. Res. Part C Emerging Technol., 95 (2018), 497–515. https://doi.org/10.1016/j.trc.2018.07.012 doi: 10.1016/j.trc.2018.07.012
    [6] J. Zhou, R. Q. Hu, Y. Qian, Scalable distributed communication architectures to support advanced metering infrastructure in smart grid, IEEE Trans. Parallel Distrib. Syst., 23 (2012), 1632–1642. https://doi.org/10.1109/TPDS.2012.53 doi: 10.1109/TPDS.2012.53
    [7] F. Lai, Y. Dai, S. Singapuram, J. Liu, X. Zhu, H. Madhyastha, et al., Fedscale: benchmarking model and system performance of federated learning at scale, in Proceedings of the 39th International Conference on Machine Learning, 162 (2022), 11814–11827. https://doi.org/10.48550/arXiv.2105.11367
    [8] Z. Chen, Y. Wang, Privacy-preserving distributed optimization and learning, preprint, arXiv: 2403.00157, 2024. https://doi.org/10.48550/arXiv.2403.00157
    [9] P. Braun, L. Grüne, C. M. Kellett, S. R. Weller, K. Worthmann, A distributed optimization algorithm for the predictive control of smart grids, IEEE Trans. Autom. Control, 61 (2016), 3898–3911. https://doi.org/10.1109/TAC.2016.2525808 doi: 10.1109/TAC.2016.2525808
    [10] W. Chen, Z. Wang, Q. Liu, D. Yue, G. P. Liu, A new privacy-preserving average consensus algorithm with two-phase structure: Applications to load sharing of microgrids, Automatica, 167 (2024), 111715. https://doi.org/10.1016/j.automatica.2024.111715 doi: 10.1016/j.automatica.2024.111715
    [11] W. Chen, L. Liu, G. P. Liu, Privacy-preserving distributed economic dispatch of microgrids: A dynamic quantization-based consensus scheme with homomorphic encryption, IEEE Trans. Smart Grid, 14 (2022), 701–713. https://doi.org/10.1109/TSG.2022.3189665 doi: 10.1109/TSG.2022.3189665
    [12] W. Zhang, D. Yang, W. Wu, H. Peng, N. Zhang, H. Zhang, et al., Optimizing federated learning in distributed industrial IoT: A multi-agent approach, IEEE J. Sel. Areas Commun., 39 (2021), 3688–3703. https://doi.org/10.1109/JSAC.2021.3118352 doi: 10.1109/JSAC.2021.3118352
    [13] Z. Hu, K. Shaloudegi, G. Zhang, Y. Yu, Federated learning meets multi-objective optimization, IEEE Trans. Network Sci. Eng., 9 (2022), 2039–2051. https://doi.org/10.1109/TNSE.2022.3169117 doi: 10.1109/TNSE.2022.3169117
    [14] T. Wang, Y. Liu, X. Zheng, H. N. Dai, W. Jia, M. Xie, Edge-based communication optimization for distributed federated learning, IEEE Trans. Network Sci. Eng., 9 (2021), 2015–2024. https://doi.org/10.1109/TNSE.2021.3083263 doi: 10.1109/TNSE.2021.3083263
    [15] A. Mang, A. Gholami, C. Davatzikos, G. Biros, PDE-constrained optimization in medical image analysis, Optim. Eng., 19 (2018), 765–812. https://doi.org/10.1007/s11081-018-9390-9 doi: 10.1007/s11081-018-9390-9
    [16] V. Nikitin, V. D. Andrade, A. Slyamov, B. J. Gould, Y. Zhang, V. Sampathkumar, et al., Distributed optimization for nonrigid nano-tomography, IEEE Trans. Comput. Imaging, 7 (2021), 272–287. https://doi.org/10.1109/TCI.2021.3060915 doi: 10.1109/TCI.2021.3060915
    [17] Y. Gao, M. Kim, C. Thapa, A. Abuadbba, Z. Zhang, S. Camtepe, et al., Evaluation and optimization of distributed machine learning techniques for internet of things, IEEE Trans. Comput., 71 (2021), 2538–2552. https://doi.org/10.1109/TC.2021.3135752 doi: 10.1109/TC.2021.3135752
    [18] R. Anand, D. Pandey, D. N. Gupta, M. K. Dharani, N. Sindhwani, J. V. N. Ramesh, Wireless sensor-based IoT system with distributed optimization for healthcare, in Meta Heuristic Algorithms for Advanced Distributed Systems, Wiley Data and Cybersecurity, (2024), 261–288. https://doi.org/10.1002/9781394188093.ch16
    [19] X. Cai, K. Shi, Y. Sun, J. Cao, S. Wen, C. Qiao, et al., Stability analysis of networked control systems under DoS attacks and security controller design with mini-batch machine learning supervision, IEEE Trans. Inf. Forensics Secur., 19 (2023), 3857–3865. https://doi.org/10.1109/TIFS.2023.3347889 doi: 10.1109/TIFS.2023.3347889
    [20] E. Nozari, P. Tallapragada, J. Cortés, Differentially private average consensus: obstructions, trade-offs, and optimal algorithm design, Automatica, 81 (2017), 221–231. https://doi.org/10.1016/j.automatica.2017.03.016 doi: 10.1016/j.automatica.2017.03.016
    [21] Z. Huang, S. Mitra, N. Vaidya, Differentially private distributed optimization, in Proceedings of the 16th International Conference on Distributed Computing and Networking, (2015), 1–10. https://doi.org/10.1145/2684464.268448
    [22] Y. Xiong, J. Xu, K. You, J. Liu, L. Wu, Privacy-preserving distributed online optimization over unbalanced digraphs via subgradient rescaling, IEEE Trans. Control Network Syst., 7 (2020), 1366–1378. https://doi.org/10.1109/TCNS.2020.2976273 doi: 10.1109/TCNS.2020.2976273
    [23] E. Nozari, P. Tallapragada, J. Cortés, Differentially private distributed convex optimization via functional perturbation, IEEE Trans. Control Network Syst., 5 (2016), 395–408. https://doi.org/10.1109/TCNS.2016.2614100 doi: 10.1109/TCNS.2016.2614100
    [24] T. Ding, S. Zhu, J. He, C. Chen, Differentially private distributed optimization via state and direction perturbation in multiagent systems, IEEE Trans. Autom. Control, 67 (2021), 722–737. https://doi.org/10.1109/TAC.2021.3059427 doi: 10.1109/TAC.2021.3059427
    [25] Y. Wang, T. Başar, Decentralized nonconvex optimization with guaranteed privacy and accuracy, Automatica, 150 (2023), 110858. https://doi.org/10.1016/j.automatica.2023.110858 doi: 10.1016/j.automatica.2023.110858
    [26] H. Wang, Q. Zhao, Q. Wu, S. Chopra, A. Khaitan, H. Wang, Global and local differential privacy for collaborative bandits, in Proceedings of the 14th ACM Conference on Recommender Systems, (2020), 150–159. https://doi.org/10.1145/3383313.3412254
    [27] W. Lin, B. Li, C. Wang, Towards private learning on decentralized graphs with local differential privacy, IEEE Trans. Inf. Forensic Secur., 17 (2022), 2936–2946. https://doi.org/10.1109/TIFS.2022.3198283 doi: 10.1109/TIFS.2022.3198283
    [28] Y. Zhao, J. Zhao, M. Yang, T. Wang, N. Wang, L. Lyu, et al., Local differential privacy-based federated learning for internet of things, IEEE Internet Things J., 8 (2020), 8836–8853. https://doi.org/10.1109/JIOT.2020.3037194 doi: 10.1109/JIOT.2020.3037194
    [29] C. Zhang, Y. Wang, Enabling privacy-preservation in decentralized optimization, IEEE Trans. Control Network Syst., 6 (2018), 679–689. https://doi.org/10.1109/TCNS.2018.2873152 doi: 10.1109/TCNS.2018.2873152
    [30] C. N. Hadjicostis, A. D. Domínguez-García, Privacy-preserving distributed averaging via homomorphically encrypted ratio consensus, IEEE Trans. Autom. Control, 65 (2020), 3887–3894. https://doi.org/10.1109/TAC.2020.2968876 doi: 10.1109/TAC.2020.2968876
    [31] C. Zhang, M. Ahmad, Y. Wang, ADMM based privacy-preserving decentralized optimization, IEEE Trans. Inf. Forensics Secur., 14 (2018), 565–580. https://doi.org/10.1109/TIFS.2018.2855169 doi: 10.1109/TIFS.2018.2855169
    [32] Y. Lu, M. Zhu, Privacy preserving distributed optimization using homomorphic encryption, Automatica, 96 (2018), 314–325. https://doi.org/10.1016/j.automatica.2018.07.005 doi: 10.1016/j.automatica.2018.07.005
    [33] Y. Wang, Privacy-preserving average consensus via state decomposition, IEEE Trans. Autom. Control, 64 (2019), 4711–4716. https://doi.org/10.1109/TAC.2019.2902731 doi: 10.1109/TAC.2019.2902731
    [34] S. Lin, F. Wang, Z. Liu, Z. Chen, Privacy-preserving average consensus via enhanced state decomposition, in 2022 41st Chinese Control Conference (CCC), (2022), 4484–4488. https://doi.org/10.23919/CCC55666.2022.9902777
    [35] X. Chen, L. Huang, K. Ding, S. Dey, L. Shi, Privacy-preserving push-sum average consensus via state decomposition, IEEE Trans. Autom. Control, 68 (2023), 7974–7981. https://doi.org/10.1109/TAC.2023.3256479 doi: 10.1109/TAC.2023.3256479
    [36] X. Chen, W. Jiang, T. Charalambous, L. Shi, A privacy-preserving finite-time push-sum based gradient method for distributed optimization over digraphs, IEEE Control Syst. Lett., 7 (2023), 3133–3138. https://doi.org/10.1109/LCSYS.2023.3292463 doi: 10.1109/LCSYS.2023.3292463
    [37] J. Zhang, J. Q. Lu, C. N. Hadjicostis, Average consensus for expressed and private opinions, IEEE Trans. Autom. Control, 69 (2024), 5627–5634. https://doi.org/10.1109/TAC.2024.3379256 doi: 10.1109/TAC.2024.3379256
    [38] S. Pu, W. Shi, J. Xu, A. Nedic, Push-pull gradient methods for distributed optimization in networks, IEEE Trans. Autom. Control, 66 (2020), 1–16. https://doi.org/10.1109/TAC.2020.2972824 doi: 10.1109/TAC.2020.2972824
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)