Maximum likelihood-based identification for FIR systems with binary observations and data tampering attacks

    Citation: Xinchang Guo, Jiahao Fan, Yan Liu. Maximum likelihood-based identification for FIR systems with binary observations and data tampering attacks[J]. Electronic Research Archive, 2024, 32(6): 4181-4198. doi: 10.3934/era.2024188



    In today's highly interconnected world, data privacy protection has become a critical concern across various application domains. Personal medical records, industrial control parameters, and even trade secrets are increasingly vulnerable to attacks or misuse [1,2,3,4]. When sensitive information is maliciously accessed or tampered with, it can lead not only to significant economic and societal losses but also to a destabilization of system security and stability. Therefore, the effective protection of data privacy and the prevention of unauthorized inferences and misuse of both individual and core information have become central issues of widespread interest in both academic and industrial circles [5,6,7].

    The primary benefit of data privacy protection lies in its ability to intelligently mask, encrypt, or perturb personal and system-critical data, thereby preventing external attackers from reverse-engineering sensitive information. By implementing robust privacy-preserving techniques such as differential privacy, homomorphic encryption, and secure multi-party computation, organizations can effectively minimize security risks while maintaining the integrity and utility of the data. This ensures that individuals' personal privacy, corporate trade secrets, and other confidential information remain safeguarded during data collection, processing, analysis, and sharing. Moreover, a well-designed privacy protection framework fosters user trust, strengthens regulatory compliance, and enhances the overall resilience of digital systems against cyber threats, contributing to a more secure and sustainable data-driven ecosystem.

    Over the years, researchers have developed various strategies for data security and privacy protection from both theoretical and practical perspectives. For instance, Rawat et al., in their exploration of cyber-physical systems (CPS) in smart grids and healthcare systems, emphasized the importance of security measures at both the network and physical layers for ensuring data confidentiality [8]. Ljung's systematic research on system identification laid the foundation for subsequent work on modeling and algorithmic optimization related to privacy protection and assessment [9]. Pouliquen et al. proposed a robust parameter estimation framework for scenarios with limited information based on binary measurements [10]. In a related study, Guo et al. developed an adaptive tracking approach for first-order systems using binary-valued observations with fixed thresholds, enhancing the accuracy of parameter estimation in constrained data environments [11]. The framework of local differential privacy has been widely adopted in data statistics and analysis tasks, with comprehensive surveys such as that of Wang et al. demonstrating its crucial role in frequency estimation and machine learning applications under distributed settings [12]. Moreover, Ding et al. and Liu have each proposed methods for distributed control and networked predictive control in industrial CPS, embedding security and privacy requirements into real-time collaborative control processes [13,14]. In addition, Mahmoud et al. introduced a multi-layer defense approach for network attack modeling and vulnerability assessment, offering innovative solutions for maintaining data confidentiality against diverse threats [15]. Recent work by Taheri et al. explored differential privacy-driven adversarial attacks, such as noise injection, to degrade deep learning-based malware detection, proposing defenses to enhance robustness [16]. In contrast, our differential privacy-based FIR system identification algorithm uses Laplace and randomized response mechanisms to encrypt data, ensuring parameter estimation accuracy in cyber-physical systems under privacy and security constraints.

    In the realm of federated learning, data and model poisoning attacks pose significant challenges to the integrity and reliability of distributed systems, particularly impacting the accuracy of system identification processes. Nabavirazavi et al. proposed a randomization and mixture approach to enhance the robustness of federated learning against such attacks, demonstrating improved resilience in parameter estimation under adversarial conditions [17]. Similarly, Taheri et al. introduced robust aggregation functions to mitigate the effects of poisoned data, ensuring more reliable model updates in federated environments [18]. Nowroozi et al. exposed vulnerabilities in federated learning through data poisoning attacks, highlighting their detrimental impact on network security and the need for robust defenses to protect system dynamics [19]. Furthermore, Nabavirazavi et al. explored the impact of aggregation function randomization to counter model poisoning, emphasizing the importance of accurate parameter estimation despite malicious interventions [20]. These studies underscore the importance of accurate parameter estimation in adversarial environments. However, despite these advances in federated learning and privacy-preserving training, less attention has been paid to the fundamental task of system identification—an essential step for capturing system dynamics and ensuring the effectiveness of both attack detection and privacy protection strategies.

    System identification is a key process in ensuring that attack detection and privacy protection measures are fully effective. Given the challenges of data tampering and privacy preservation, this paper focuses on the problem of data privacy protection from the perspective of system identification, specifically targeting the issue of data tampering under binary observation conditions [21,22].

    System identification in the presence of binary observations and malicious tampering presents several challenges, including insufficient observational information, the presence of Gaussian noise combined with tampering interference, and the degradation of accuracy due to privacy noise [1,23]. First, binary observations significantly reduce the available data, since they only indicate whether the system output lies above or below a constant (the threshold), increasing uncertainty in parameter estimation. Subtle changes in continuous signals may be lost under binary observation, resulting in information loss and increasing the bias and variance of FIR system parameter estimation, thereby reducing identification accuracy. Second, tampering by attackers can disrupt the data patterns critical for accurate identification. Finally, privacy encryption mechanisms, while protecting data sensitivity by introducing random noise, may negatively affect estimation accuracy. To address these challenges, this paper applies a dual encryption approach based on differential privacy to both the system input and the binary output. This strategy not only prevents attackers from inferring or tampering with the data but also, through improved estimation algorithms and convergence analysis, achieves reliable system parameter identification, balancing the dual objectives of privacy protection and identification accuracy.

    The paper proposes a dual differential privacy encryption strategy for system identification: First, Laplace noise is applied to the system input to prevent reverse-engineering of the system's structure or parameters. Then, binary observations are perturbed probabilistically to comply with differential privacy, protecting sensitive information at the output from being accessed or altered. An improved parameter estimation algorithm is introduced, with asymptotic analysis proving the strong consistency and convergence of the proposed method. Numerical simulations demonstrate that high identification accuracy can be maintained, even under various adversarial attack scenarios. This work offers a novel perspective and technical foundation for applying differential privacy in cyber-physical systems, addressing both data security and system identification needs.

    This paper makes the following contributions:

    ● A differential privacy encryption algorithm for FIR system inputs is proposed, enhancing data privacy and security.

    ● A differential privacy encryption strategy under binary observation conditions is investigated, presenting a discrete 0-1 sequence differential privacy encryption method.

    ● Parameter estimation results under differential privacy encryption are validated, demonstrating the effectiveness of the proposed method in ensuring both data security and accurate system identification.

    The rest of this paper is organized as follows: Section 2 introduces the FIR system model and discusses the data tampering threats under binary observations, formally stating the research problem. Section 3 details the dual differential privacy encryption methods for both input and output, along with the corresponding random noise models. Section 4 presents the parameter identification algorithm based on encrypted observation signals, accompanied by convergence analysis and strong consistency proofs. Section 5 explores the relationship between encryption budget and identification accuracy, presenting an optimization approach using Lagrange multipliers to determine the optimal encryption strength. Section 6 provides numerical simulations and compares results. Section 7 summarizes the work and discusses future research directions.

    The object of this study is a single-input single-output finite impulse response (FIR) system, and its model is given below:

    y_k = a_1 u_k + a_2 u_{k-1} + \cdots + a_n u_{k-n+1} + d_k, \quad (1)

    which can be simplified to the following form:

    y_k = \phi_k^T \theta + d_k, \quad k = 1, 2, \ldots, \quad (2)

    where u_k is the system input, \phi_k = [u_k, u_{k-1}, \ldots, u_{k-n+1}]^T is the regression vector composed of the input signals, and \theta = [a_1, a_2, \ldots, a_n]^T is the vector of system parameters to be estimated. d_k is independent and identically distributed Gaussian noise with mean 0 and variance \sigma^2.

    The output yk is measured by a binary sensor with a threshold C, and the resulting binary observation is

    s_k^0 = I_{\{y_k \le C\}} = \begin{cases} 1, & y_k \le C, \\ 0, & \text{otherwise}. \end{cases} \quad (3)

    The binary observation model s_k^0 = I_{\{y_k \le C\}} reflects a realistic scenario where only thresholded information is available from sensors. This situation arises frequently in practical FIR systems deployed in industrial monitoring or smart grid applications, where energy-efficient or low-cost sensors output only one-bit comparisons. The observation signal s_k^0 is transmitted through the network to the estimation center, but during the transmission process, it may be subject to data tampering by malicious attackers.
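    To make the setup concrete, the following is a minimal simulation sketch of the model in Eqs (1)-(3), assuming numpy; the function name and shared random generator are our illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def fir_binary_observe(u, theta, C, sigma, rng):
    """Simulate y_k = phi_k^T theta + d_k (Eq (2)) and the binary
    sensor s_k^0 = I{y_k <= C} (Eq (3))."""
    n, N = len(theta), len(u)
    s0 = np.zeros(N, dtype=int)
    for k in range(n - 1, N):
        phi_k = u[k - n + 1:k + 1][::-1]              # [u_k, u_{k-1}, ..., u_{k-n+1}]
        y_k = phi_k @ theta + rng.normal(0.0, sigma)  # additive Gaussian noise d_k
        s0[k] = int(y_k <= C)
    return s0
```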

    The signal s_k received by the estimation center may differ from the original signal s_k^0, and the relationship between them is given by:

    \Pr(s_k = 0 \mid s_k^0 = 1) = p, \quad \Pr(s_k = 1 \mid s_k^0 = 0) = q. \quad (4)

    We abbreviate the above data tampering attack strategy as (p,q). In this case, the estimation center estimates the system parameter θ based on the received signal sk [24,25].

    Remark 2.1 Specifically, p denotes the probability that an original output of 1 is maliciously flipped to 0, and q denotes the probability that an original output of 0 is flipped to 1. These parameters characterize a probabilistic tampering model in which binary-valued observations are subject to asymmetric flipping attacks.
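    As a concrete illustration of the (p, q) channel of Eq (4), a vectorized sketch might look as follows (the function name is ours; it reuses the rng from the earlier sketch):

```python
def tamper(s0, p, q, rng):
    """(p, q) attack of Eq (4): a transmitted 1 is flipped to 0 with
    probability p, and a transmitted 0 is flipped to 1 with probability q."""
    r = rng.random(len(s0))
    flipped = ((s0 == 1) & (r < p)) | ((s0 == 0) & (r < q))
    return np.where(flipped, 1 - s0, s0)
```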

    Remark 2.2 Binary quantization affects both noise characteristics and system identification accuracy: the additive noise becomes nonlinearly embedded in the binary output, and each observation carries limited information.

    Differential privacy is a powerful privacy protection mechanism, and its core idea is to add random noise to the query results, limiting the impact of a single data point on the output, thus ensuring that attackers cannot infer individual information by observing the output. This process not only ensures that the system's input and output data are not directly accessed by attackers, but also effectively prevents attackers from reverse-engineering sensitive system information through input data.

    The use of differential privacy in FIR identification also has its pros and cons. Differential privacy-based encryption algorithms can limit the impact of data tampering attacks on the FIR system, but encryption itself involves adding noise, which also perturbs the data in a manner similar to tampering. Therefore, designing an encryption algorithm that meets both the accuracy requirements for parameter estimation and the need to limit data tampering attacks is the key focus of our research.

    Figure 1 presents the end-to-end workflow of the proposed privacy-preserving FIR system identification framework under potential data tampering attacks. The process begins with the input signal u_k, which is passed through a Laplace mechanism \mathrm{Lap}(\Delta u / \epsilon) to produce a perturbed input u_k', ensuring differential privacy at the input level. This perturbed input, together with the system's internal dynamics and additive noise d_k, is processed by the FIR system to produce the output y_k'. The output is then passed through a binary-valued sensor that generates a thresholded observation s_k^0 = I_{\{y_k' \le C\}}. To preserve output-side privacy, s_k^0 is further randomized via a randomized response mechanism, resulting in s_k^{0'}. This privatized observation is then transmitted over a communication network that may be subject to data tampering. During transmission, an attacker may flip the value of s_k^{0'} with certain probabilities, yielding the final corrupted observation s_k, which is received by the estimation center. The estimation center then applies a robust identification algorithm to estimate the unknown system parameters from the received data. This pipeline integrates input perturbation, output privatization, and robustness against data manipulation into a unified privacy-preserving system identification framework.

    Figure 1.  System configuration.

    This paper aims to address the following two questions:

    ● How can the harm caused by data tampering be mitigated or prevented through differential privacy algorithms?

    ● How can we design a parameter estimation algorithm, and what are the parameter estimation algorithm and the optimal differential privacy algorithm?

    In this section, we consider an adversarial setting where the collected input-output data may be subject to tampering by untrusted components or compromised observers. Unlike traditional cryptographic methods such as MACs or digital signatures, which aim to detect and reject tampered data, our approach leverages differential privacy as a proactive defense. By adding calibrated randomness to both input and output channels, the encrypted signals inherently reduce the effectiveness of tampered data and enhance the robustness of system identification against malicious interference.

    We first provide the standard definition of differential privacy. Let F be a randomized algorithm. For any two neighboring datasets x and x', the output distributions satisfy the following inequality:

    \frac{\Pr(F(x) = S)}{\Pr(F(x') = S)} \le e^{\varepsilon}. \quad (5)

    Here, S is a possible output of the algorithm, and \varepsilon is the privacy budget, which measures the degree of difference between the two output distributions [1,23]. To ensure the privacy of the input data u_i in the FIR system, we encrypt the input signal using the Laplace mechanism of differential privacy. The Laplace mechanism was selected for input encryption because it provides a strong \varepsilon-differential privacy guarantee and fits the bounded input signals of FIR systems. Alternatives such as the Gaussian mechanism, which offers only the weaker (\varepsilon, \delta)-differential privacy guarantee, were considered unsuitable for this FIR application.

    By adding Laplace noise to u_i, we have:

    u_i' = u_i + \mathrm{Lap}\!\left( \frac{\Delta u_i}{\varepsilon_0} \right), \quad (6)

    where \mathrm{Lap}(\Delta u_i / \varepsilon_0) denotes a random variable drawn from the Laplace distribution with mean 0 and scale parameter b = \Delta u_i / \varepsilon_0. The probability density function of the Laplace distribution is given by:

    \mathrm{Lap}(x \mid b) = \frac{1}{2b} \exp\!\left( -\frac{|x|}{b} \right), \quad x \in \mathbb{R}, \quad (7)

    where \Delta u_i is the sensitivity of the input u_i, representing the maximum change in the system's input, and the privacy budget \varepsilon_0 determines the magnitude of the noise. The sensitivity \Delta f measures the maximum change of the query function f over neighboring datasets, defined as

    \Delta f = \max_{x, x'} \| f(x) - f(x') \|, \quad (8)

    where x and x' are neighboring datasets differing in only one row. From Eq (8), we can derive that

    \Delta u_i = \max_{u_k, u_k'} | y_k - y_k' |, \quad (9)

    where y_k and y_k' are the system outputs corresponding to the current input u_k and its neighboring input u_k', respectively, as described by Eq (1). Substituting this in, we get

    \Delta u_i = \max_{u_k, u_k'} \left| \sum_{i=0}^{n-1} a_{i+1} (u_{k-i} - u_{k-i}') \right|. \quad (10)

    If the distance between the neighboring input sets u_k and u_k' is a fixed value o (or o is the chosen mean), the sensitivity can be expressed as

    \Delta u_i = \left| \sum_{i=0}^{n-1} a_{i+1} \, o \right|. \quad (11)

    Therefore, from Eqs (6) and (11), the encrypted result after adding noise to u_i is

    u_i' = u_i + \mathrm{Lap}\!\left( \frac{\left| \sum_{i=0}^{n-1} a_{i+1} \, o \right|}{\varepsilon_0} \right). \quad (12)

    After encrypting the input u_i, the regression vector \phi_k = [u_k, u_{k-1}, \ldots, u_{k-n+1}]^T formed by the input signals is also affected by the Laplace noise. Let the noise vector be L_k = [l_1, l_2, \ldots, l_n]^T, where l_i = \mathrm{Lap}(\Delta u_i / \varepsilon_0). The encrypted regression vector is

    \phi_k' = \phi_k + L_k. \quad (13)

    Substituting the encrypted \phi_k' into the system output expression Eq (2), we obtain the encrypted system output

    y_k' = (\phi_k + L_k)^T \theta + d_k = \phi_k^T \theta + L_k^T \theta + d_k. \quad (14)

    This implies that the statistical properties of the system output change under the influence of the Laplace noise. By binarizing the system output y_k', we obtain the observed signal s_k^0. Assume we have a binary sensor with threshold C, which converts y_k' into the binarized observation s_k^0:

    s_k^0 = I_{\{y_k' \le C\}} = \begin{cases} 1, & y_k' \le C, \\ 0, & \text{otherwise}. \end{cases} \quad (15)

    Based on Eqs (13) and (14), the center will estimate the system parameter \theta from the binarized signal s_k^0. Combining the above, the output s_k^0 can be expressed as

    s_k^0 = I_{\{\phi_k^T \theta + L_k^T \theta + d_k \le C\}}. \quad (16)

    By introducing Laplace noise, we ensure the privacy of the system input u_i. In the following sections, we discuss how to encrypt s_k^0 under the differential privacy mechanism and how to estimate the system parameter \theta under data tampering attacks.

    In the system, we define \Pr(s_k^0 = 1) = \lambda_k, which represents the probability that the signal s_k^0 equals 1 in each period n. To protect this probability information, we apply differential privacy encryption to s_k^0. Specifically, each s_k^0 retains its original value with probability \gamma, and with probability 1 - \gamma we perform a negation operation. This operation can be formalized as

    s_k^{0'} = \begin{cases} s_k^0, & \text{with probability } \gamma, \\ 1 - s_k^0, & \text{with probability } 1 - \gamma. \end{cases} \quad (17)

    This encryption process introduces randomness and achieves the goal of differential privacy: whether s_k^0 is 0 or 1, an external attacker would find it difficult to infer the original value of s_k^0 by observing s_k^{0'}, thus protecting data privacy. We have

    \Pr(s_k^{0'} = 1) = \lambda_k \gamma + (1 - \lambda_k)(1 - \gamma), \quad \Pr(s_k^{0'} = 0) = (1 - \lambda_k)\gamma + \lambda_k (1 - \gamma). \quad (18)

    To analyze the encrypted s_k^{0'}, we construct its likelihood function L to correct the perturbed encryption results. The likelihood function can be expressed as

    L = \big[ \lambda_k \gamma + (1 - \lambda_k)(1 - \gamma) \big]^m \, \big[ (1 - \lambda_k)\gamma + \lambda_k (1 - \gamma) \big]^{n - m}, \quad (19)

    where m = \sum_{i=1}^n X_i is the observed number of times s_k^{0'} = 1, and X_i is the value of the i-th encrypted signal s_k^{0'}. Maximizing this likelihood function yields the maximum likelihood estimate of \lambda_k after encryption:

    \hat{\lambda}_k = \frac{\gamma - 1}{2\gamma - 1} + \frac{1}{2\gamma - 1} \cdot \frac{\sum_{i=1}^n X_i}{n}. \quad (20)

    This estimate expresses the effect of the encrypted s_k^{0'} on the recovery of the true \lambda_k.

    To further analyze the unbiasedness of the estimate, we compute its mathematical expectation and show that the estimate is unbiased. The specific derivation is as follows:

    E(\hat{\lambda}_k) = \frac{1}{2\gamma - 1} \left[ \gamma - 1 + \frac{1}{n} \sum_{i=1}^n E(X_i) \right] = \lambda_k. \quad (21)

    Based on the unbiased estimate, the expected number N of signals equal to 1 in each period n is

    N = \hat{\lambda}_k \times n = \frac{\gamma - 1}{2\gamma - 1} n + \frac{1}{2\gamma - 1} \sum_{i=1}^n X_i. \quad (22)

    According to the definition of differential privacy, the relationship between the output privacy parameter \varepsilon_1 and \gamma is

    \gamma \le e^{\varepsilon_1} (1 - \gamma). \quad (23)

    By further derivation, we obtain the calculation formula for the output privacy budget \varepsilon_1 as

    \varepsilon_1 = \ln\!\left( \frac{\gamma}{1 - \gamma} \right). \quad (24)

    Through the above encryption mechanism [12], we ensure that the system satisfies differential privacy while maintaining high robustness against interference from attackers on the observation s_k. During data transmission, the observed value s_k^{0'} may be subjected to attacks, and the tampering probabilities of the observed value s_k^{0'} are

    \Pr(s_k = 0 \mid s_k^{0'} = 1) = p, \quad \Pr(s_k = 1 \mid s_k^{0'} = 0) = q. \quad (25)

    Assume that the system input \{u_k\} is periodic with period n, i.e.,

    u_{k+n} = u_k. \quad (26)

    Let \pi_1 = \phi_1^T, \pi_2 = \phi_2^T, \ldots, \pi_n = \phi_n^T, and let the cyclic matrix formed by u_k be

    \Phi = [\pi_1^T, \pi_2^T, \ldots, \pi_n^T]^T. \quad (27)

    After the privacy-preserving process has been applied to both the inputs u_k and the outputs s_k^0, the observed value s_k^{0'} may still be attacked during data transmission, but both active and passive attacks on the encrypted data are limited and disturbed by the added noise. In the next section, we provide a parameter estimation algorithm.

    Under external attacks, and in the sense of the Cramér-Rao lower bound, the optimal estimation algorithm for the encrypted parameter \hat{\theta}_N is given below:

    \hat{\theta}_N = \Phi^{-1} [\xi_{N,1}, \ldots, \xi_{N,n}]^T, \quad (28)

    \xi_{N,i} = C - F^{-1}\!\left( \frac{1}{L_N} \sum_{l=1}^{L_N} s_{(l-1)n+i} \right), \quad (29)

    where \Phi^{-1} is the inverse of the matrix in Eq (27), C is the threshold in Eq (3), F^{-1} is the inverse cumulative distribution function, and s_{(l-1)n+i} is the doubly encrypted observation in each period. For a data length N, let L_N = \lfloor N/n \rfloor, the integer part of N divided by the input period n, which is the number of data segments [26].

    Definition 4.1 An algorithm is said to be strongly consistent if the estimate \hat{\theta}_N produced by the algorithm converges, as the sample size N \to \infty, to the true value of the parameter \theta (or the encrypted true value \bar{\theta}) with probability 1, i.e., \hat{\theta}_N \to \theta or \hat{\theta}_N \to \bar{\theta} w.p.1 as N \to \infty.

    This conclusion follows the analysis in [24], which establishes strong consistency of estimators under binary-valued observation and bounded noise conditions.

    Theorem 4.1 For the parameter estimation under the attack strategy (p, q) in Eq (28) with encrypted conditions, the expanded form of the limit \bar{\theta} is

    \hat{\theta}_N \to \bar{\theta} = \Phi^{-1} \Big[ C - F^{-1}\big( (2\gamma - 1)(1 - p - q) F_{Z_1}(C - \pi_1^T \theta) + (1 - p)(1 - \gamma) + q\gamma \big), \ldots, C - F^{-1}\big( (2\gamma - 1)(1 - p - q) F_{Z_n}(C - \pi_n^T \theta) + (1 - p)(1 - \gamma) + q\gamma \big) \Big]^T, \quad (30)

    and the algorithm is strongly consistent. Here, \theta represents the parameter vector to be estimated in the system, and C, \gamma, \Phi^{-1}, and \pi_k^T are given in Eqs (15), (17), and (27), respectively. F^{-1} is the inverse cumulative distribution function of the noise d_k, and F_{Z_k}(z) is the cumulative distribution function of the combined noise formed by the Laplace noise L_k^T \theta and the Gaussian noise d_k.

    Proof: Under data tampering attacks, within each period the s_k are independent and identically distributed. For k = (l-1)n + i, where i = 1, \ldots, n, the regression vector \phi_k is periodic with period n, i.e., \phi_{k+n} = \phi_k. Define \pi_i = \phi_i, so \phi_{(l-1)n+i} = \pi_i. Since \phi_{(l-1)n+i} = \pi_i and the noise terms L_k, d_k are i.i.d., the sequence \{s_{(l-1)n+i}\}_{l=1}^{L_N} is i.i.d. for each i.

    By the strong law of large numbers, for each i:

    \frac{1}{L_N} \sum_{l=1}^{L_N} s_{(l-1)n+i} \to E[s_{(l-1)n+i}] \quad \text{w.p.1 as } N \to \infty. \quad (31)

    Next, we compute E[s_{(l-1)n+i}]. Let k = (l-1)n + i. The encrypted input is \phi_k' = \phi_k + L_k = \pi_i + L_k, and the system output is:

    y_k' = (\pi_i + L_k)^T \theta + d_k = \pi_i^T \theta + L_k^T \theta + d_k. \quad (32)

    Define Z_k = L_k^T \theta + d_k. The binary signal is:

    s_k^0 = I_{\{y_k' \le C\}} = I_{\{\pi_i^T \theta + Z_k \le C\}} = I_{\{Z_k \le C - \pi_i^T \theta\}}. \quad (33)

    Thus:

    \lambda_k = \Pr(s_k^0 = 1) = \Pr(Z_k \le C - \pi_i^T \theta) = F_{Z_i}(C - \pi_i^T \theta). \quad (34)

    The output encryption gives s_k^{0'}:

    E[s_k^{0'}] = \gamma \lambda_k + (1 - \gamma)(1 - \lambda_k) = (2\gamma - 1)\lambda_k + (1 - \gamma). \quad (35)

    After the attack strategy (p, q):

    E[s_k] = (1 - p) E[s_k^{0'}] + q (1 - E[s_k^{0'}]) = (1 - p - q) E[s_k^{0'}] + q. \quad (36)

    Substituting E[s_k^{0'}]:

    E[s_k] = (1 - p - q) \big[ (2\gamma - 1) F_{Z_i}(C - \pi_i^T \theta) + (1 - \gamma) \big] + q. \quad (37)

    Simplifying:

    E[s_k] = (2\gamma - 1)(1 - p - q) F_{Z_i}(C - \pi_i^T \theta) + (1 - p)(1 - \gamma) + q\gamma. \quad (38)

    Define:

    \eta_i = (2\gamma - 1)(1 - p - q) F_{Z_i}(C - \pi_i^T \theta) + (1 - p)(1 - \gamma) + q\gamma. \quad (39)

    Thus:

    \frac{1}{L_N} \sum_{l=1}^{L_N} s_{(l-1)n+i} \to \eta_i \quad \text{w.p.1}. \quad (40)

    From the definition of \hat{\theta}_N:

    \xi_{N,i} = C - F^{-1}\!\left( \frac{1}{L_N} \sum_{l=1}^{L_N} s_{(l-1)n+i} \right) \to C - F^{-1}(\eta_i) \quad \text{w.p.1}. \quad (41)

    Therefore:

    \hat{\theta}_N = \Phi^{-1} [\xi_{N,1}, \ldots, \xi_{N,n}]^T \to \Phi^{-1} [C - F^{-1}(\eta_1), \ldots, C - F^{-1}(\eta_n)]^T = \bar{\theta}. \quad (42)

    Note that the total observation noise comprises the superposition of the original Gaussian noise dk and the added Laplace noise due to differential privacy encryption. Since both noise sources are independent and have finite variance, their sum forms a new independent noise process with bounded second moments. This preserves the excitation condition needed for strong consistency. Here, the strong consistency follows from the strong law of large numbers. From Eqs (27)–(29) and (34), the expanded form of ˉθ is obtained, thus completing the proof.
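    The key identity Eq (39) can also be checked by direct Monte Carlo simulation. The sketch below uses illustrative values of p, q, \gamma, C and a Gaussian stand-in for F_{Z_i}, so it verifies the algebra of Eqs (33)-(38) rather than the paper's exact noise model; it reuses numpy, scipy.stats.norm, and rng from the earlier sketches:

```python
p, q, gamma, C = 0.1, 0.2, 0.8, 7.0        # illustrative values, not from the paper
pi_theta, sigma_Z = 5.0, 10.0              # stand-ins for pi_i^T theta and the noise scale

F = norm.cdf(C - pi_theta, scale=sigma_Z)  # F_{Z_i}(C - pi_i^T theta), Gaussian assumed
eta = (2*gamma - 1)*(1 - p - q)*F + (1 - p)*(1 - gamma) + q*gamma   # Eq (39)

z = rng.normal(0.0, sigma_Z, 500_000)
s0 = (pi_theta + z <= C).astype(int)                         # Eq (33)
s0p = np.where(rng.random(len(s0)) < gamma, s0, 1 - s0)      # Eq (17)
r = rng.random(len(s0p))
s = np.where(s0p == 1, (r >= p).astype(int), (r < q).astype(int))   # Eq (25)
print(s.mean(), eta)                       # the two agree up to Monte Carlo error
```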

    Our goal is to study how to maximize encryption within the permissible range of estimation errors.

    To clarify the impact of different encryptions on the results, we analyze them separately based on the independence of input and output noise.

    For the input noise encryption, the noise variance \sigma_L^2 introduced by input encryption, which affects \hat{\theta} through the system model, is as follows:

    D_0 = \sigma_L^2 = 2 \left( \frac{1}{\varepsilon_0} \right)^2 \sum_{i=1}^n (\theta_i \Delta u_i)^2. \quad (43)

    For the output noise encryption, since s_k^{0'} is a binary random variable, its variance is

    \mathrm{Var}(s_k^{0'}) = \Pr(s_k^{0'} = 1)\big( 1 - \Pr(s_k^{0'} = 1) \big) = \big( 1 - \gamma + \lambda_k (2\gamma - 1) \big)\big( \gamma - \lambda_k (2\gamma - 1) \big). \quad (44)

    From Eq (20), we can derive the variance of the maximum likelihood estimate \hat{\lambda}_k as

    \mathrm{Var}(\hat{\lambda}_k) = \left( \frac{1}{(2\gamma - 1) n} \right)^2 \sum_{k=1}^n \mathrm{Var}(s_k^{0'}). \quad (45)

    Substituting the result from Eq (44), we get

    \mathrm{Var}(\hat{\lambda}_k) = \frac{\mathrm{Var}(s_k^{0'})}{(2\gamma - 1)^2 n} = \frac{\big( 1 - \gamma + \lambda_k (2\gamma - 1) \big)\big( \gamma - \lambda_k (2\gamma - 1) \big)}{(2\gamma - 1)^2 n}. \quad (46)

    The parameter estimate \hat{\theta} depends on \lambda_k, and D_1 is defined as

    D_1 = \mathrm{Var}(\hat{\lambda}_k) = \frac{\big( 1 - \gamma + \lambda_k (2\gamma - 1) \big)\big( \gamma - \lambda_k (2\gamma - 1) \big)}{(2\gamma - 1)^2 n}. \quad (47)

    To solve the optimal encryption problem, we model the optimization of the optimal encryption strategy:

    \max_{\varepsilon_0, \varepsilon_1} (D_0 + D_1) \quad \text{s.t.} \quad \| \bar{\theta} - \theta \| \le \epsilon, \quad (48)

    where \varepsilon_0 and \varepsilon_1 are the differential privacy budgets for the input and output, respectively, and D_0 and D_1 are the functions that measure the impact of the input and output encryption noise on the parameter estimate \bar{\theta}, with D_0 = 2 (1/\varepsilon_0)^2 \sum_{i=1}^n (\theta_i \Delta u_i)^2 and D_1 = \mathrm{Var}(s_k^{0'}) / \big( (2\gamma - 1)^2 n \big). \| \bar{\theta} - \theta \| is the parameter estimation error, which must not exceed the given threshold \epsilon, and both \bar{\theta} and \theta must be greater than zero.

    Theorem 5.1 When differential privacy preservation is applied to both inputs and outputs, \sigma_Z^2 is monotonically decreasing with respect to the input privacy parameter \varepsilon_0: the smaller \varepsilon_0 is, the stronger the privacy protection provided, but the worse the estimation accuracy.

    Proof: L_k^T \theta is a linear combination of Laplace noise, d_k is Gaussian noise, and Z_k is the sum of the two. The variance of each l_i is

    \mathrm{Var}(l_i) = 2 \left( \frac{\Delta u_i}{\varepsilon_0} \right)^2. \quad (49)

    Thus,

    \mathrm{Var}(L_k^T \theta) = \sum_{i=1}^n \theta_i^2 \mathrm{Var}(l_i) = 2 \left( \frac{1}{\varepsilon_0} \right)^2 \sum_{i=1}^n (\theta_i \Delta u_i)^2. \quad (50)

    The total variance of Z_k is

    \sigma_Z^2 = \mathrm{Var}(Z_k) = \mathrm{Var}(L_k^T \theta) + \mathrm{Var}(d_k) = \sigma_L^2 + \sigma_d^2, \quad (51)

    where

    \sigma_L^2 = 2 \left( \frac{1}{\varepsilon_0} \right)^2 \sum_{i=1}^n (\theta_i \Delta u_i)^2. \quad (52)

    To rigorously prove monotonicity, we analyze the derivative of \sigma_Z^2 with respect to \varepsilon_0:

    \frac{d \sigma_Z^2}{d \varepsilon_0} = \frac{d}{d \varepsilon_0} \left( 2 \left( \frac{1}{\varepsilon_0} \right)^2 \sum_{i=1}^n (\theta_i \Delta u_i)^2 + \sigma_d^2 \right). \quad (53)

    Since \sigma_d^2 is a constant that does not depend on \varepsilon_0, we get

    \frac{d \sigma_Z^2}{d \varepsilon_0} = -\frac{4}{\varepsilon_0^3} \sum_{i=1}^n (\theta_i \Delta u_i)^2. \quad (54)

    Since \varepsilon_0 > 0 and \sum_{i=1}^n (\theta_i \Delta u_i)^2 > 0, we have d\sigma_Z^2 / d\varepsilon_0 < 0. This implies that as \varepsilon_0 increases, \sigma_Z^2 decreases monotonically. According to system estimation theory, as the noise variance decreases, the estimation accuracy improves. Therefore, as \varepsilon_0 increases, \sigma_Z^2 decreases, making the estimate \hat{\theta} closer to the true value \theta. This concludes the proof.

    Theorem 5.2 When differential privacy preservation is applied to both inputs and outputs, \bar{\theta} is monotonically decreasing with respect to \varepsilon_1: the smaller \varepsilon_1 is, the stronger the privacy protection provided.

    Proof: From Eq (24), we have \gamma = \frac{e^{\varepsilon_1}}{e^{\varepsilon_1} + 1} and 2\gamma - 1 = \frac{e^{\varepsilon_1} - 1}{e^{\varepsilon_1} + 1}.

    Substituting into Eq (30), we can express \bar{\theta} as

    \bar{\theta} = \Phi^{-1} \left[ C - F^{-1}\!\left( \frac{(e^{\varepsilon_1} - 1) F_{Z_1}(C - \pi_1^T \theta) + (e^{\varepsilon_1} + 1)\big( 1 - F_{Z_1}(C - \pi_1^T \theta) \big) - e^{\varepsilon_1}}{e^{\varepsilon_1} + 1} \right), \ldots, C - F^{-1}\!\left( \frac{(e^{\varepsilon_1} - 1) F_{Z_n}(C - \pi_n^T \theta) + (e^{\varepsilon_1} + 1)\big( 1 - F_{Z_n}(C - \pi_n^T \theta) \big) - e^{\varepsilon_1}}{e^{\varepsilon_1} + 1} \right) \right]^T. \quad (55)

    By simplifying each term, each argument of F^{-1} has the form

    \frac{(e^{\varepsilon_1} - 1) F_{Z_i}(C - \pi_i^T \theta) + (e^{\varepsilon_1} + 1)\big( 1 - F_{Z_i}(C - \pi_i^T \theta) \big) - e^{\varepsilon_1}}{e^{\varepsilon_1} + 1}. \quad (56)

    Simplifying the numerator, it can be inferred that

    (e^{\varepsilon_1} - 1) F_{Z_i}(C - \pi_i^T \theta) + (e^{\varepsilon_1} + 1)\big( 1 - F_{Z_i}(C - \pi_i^T \theta) \big) - e^{\varepsilon_1} = e^{\varepsilon_1} F_{Z_i} - F_{Z_i} + e^{\varepsilon_1} + 1 - e^{\varepsilon_1} F_{Z_i} - F_{Z_i} - e^{\varepsilon_1} = -2 F_{Z_i}(C - \pi_i^T \theta) + 1. \quad (57)

    Substituting each term with \frac{-2 F_{Z_i}(C - \pi_i^T \theta) + 1}{e^{\varepsilon_1} + 1}, the expression for \bar{\theta} becomes

    \bar{\theta} = \Phi^{-1} \left[ C - F^{-1}\!\left( \frac{-2 F_{Z_1}(C - \pi_1^T \theta) + 1}{e^{\varepsilon_1} + 1} \right), \ldots, C - F^{-1}\!\left( \frac{-2 F_{Z_n}(C - \pi_n^T \theta) + 1}{e^{\varepsilon_1} + 1} \right) \right]^T. \quad (58)

    This concludes the proof.

    In parameter estimation, the estimation error is typically related to the variance of the observation noise. Differential privacy encryption for both input and output introduces noise that affects the bias and variance of parameter estimation.

    The expected value of the parameter estimate \hat{\theta} is \bar{\theta}, i.e.,

    E[\hat{\theta}] = \bar{\theta}.

    The deviation between \bar{\theta} and the true parameter \theta is caused by the encryption noise. Our goal is to solve for \| \bar{\theta} - \theta \| and express it as a function of \varepsilon_0 and \varepsilon_1.

    The mean square error (MSE) of the parameter estimate can be expressed as the squared bias plus the variance:

    \mathrm{MSE}(\hat{\theta}) = \| E[\hat{\theta}] - \theta \|^2 + \mathrm{Var}(\hat{\theta}). \quad (59)

    Since we are concerned with the bias \| \bar{\theta} - \theta \| of the estimate, and the variance of the estimate also depends on D_0 and D_1, we need to express both the bias and the variance as functions of \varepsilon_0 and \varepsilon_1.

    To accurately express \| \bar{\theta} - \theta \|, we start from the root mean square error (RMSE):

    \mathrm{RMSE}(\hat{\theta}) = \sqrt{ \mathrm{trace}\big( \mathrm{Var}(\hat{\theta}) \big) }. \quad (60)

    The total variance of the system's parameter estimate is

    \mathrm{Var}(\hat{\theta}) = (\sigma_d^2 + D_0 + D_1)(\Phi^T \Phi)^{-1}, \quad (61)

    where \sigma_d^2 is the system's inherent noise variance, D_0 and D_1 are the variances introduced by the input and output encryption noise, and \Phi is the matrix from Eq (27).

    Thus, the root mean square error of the estimate can be expressed as

    \mathrm{RMSE}(\hat{\theta}) = \sqrt{ \mathrm{trace}\big( (\sigma_d^2 + D_0 + D_1)(\Phi^T \Phi)^{-1} \big) }. \quad (62)

    Since \Phi^T \Phi is a known design matrix, we can bound the estimation error as

    \| \bar{\theta} - \theta \| \le k \sqrt{ \sigma_d^2 + D_0 + D_1 }, \quad (63)

    where k = \sqrt{ \mathrm{trace}\big( (\Phi^T \Phi)^{-1} \big) }.

    Based on the above derivation, the optimization problem can be reformulated as:

    \max_{\varepsilon_0, \varepsilon_1} D_0 + D_1 \quad \text{s.t.} \quad \| \bar{\theta} - \theta \| \le \epsilon \iff D_0 + D_1 \le \epsilon_{\mathrm{eff}}^2,

    where \epsilon_{\mathrm{eff}}^2 = \epsilon^2 / k^2 - \sigma_d^2.

    Now, we construct the Lagrangian objective function

    L(\varepsilon_0, \varepsilon_1, \lambda) = D_0 + D_1 - \lambda (D_0 + D_1 - \epsilon_{\mathrm{eff}}^2). \quad (64)

    For \varepsilon_0, we have

    D_0 = 2 \left( \frac{1}{\varepsilon_0} \right)^2 \sum_{i=1}^n (\theta_i \Delta u_i)^2. \quad (65)

    Taking the partial derivative with respect to \varepsilon_0, we have

    \frac{\partial D_0}{\partial \varepsilon_0} = -4 \left( \frac{1}{\varepsilon_0^3} \right) \sum_{i=1}^n (\theta_i \Delta u_i)^2. \quad (66)

    For \varepsilon_1, we have

    D_1 = \frac{\mathrm{Var}(s_k^{0'})}{(2\gamma - 1)^2 n}, \quad (67)

    where \gamma = \frac{e^{\varepsilon_1}}{e^{\varepsilon_1} + 1}, and we can compute

    \frac{\partial \gamma}{\partial \varepsilon_1} = \frac{e^{\varepsilon_1}}{(e^{\varepsilon_1} + 1)^2}. \quad (68)

    Now, we further differentiate D_1. First, recall from Eq (67) that

    D_1 = \frac{\mathrm{Var}(s_k^{0'})}{(2\gamma - 1)^2 n}. \quad (69)

    Differentiating (2\gamma - 1)^2 with respect to \varepsilon_1, it can be obtained that

    \frac{\partial}{\partial \varepsilon_1} \big[ (2\gamma - 1)^2 \big] = 4 (2\gamma - 1) \frac{e^{\varepsilon_1}}{(e^{\varepsilon_1} + 1)^2}. \quad (70)

    Now, applying the chain rule, we have

    \frac{\partial D_1}{\partial \varepsilon_1} = -\frac{8 \mathrm{Var}(s_k^{0'}) (2\gamma - 1) e^{\varepsilon_1}}{n (e^{\varepsilon_1} + 1)^4}. \quad (71)

    By the above analysis, the optimal values of \varepsilon_0 and \varepsilon_1 need to satisfy the following constraint:

    D_0 + D_1 = \epsilon_{\mathrm{eff}}^2. \quad (72)

    Taking partial derivatives with respect to \varepsilon_0 and \varepsilon_1, we get

    \frac{\partial L}{\partial \varepsilon_0} = \frac{\partial D_0}{\partial \varepsilon_0} - \lambda \frac{\partial}{\partial \varepsilon_0}(D_0 + D_1) = 0, \quad (73)

    \frac{\partial L}{\partial \varepsilon_1} = \frac{\partial D_1}{\partial \varepsilon_1} - \lambda \frac{\partial}{\partial \varepsilon_1}(D_0 + D_1) = 0. \quad (74)

    Substituting the previously computed derivatives, we have

    -4 \left( \frac{1}{\varepsilon_0^3} \right) \sum_{i=1}^n (\theta_i \Delta u_i)^2 - \lambda \cdot 0 = 0, \quad (75)

    -\frac{8 \mathrm{Var}(s_k^{0'})}{n} \cdot \frac{(2\gamma - 1) e^{\varepsilon_1}}{(e^{\varepsilon_1} + 1)^4} - \lambda \cdot 0 = 0. \quad (76)

    For \varepsilon_0, since \frac{\partial}{\partial \varepsilon_0}(D_0 + D_1) = 0 in the constraint term, the derivative is completely determined by the objective function. To maximize the objective function under the constraint, we first check whether the condition holds; then, we can solve for \varepsilon_0 to satisfy the given constraint.

    For ε1, we also solve for λ to obtain the optimal solution.

    The optimal solutions for \varepsilon_0 and \varepsilon_1 are

    \varepsilon_0 = \frac{2}{1 + \epsilon_{\mathrm{eff}}^2}, \quad (77)

    \varepsilon_1 = \frac{2}{3 (1 + \epsilon_{\mathrm{eff}}^2)}. \quad (78)

    Remark 5.1 These solutions are confirmed based on the value of λ, and the specific numerical solution can be obtained through numerical computation.

    Consider the following system:

    \begin{cases} y_k = a_1 u_k + a_2 u_{k-1} + d_k, \\ s_k = I_{\{y_k \le C\}}, \end{cases} \quad k = 1, \ldots, N, \quad (79)

    where a_1 = 10 and a_2 = 5 are the unknown parameters of the system, d_k \sim N(0, 100), and the threshold is C = 7. The sample length for this simulation is set to N = 5000. The system input signal u_k is a cyclic signal taking the values [3, 5] in one cycle. The error threshold \epsilon is set to 12. The sensitivity \Delta u is computed from the maximum possible deviation between two neighboring input sequences, giving o = |5 - 3| = 2.

    We encrypt the input signal u_k with Laplace noise under three different privacy budget parameters and run simulations to verify the effects of different encryption strengths. According to the Laplace noise mechanism, the added noise follows the distribution:

    u_k' = u_k + \mathrm{Lap}\!\left( \frac{\Delta u}{\varepsilon} \right), \quad (80)

    where u_k' is the encrypted input signal, \Delta u represents the sensitivity of the input signal, and \varepsilon is the privacy budget parameter.

    Figure 2 shows the u_k' signal under three different privacy budget parameters. The red line represents the strong encryption effect with a privacy budget of 0.1, the green line represents the moderate encryption effect with a privacy budget of 0.5, and the blue line represents the weak encryption effect with a privacy budget of 1.0. From the figure, it can be seen that as the privacy budget increases, the encryption strength decreases, and the amplitude of the noise is significantly reduced. This result verifies the relationship between the strength of the Laplace noise and the privacy budget parameter.

    Figure 2.  The u_k' signal under different privacy budgets.

    From Figure 2, it can be observed that when the privacy budget parameter is small, i.e., under strong encryption, the noise amplitude is largest and the impact on the original signal u_k is most significant; under weak encryption, the noise amplitude is smallest and the impact on the original signal is minimal. In short, the stronger the encryption, the larger the noise amplitude, the more significant the interference with the original signal u_k, and the better the privacy protection; the weaker the encryption, the smaller the noise amplitude, the less the interference with the original signal, and the weaker the privacy protection.

    The system output's binarized observation signal s_k is encrypted with different privacy budget parameters. The encryption process applies randomized response with varying strengths to the binarized s_k to protect privacy: after encryption, each s_k retains its original value with probability \gamma and is flipped with probability 1 - \gamma. This process is expressed as

    s_k' = \begin{cases} s_k, & \text{with probability } \gamma, \\ 1 - s_k, & \text{with probability } 1 - \gamma. \end{cases} \quad (81)

    Figure 3 shows the binarized encrypted signal s_k' generated under three different privacy budget parameters (\varepsilon = 0.1, \varepsilon = 0.5, \varepsilon = 1.0), with the red, green, and blue lines corresponding to strong, moderate, and weak encryption effects, respectively.

    Figure 3.  Simulation of s_k' after noise is added following binarization.

    From Figure 3, it is evident that the encryption strength significantly affects the degree of randomization of s_k'. As the privacy budget parameter \varepsilon decreases (i.e., stronger encryption), the probability of flipping s_k increases, leading to higher randomization of s_k'. Conversely, when the privacy budget increases (i.e., weaker encryption), s_k' closely approximates the original binarized signal s_k. This result verifies the relationship between encryption strength and signal randomization: the stronger the encryption, the higher the randomization of s_k'; the weaker the encryption, the more s_k' retains the pattern of the original signal.

    Equation (30) provides the full expression for the parameter estimate \hat{\theta} under encryption, and we examine the accuracy of the parameter estimation in simulation.

    The selected privacy budget values are \varepsilon_0 = 1.0 and \varepsilon_1 = 1.0. The simulation results are shown in Figure 4: as the sample size increases, the parameter estimate approaches the encrypted value \bar{\theta}.

    Figure 4.  Simulation results: Parameter estimation error and deviation from the true value under different privacy budgets.

    We evaluate the impact of the privacy budget \varepsilon_0 encryption on the input u_k through simulations. Figure 5 shows the encryption effects under different privacy budgets. The error impact is represented by \| \hat{\theta} - \theta \|. The red line represents strong encryption with a privacy budget of 0.1, the green line represents moderate encryption with a privacy budget of 0.5, and the blue line represents weak encryption with a privacy budget of 1.0. From the error results, the simulation confirms the monotonicity result of Theorem 5.1. The smaller the privacy budget, the stronger the encryption, the larger the noise amplitude, and the greater the error between the estimated value and the true value. Conversely, when the privacy budget increases, the noise amplitude decreases, and the estimation error becomes smaller.

    Figure 5.  Encryption effects of the input signal uk under different privacy budgets.

    Next, we simulate the effect of adding differential privacy noise to the system output sk and observe the impact of different privacy budget parameters ε1 on the parameter estimation error. The simulation results are shown in Figure 6, which display the changes in parameter estimation error as the sample size varies for ε1=0.1, ε1=0.5, and ε1=1.0. The result shown in Figure 6 is consistent with Theorem 5.2.

    Figure 6.  Convergence curve of parameter estimation error under different privacy budgets ε1.

    In this section, we further investigate how the magnitude of the input signals affects the identification accuracy of FIR systems under dual differential privacy constraints. Specifically, both the input vector \boldsymbol{u}_k = [u_k, u_{k-1}]^\top and the binary-valued output s_k' are simultaneously perturbed by independent privacy mechanisms satisfying \epsilon-differential privacy, with budgets denoted \epsilon_0 and \epsilon_1, respectively.

    To evaluate the estimation behavior under different input scales, we simulate two configurations: one using low-magnitude inputs (u_k, u_{k-1}) = (3, 5) , and another using higher-magnitude inputs (30, 50) , while keeping all other conditions identical (privacy budgets \epsilon_0 = \epsilon_1 = 0.5 , threshold C = 70 , and Gaussian noise variance \sigma^2 = 100 ).

    The simulation results (Figure 7) reveal a significant contrast. While the lower magnitude input case results in relatively fluctuating estimation error due to stronger sensitivity to both noise and binarization, the higher input magnitude quickly leads to a nearly constant estimation error curve. This phenomenon is attributed to the high values of y_k = a_1 u_k + a_2 u_{k-1} + d_k being mostly above the threshold C , rendering the binary outputs nearly constant and less informative for learning.

    Figure 7.  Estimation error comparison under dual differential privacy: (3, 5) vs. (30, 50) input.

    This experiment highlights the trade-off between input energy and privacy-resilient identifiability in FIR systems. It emphasizes the importance of designing input signals that balance observability with privacy-preserving distortion in binary measurement contexts.

    We apply two noise distributions to the binarized output s_k' :

    1) Gaussian noise: Zero mean, variance V = 1.0 , standard deviation \sigma = \sqrt{V} = 1.0 .

    2) Laplacian noise: Zero mean, variance V = 1.0 , scale parameter b = \sqrt{V / 2} \approx 0.707 .

    Both distributions maintain equal variance for a fair comparison. We simulate N = 5000 samples, adding Gaussian and Laplacian noise to s_k' and estimating parameters iteratively. The exponential moving average (EMA) of the estimation error is computed for both noise types.
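    A sketch of this comparison (the EMA smoothing constant alpha is our choice; fixing the common variance V = 1.0 determines the Gaussian sigma and Laplace scale b as stated above):

```python
V, N = 1.0, 5000
gauss = rng.normal(0.0, np.sqrt(V), size=N)          # sigma = sqrt(V) = 1.0
lapl = rng.laplace(0.0, np.sqrt(V / 2.0), size=N)    # b = sqrt(V/2) ~= 0.707

def ema(x, alpha=0.01):
    """Exponential moving average used to smooth the error curves."""
    out, acc = np.empty_like(x), x[0]
    for i, v in enumerate(x):
        acc = alpha * v + (1 - alpha) * acc
        out[i] = acc
    return out

print(gauss.var(), lapl.var())   # both close to V = 1.0, confirming a fair comparison
```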

    Figure 8 presents the EMA estimation error for Gaussian and Laplacian noise over 5000 samples.

    Figure 8.  EMA estimation error for Gaussian (blue) and Laplacian (red) noise with equal variance ( V = 1.0 , \epsilon = 0.5 ).

    The results, shown in Figure 8, demonstrate that Laplacian noise yields higher estimation errors due to its heavier-tailed distribution, indicating stronger privacy protection at the cost of accuracy, thus validating the privacy-accuracy trade-off of our approach.

    To investigate the impact of different privacy budget parameters \varepsilon_0 and \varepsilon_1 on the system parameter estimation, we adjust \varepsilon_0 and \varepsilon_1 to change the encryption strengths of the input and output, which affects both the estimation error and the degree of privacy protection in the system, thus helping to find the optimal solution.

    Using the chain rule, we compute the partial derivatives of \varepsilon_0 and \varepsilon_1 , i.e., \frac{\partial D_0}{\partial \varepsilon_0} = -\frac{2000}{\varepsilon_0^3} and \frac{\partial D_1}{\partial \varepsilon_1} = -\frac{8 \operatorname{Var}(s_k') (2\gamma - 1)e^{\varepsilon_1}}{n(e^{\varepsilon_1} + 1)^4} . Through numerical methods, we solve for the optimal solution, and under the estimation error constraint \epsilon = 12 , we calculate \epsilon_{\text{eff}}^2 = \frac{\epsilon^2}{k^2} - \sigma_d^2 , yielding \epsilon_{\text{eff}}^2 \approx 4777 . By setting the range for input encryption strength \varepsilon_0 , combined with the optimization target and constraints, the optimal solution is found to be \varepsilon_0 \approx 4.3 and \varepsilon_1 \approx 0.07 , with corresponding values of D_0 \approx 4598.9 and D_1 \approx 51.06 .
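    The search itself can be reproduced with a simple constrained grid sweep. The sketch below rests on stated assumptions: we take \sum_i (\theta_i \Delta u_i)^2 = 500 to match the -2000/\varepsilon_0^3 derivative quoted above and a worst-case \mathrm{Var}(s_k^{0'}) = 0.25; the grid ranges are ours, so the paper's exact optimum need not be reproduced:

```python
eps_eff2 = 4777.0                    # effective error budget from the text
S, n_per, var_s = 500.0, 2, 0.25     # assumed constants (see lead-in)

def D0(e0):                          # input-noise term, Eq (65)
    return 2.0 * S / e0**2

def D1(e1):                          # output-noise term, Eq (67)
    g = np.exp(e1) / (np.exp(e1) + 1.0)
    return var_s / ((2.0 * g - 1.0)**2 * n_per)

best = max(((D0(e0) + D1(e1), e0, e1)
            for e0 in np.linspace(0.5, 10.0, 200)
            for e1 in np.linspace(0.01, 1.0, 200)
            if D0(e0) + D1(e1) <= eps_eff2), default=None)
print(best)                          # (D0 + D1, eps0, eps1) at the grid optimum
```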

    We selected three different privacy budget parameter sets for comparison, as follows:

    ● Optimal solution: \varepsilon_0 = 4.3 , \varepsilon_1 = 0.07 .

    ● Scheme 1: \varepsilon_0 = 4.28 , \varepsilon_1 = 0.2 .

    ● Scheme 2: \varepsilon_0 = 5.0 , \varepsilon_1 = 0.06 .

    For each parameter set, we calculated the corresponding D_0 , D_1 , and D_0 + D_1 , and compared the parameter estimation errors.

    The calculation results for the three parameter sets are summarized in the table below:

    From Table 1, we can see that the optimal solution set yields the best overall encryption strength D_0 + D_1 . This also corroborates that the optimal solution reaches the extremum, and to get even closer to the extremum, higher precision would be required. Moreover, this optimal solution does not focus on the trade-off between input and output, and if there are more specific requirements, additional constraints should be included in the solution process. If higher privacy protection strength is needed, the \epsilon limit can be relaxed to balance the encryption strengths for both input and output.

    Table 1.  Values of D_0, D_1, and D_0 + D_1 under different privacy budgets.

    Privacy budget parameters                                        D_0       D_1      D_0 + D_1
    Optimal solution (\varepsilon_0 = 4.3, \varepsilon_1 = 0.07)     4598.9    51.06    4649.96
    Scheme 1 (\varepsilon_0 = 4.28, \varepsilon_1 = 0.2)             4643.3    6.22     4649.52
    Scheme 2 (\varepsilon_0 = 5.0, \varepsilon_1 = 0.06)             3400      69.47    3469.47

    This paper investigates the application of differential privacy encryption to reduce the risk of data tampering in FIR system identification under binary observation conditions. Two different differential privacy algorithms are proposed to ensure data security and privacy. The experimental evaluation confirms that the proposed method not only effectively protects sensitive information, but also maintains the accuracy of parameter estimation. These findings validate the effectiveness of the proposed scheme in this paper. Future work may explore optimizing the trade-off between privacy protection and estimation accuracy, as well as extending the approach to more complex system models and real-world applications.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors declare no conflicts of interest in this paper.


