Research article

Distributed data-driven iterative learning control for multi-agent systems with unknown input-output coupled parameters

  • Received: 17 October 2024 Revised: 11 January 2025 Accepted: 22 January 2025 Published: 14 February 2025
  • This article studies a distributed data-driven iterative learning control (ILC) strategy based on the identified input–output coupled parameters (IOCPs) to address the consensus trajectory tracking problem of discrete time-varying multi-agent systems (MASs). First, by leveraging the repeatability of the control system, a special learning scheme is designed by using system input and output data to identify the unknown IOCPs. Then the reciprocal of the identified IOCPs is selected as the learning gain to construct the ILC law of the MASs. Second, the case of measurement noise in the MASs is considered, where the maximum allowable control deviation is incorporated into the learning mechanism for identification of the IOCPs, thereby minimizing adverse effects of the noise on the learning scheme's performance and bolstering robustness. Finally, three numerical simulations are employed to validate the effectiveness of the designed IOCP identification method and iterative learning control strategy.

    Citation: Duhui Chang, Yan Geng. Distributed data-driven iterative learning control for multi-agent systems with unknown input-output coupled parameters[J]. Electronic Research Archive, 2025, 33(2): 867-889. doi: 10.3934/era.2025039




    With the development and advancement of control technology, iterative learning control (ILC) was first proposed as an intelligent learning mechanism in [1] to enhance the efficiency of repetitive systems. The core idea of ILC is to use the input–output data collected from previous runs of a repeatedly operating dynamic system to produce the control input for the subsequent run, ultimately achieving precise tracking of the desired trajectory. Owing to its simple and effective structure, ILC has been successfully applied to autonomous vehicles [2], robot manipulators [3], mechatronic systems [4], computer numerical control machine tools [5], multi-agent systems (MASs) [6], and other fields.

    On the other hand, as the level of automation in production processes has increased, the cooperative control of MASs has garnered significant attention and interest. Such systems offer autonomy, distribution, coordination, and learning ability, and they have been widely used in multi-robot systems [7], power-supply systems [8], traffic control [9], distributed coordination [10], smart grids [11], etc. These papers primarily concentrate on how to devise an appropriate control scheme to attain consensus control within the system.

    Since the ILC method was first introduced into multi-agent formation in [12], the advantages of using distributed ILC to address the consensus tracking problem of MASs have been widely analyzed. MASs improve work efficiency through communication and coordination between agents, constantly adjusting and updating their behavior so that all agents converge to, or consistently track, the desired trajectory within a specified timeframe. The application of ILC to MASs has promoted research into the consensus tracking problem, and many researchers have since studied the cooperative control of repetitive systems. Meng and Jia [13] researched the application of P-type ILC to MASs, and Dai et al. [14] used P-type ILC to solve the time-delay problem of MASs. In short, the ILC method has shown good performance in MAS control, and several ILC-based multi-agent control approaches have proven effective in practical applications [15].

    In particular, rising production requirements have escalated the complexity of practical systems. Numerous problems, such as unknown models, unknown or uncertain system parameters, and measurement noise, have hindered the development and application of MASs. For nonlinear systems, Hou and Jin [16] proposed a model-free adaptive iterative learning control method to complete the tracking task. This method was introduced into MASs by Bu et al. [17] to solve the trajectory tracking problem, where a system with iteration-varying topologies was considered. For a linear system with unknown parameters, Liu and Ruan [18] constructed an adaptive parameter estimation algorithm using the input and output data of the system in Markov-matrix form. In a similar way, Geng et al. [19] transformed a class of multi-phase batch processes into a switched linear system and then controlled it with an ILC strategy. Furthermore, Lin et al. [20] proposed a point-to-point ILC strategy that addresses the optimal consensus problem at specified data points for heterogeneous networked agents with iteration-switching topologies. Hence, how to design an appropriate ILC scheme for MASs with unknown parameters remains a challenging problem.

    Within the conventional framework of ILC, the identified input–output coupled parameters (IOCPs) are defined as the product of the output and input matrices of a single-input–single-output (SISO) system, serving as the only system information needed for ILC construction. Zhang et al. [21] designed an iterative learning mechanism that exploits the inherent repeatability of the control system to estimate the unknown IOCPs accurately. This mechanism involves running the repetitive system twice under controlled conditions; by comparing the system's outputs during these two runs, the IOCP information can be inferred and then used to design an effective ILC algorithm. This approach allows ILC to be implemented in systems where the IOCP is not known beforehand, thereby enhancing the applicability and flexibility of ILC techniques. On the basis of this research, Liu et al. [22] formulated an iterative learning approach for determining an unknown IOCP by utilizing the repeatability of the control system and its input–output data. Additionally, they introduced a refinement of this scheme to enhance its robustness against various disturbances, such as measurement noise, system noise, and initial state variations; this refinement incorporates a maximum allowable control deviation to keep the learning process stable and robust in practical applications. Most of the existing work prioritizes parameter identification for uncertain systems. Inspired by this, we consider an appropriate iterative learning mechanism to identify the IOCPs and then establish the ILC algorithm by designing an appropriate learning gain, ensuring that the system accurately tracks the desired trajectory.

    For the MASs, distributed ILC achieves control in settings where only a subset of agents has direct access to the desired trajectory and only neighboring agents can communicate. Hock and Schoellig [23] focused on enabling a team of quadrotors to track a desired trajectory within a given formation; they presented a distributed ILC approach and proved the stability of any causal learning function whose gains meet a scalar condition. Pakshin et al. [24] designed an ILC law for MASs with random perturbations, based on minimizing the deviations from a reference model and on the theory of stochastic stability of repetitive processes. Inspired by this, we devise a distributed data-driven ILC strategy to address the consensus tracking problem of MASs. This approach utilizes data analytics and advanced control algorithms to ensure that the MASs achieve optimal performance in complex and dynamic conditions.

    The primary focus of this article is as follows. At first, a distributed data-driven iterative learning scheme is designed, based on input and output signals to identify the IOCPs for time-varying discrete MASs with unknown parameters. To tackle the consensus trajectory tracking problem, we design an ILC by selecting the reciprocal of the IOCPs as the learning gain. Second, we analyze MASs with random measurement noise by introducing a maximum allowable control deviation. The approach mitigates the adverse effects of noise on the iterative learning strategy's performance and enhances the robustness of the mechanism.

    The primary achievements of this paper can be summarized as follows:

    1) The discrete linear MASs are transformed into the lower triangular parameter matrix form; this simplifies the system's structure for further analysis. An N-step iterative learning mechanism is devised, leveraging repetitive system traits and input–output data to identify the unknown IOCPs. This data-driven approach suits complex systems with uncertainties and improves the accuracy of identifying the parameters.

    2) Considering the measurement noise prevalent in industrial applications, a strategy is introduced by setting the maximum allowable control deviation. This minimizes noise's adverse impact on the learning scheme's performance to bolster the system's robustness under noisy conditions.

    3) The reciprocal of the identified IOCPs is selected as the learning gain. The proposed ILC is constructed, based on the input–output data of the system, and also features a simple structure and strong applicability, thus enhancing the system's practicability and robustness. Rigorous mathematical proofs demonstrate that the designed distributed data-driven ILC strategy is capable of achieving consistent trajectory tracking.

    The organization of the remainder of this paper is as follows. Section 2 introduces the MASs and graph theory. Section 3 describes the IOCP identification mechanism and ILC for MASs. Section 4 discusses the case of measurement noise in the system. The simulation and conclusion of this paper are given in Sections 5 and 6, respectively.

    The communication topology of MASs can be represented and described as a graph $\zeta=(v,\epsilon,A)$ with the node set $v=\{v_1,v_2,\dots,v_M\}$ and the edge set $\epsilon\subseteq v\times v$. The matrix $A$ is the adjacency matrix of the graph $\zeta$; its main diagonal entries are all 0. When agent $i$ is connected to agent $j$, $a_{ij}\neq 0$; otherwise $a_{ij}=0$. Note that $D$ is the degree matrix of graph $\zeta$, $D=\mathrm{diag}(d_1,d_2,\dots,d_M)$, where $d_i=\sum_{j\in M_i}a_{ji}$ and $M_i$ denotes the set of agents associated with agent $i$. The Laplacian matrix of graph $\zeta$ is $L=D-A$.

    The virtual leader is denoted as Agent 0. The graph $\bar{\zeta}$ contains the $M$ agents and the virtual leader. In the graph $\bar{\zeta}$, if agent $i$ and the leader 0 are connected, $s_i\neq 0$; otherwise, $s_i=0$. Here, $S=\mathrm{diag}(s_1,s_2,\dots,s_M)$ denotes the leader-access degree matrix of graph $\bar{\zeta}$. We write $P=L+S$.

    Assumption 1: In this paper, the graph $\zeta$ is considered to be connected, and $S$ has at least one nonzero diagonal element.

    Remark 1. In a connected graph ζ, any two agents are linked by a path. The matrix S must possess at least one nonzero diagonal element, implying that there must be at least one agent capable of receiving the target trajectory signal from the leader.
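The construction of $D$, $L$, and $P$ can be sketched in a few lines (the adjacency weights and leader gains below are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical weighted adjacency matrix A of a 4-agent graph
# (a_ij != 0 iff agent i is connected to agent j).
A = np.array([[0.0, 0.4, 0.0, 0.0],
              [0.2, 0.0, 0.5, 0.0],
              [0.0, 0.2, 0.0, 0.5],
              [0.0, 0.0, 0.2, 0.0]])

D = np.diag(A.sum(axis=1))          # degree matrix D = diag(d_1, ..., d_M)
L = D - A                           # Laplacian L = D - A
S = np.diag([0.2, 0.0, 0.0, 0.2])   # leader access: s_i != 0 iff agent i
                                    # receives the leader's signal directly
P = L + S                           # the matrix P = L + S used throughout

print(P)
```

Every row of $L$ sums to zero by construction; Assumption 1 then only requires $S$ to carry at least one nonzero diagonal entry.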

    The linear discrete time-varying SISO MAS with M agents is described as

    $\begin{cases}x_{j,k}(t+1)=A_j(t)x_{j,k}(t)+b_j(t)u_{j,k}(t), & t\in T,\\ y_{j,k}(t)=c_j(t)x_{j,k}(t), & t\in T_+,\end{cases}$ (2.1)

    where $T=\{0,1,\dots,N-1\}$, $T_+=\{1,2,\dots,N\}$, $N$ denotes the total sampling number, $j=1,2,\dots,M$ is the label number of the agent, $t$ and $k$ represent time and the iteration number, $x_{j,k}(t)\in\mathbb{R}^n$ is the system state, $u_{j,k}(t)\in\mathbb{R}$ is the control input, and $y_{j,k}(t)\in\mathbb{R}$ denotes the system output of agent $j$. $A_j(t)\in\mathbb{R}^{n\times n}$, $b_j(t)\in\mathbb{R}^{n\times 1}$, and $c_j(t)\in\mathbb{R}^{1\times n}$ are the unknown system matrix, input column vector, and output row vector, respectively. Given a desired trajectory $y_d(t),\ t\in T_+$, the distributed data-driven ILC scheme is constructed under the following two basic assumptions.

    Assumption 2: For the MAS (2.1), a given desired trajectory yd(t),tT+ is realizable. For any specified desired state xj,d(t),tT+, there is always a particular control input uj,d(t),tT that can be found to achieve that state:

    $\begin{cases}x_{j,d}(t+1)=A_j(t)x_{j,d}(t)+b_j(t)u_{j,d}(t), & t\in T,\\ y_d(t)=c_j(t)x_{j,d}(t), & t\in T_+,\end{cases}$ (2.2)

    which implies that the IOCPs satisfy $c_j(t+1)b_j(t)\neq 0,\ t\in T$.

    Assumption 3: The initial condition xj,k(0) fulfills the condition xj,k(0)=xj,0 for all k, where xj,0 is an arbitrarily given point. In other words, the initial state is precisely reset at the start of each iteration. Without loss of generality, we set xj,k(0)=0.

    Remark 2. Assumption 3 is a fundamental condition for ILC, used to guarantee the consistency of tracking in the ILC algorithm. Admittedly, in industrial engineering applications, a shift in the system's initial state is a common problem; nevertheless, a repetitive initial state is a fundamental postulation in theoretical research on ILC. In this paper, we only consider the initial state without a shift.

    For the MASs, since the parameters of the system are unknown, by leveraging the system's repeatability and designing an appropriate iterative learning strategy, we can obtain the exact values of the unknown IOCPs on the basis of the input and output data. On the basis of the assumptions above, we can design an iterative learning mechanism for the IOCPs of unknown systems and construct the ILC law for the MASs to achieve the consensus tracking task.

    In this section, we design an appropriate iterative learning mechanism for each agent to identify the IOCPs.

    Theorem 1: For the MAS in (2.1), if each agent abides by Assumptions 2 and 3, then the IOCPs $\{c_j(1)b_j(0),c_j(2)b_j(1),\dots,c_j(N)b_j(N-1)\}$ can be obtained by constructing an appropriate iterative learning mechanism.

    Proof: First, define the super-vectors as follows:

    $u_{j,k}=[u_{j,k}(0),u_{j,k}(1),\dots,u_{j,k}(N-1)]^T,$
    $y_{j,k}=[y_{j,k}(1),y_{j,k}(2),\dots,y_{j,k}(N)]^T.$

    Thus, the system in (2.1) can be characterized as

    yj,k=Gjuj,k, (3.1)

    where $G_j=\begin{pmatrix}c_j(1)b_j(0) & 0 & \cdots & 0\\ c_j(2)A_j(1)b_j(0) & c_j(2)b_j(1) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ c_j(N)\prod_{p=1}^{N-1}A_j(p)\,b_j(0) & c_j(N)\prod_{p=2}^{N-1}A_j(p)\,b_j(1) & \cdots & c_j(N)b_j(N-1)\end{pmatrix}.$

    Next, we introduce each step of the iterative learning mechanism.

    Step 0: The control input signals of each agent are arbitrarily given as

    $u_{j,0}=\{u_{j,0}(0),u_{j,0}(1),\dots,u_{j,0}(N-1)\},$ (3.2)

    and the outputs of the MAS (2.1), $y_{j,0}=\{y_{j,0}(1),y_{j,0}(2),\dots,y_{j,0}(N)\}$, are generated by

    $y_{j,0}=G_ju_{j,0}.$ (3.3)

    Step 1: Select the control input signals $u_{j,1}=\{u_{j,1}(0),u_{j,1}(1),\dots,u_{j,1}(N-1)\}$ that satisfy

    $u_{j,1}=\{u_{j,0}(0)+1,u_{j,0}(1),u_{j,0}(2),\dots,u_{j,0}(N-1)\},$

    and the outputs of the MAS (2.1), $y_{j,1}=\{y_{j,1}(1),y_{j,1}(2),\dots,y_{j,1}(N)\}$, are generated by

    $y_{j,1}=G_ju_{j,1}.$ (3.4)

    Subtracting (3.3) from (3.4) yields

    $y_{j,1}-y_{j,0}=G_j(u_{j,1}-u_{j,0})\ \Rightarrow\ y_{j,1}(1)-y_{j,0}(1)=c_j(1)b_j(0).$ (3.5)

    Step 2: Select the control input signals $u_{j,2}=\{u_{j,2}(0),u_{j,2}(1),\dots,u_{j,2}(N-1)\}$ that satisfy

    $u_{j,2}=\{u_{j,0}(0),u_{j,0}(1)+1,u_{j,0}(2),\dots,u_{j,0}(N-1)\},$ (3.6)

    and the outputs of the MAS (2.1), $y_{j,2}=\{y_{j,2}(1),y_{j,2}(2),\dots,y_{j,2}(N)\}$, are generated by

    $y_{j,2}=G_ju_{j,2}.$ (3.7)

    Subtracting (3.3) from (3.7) gives

    $y_{j,2}-y_{j,0}=G_j(u_{j,2}-u_{j,0})\ \Rightarrow\ y_{j,2}(2)-y_{j,0}(2)=c_j(2)b_j(1).$ (3.8)

    Proceeding in the same way through the remaining sample times, at the last step only the final input sample is perturbed.

    Step N: Select the control input signals $u_{j,N}=\{u_{j,N}(0),u_{j,N}(1),\dots,u_{j,N}(N-1)\}$ that satisfy

    $u_{j,N}=\{u_{j,0}(0),u_{j,0}(1),\dots,u_{j,0}(N-1)+1\},$ (3.9)

    and the outputs of the MAS (2.1), $y_{j,N}=\{y_{j,N}(1),y_{j,N}(2),\dots,y_{j,N}(N)\}$, are generated by

    $y_{j,N}=G_ju_{j,N}.$ (3.10)

    Subtracting (3.3) from (3.10) gives

    $y_{j,N}-y_{j,0}=G_j(u_{j,N}-u_{j,0})\ \Rightarrow\ y_{j,N}(N)-y_{j,0}(N)=c_j(N)b_j(N-1).$ (3.11)

    According to the steps above, when the initial states of each agent in the system are fixed, we can accurately obtain the IOCPs $\{c_j(1)b_j(0),c_j(2)b_j(1),\dots,c_j(N)b_j(N-1)\}$ of each agent in the MAS by applying the iterative learning mechanism.

    Remark 3. Here, for $t\in T$, it is obvious that $y_{j,t+1}(t+1)-y_{j,0}(t+1)=c_j(t+1)b_j(t)$. After running the system $N$ times beyond the baseline run, we can accurately obtain the unknown IOCPs of the system. This iterative learning mechanism can only be applied to systems with fixed initial states and in the absence of noise. We will continue to study other types of systems.
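The N-step mechanism of Theorem 1 can be sketched numerically as follows. The plant matrices below are hypothetical placeholders and, as in the theorem, the identification itself never reads them: only the baseline run, the unit input bumps, and the measured outputs are used.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 5, 3
# Hypothetical (time-invariant) SISO agent; unknown to the identifier.
A = 0.9 * rng.random((n, n)) / n
b = rng.random((n, 1))
c = rng.random((1, n))

def run(u):
    """One trial from x(0) = 0; u has length N, returns y(1), ..., y(N)."""
    x = np.zeros((n, 1))
    y = np.zeros(N)
    for t in range(N):
        x = A @ x + b * u[t]
        y[t] = (c @ x).item()        # y[t] stores y(t+1)
    return y

u0 = rng.random(N)                   # Step 0: arbitrary baseline input
y0 = run(u0)
iocp = np.zeros(N)
for t in range(N):                   # Step t+1: bump u(t) by 1, read y(t+1)
    u = u0.copy()
    u[t] += 1.0
    iocp[t] = run(u)[t] - y0[t]      # = c(t+1) b(t); here constant c b

print(iocp)
```

Because the plant here is time-invariant, every identified entry equals the same product $cb$; in the time-varying case each step returns its own $c_j(t+1)b_j(t)$.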

    In the context of iterative learning control, the control input for subsequent iterations is refined through learning from the input data and errors encountered in the preceding iteration. Let ej,k(t+1)=yd(t+1)yj,k(t+1) denote the tracking error, where yd(t+1) is the desired trajectory of the system and can be regarded as the virtual leader. For the MAS (2.1), when combined with the communication topology, the agent's consistency tracking error can be described as ξj,k(t+1), defined as

    $\xi_{j,k}(t+1)=\sum_{i\in M_j}a_{ij}\big(y_{i,k}(t+1)-y_{j,k}(t+1)\big)+s_j\big(y_d(t+1)-y_{j,k}(t+1)\big).$ (3.12)

    This can be reformulated to express it in terms of tracking errors

    $\xi_{j,k}(t+1)=\sum_{i\in M_j}a_{ij}\big(e_{j,k}(t+1)-e_{i,k}(t+1)\big)+s_je_{j,k}(t+1).$ (3.13)

    Even though the parameters of the MASs remain unknown, the previously identified unknown IOCPs of these systems are utilized. The reciprocal of the IOCPs is selected as the learning gain for designing ILC to control the MASs. In our study, we focus on the iterative learning control strategy applicable to MASs, which is designed utilizing the IOCPs as follows:

    $u_{j,k+1}(t)=u_{j,k}(t)+\dfrac{1}{y_{j,t+1}(t+1)-y_{j,0}(t+1)}\,\xi_{j,k}(t+1),\quad t\in T.$ (3.14)
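A minimal per-sample sketch of the consensus error (3.12) and the update law (3.14); the helper names `consensus_error` and `ilc_update` are ours, and the gain vector `g` stands for the identified IOCPs:

```python
import numpy as np

def consensus_error(y, yd, A, s):
    """Consensus tracking error (3.12) at one sample time:
    xi_j = sum_i a_ij (y_i - y_j) + s_j (yd - y_j).
    y: (M,) agent outputs, yd: scalar leader output,
    A: (M, M) adjacency matrix, s: (M,) leader-access gains."""
    M = len(y)
    return np.array([sum(A[i, j] * (y[i] - y[j]) for i in range(M))
                     + s[j] * (yd - y[j]) for j in range(M)])

def ilc_update(u_k, xi_k, g):
    """ILC law (3.14): u_{k+1}(t) = u_k(t) + xi_k(t+1) / g(t),
    where g(t) is the identified IOCP c_j(t+1) b_j(t)."""
    return u_k + xi_k / g
```

For example, with two agents, only agent 1 hearing the leader, and zero outputs, all of the consensus error concentrates on agent 1, and one update moves only that agent's input.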

    Before proving Theorem 2, it is essential to introduce the definition of the λ-norm.

    Definition: The λ-norm of the discrete-time vector function h:{1,2,,N}Rn is defined as

    $\|h(t)\|_\lambda=\sup_{t\in\{1,2,\dots,N\}}\{\lambda^t\|h(t)\|\},\quad 0<\lambda<1,$

    where |||| is a vector norm on Rn.
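A direct transcription of this definition for scalar-valued signals (the function name is ours):

```python
import numpy as np

def lambda_norm(h, lam):
    """lambda-norm of a discrete-time signal h(1), ..., h(N):
    sup over t in {1, ..., N} of lam**t * |h(t)|, with 0 < lam < 1."""
    h = np.asarray(h, dtype=float)
    t = np.arange(1, len(h) + 1)
    return np.max(lam ** t * np.abs(h))
```

Because $\lambda^t$ discounts later samples, choosing a small $\lambda$ weights early-time errors most heavily, which is what makes the contraction argument in (3.23)–(3.26) work.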

    Theorem 2: Suppose that the MAS (2.1) satisfies Assumptions 1–3. If the communication topology matrix of the MAS (2.1) satisfies $\|I_{NM}-(P\otimes I_N)\|=\rho<1$, then the iterative learning control strategy (3.14) can precisely track the desired trajectory.

    Proof: For convenience of the proof, the system (2.1) can be solved by the induction method

    $x_{j,k}(t)=\Phi_j(t,0)x_{j,k}(0)+\sum_{s=0}^{t-1}\Phi_j(t,s+1)b_j(s)u_{j,k}(s),$ (3.15)

    where Φj(t,s) is the state transition matrix, which is determined by Aj(t), and Φj(t,s) satisfies

    $\Phi_j(t,s)=A_j(t-1)A_j(t-2)\cdots A_j(s),\ t>s,\quad\text{and}\quad\Phi_j(s,s)=I.$ (3.16)

    For the convenience of analysis, we write

    $x_k(t)=[x_{1,k}^T(t),x_{2,k}^T(t),\dots,x_{M,k}^T(t)]^T,$
    $\Phi(t,s)=\mathrm{diag}(\Phi_1(t,s),\Phi_2(t,s),\dots,\Phi_M(t,s)),$
    $b(t)=\mathrm{diag}(b_1(t),b_2(t),\dots,b_M(t)),$
    $u_k(t)=[u_{1,k}(t),u_{2,k}(t),\dots,u_{M,k}(t)]^T,$
    $y_d(t)=[y_d(t),y_d(t),\dots,y_d(t)]^T,$
    $y_k(t)=[y_{1,k}(t),y_{2,k}(t),\dots,y_{M,k}(t)]^T,$
    $e_k(t+1)=[e_{1,k}(t+1),e_{2,k}(t+1),\dots,e_{M,k}(t+1)]^T,$
    $C(t+1)=\mathrm{diag}(c_1(t+1),c_2(t+1),\dots,c_M(t+1)).$

    The compact representation can also be utilized to express the solution of the system (2.1)

    $x_k(t)=\Phi(t,0)x_k(0)+\sum_{s=0}^{t-1}\Phi(t,s+1)b(s)u_k(s).$ (3.17)

    The agent consistency tracking error (3.13) and the iterative learning control law (3.14) can be expressed as follows:

    $\xi_k(t+1)=(P\otimes I_N)e_k(t+1),$ (3.18)
    $u_{k+1}(t)=u_k(t)+H(t)(P\otimes I_N)e_k(t+1),$ (3.19)

    where $H(t)=\mathrm{diag}\Big(\dfrac{1}{y_{1,t+1}(t+1)-y_{1,0}(t+1)},\dfrac{1}{y_{2,t+1}(t+1)-y_{2,0}(t+1)},\dots,\dfrac{1}{y_{M,t+1}(t+1)-y_{M,0}(t+1)}\Big)$.

    By calculating the deviation in tracking,

    $e_{k+1}(t+1)=y_d(t+1)-y_{k+1}(t+1)$
    $=e_k(t+1)-C(t+1)[x_{k+1}(t+1)-x_k(t+1)]$
    $=e_k(t+1)-C(t+1)\sum_{s=0}^{t}\Phi(t+1,s+1)b(s)\big(u_{k+1}(s)-u_k(s)\big)$
    $=e_k(t+1)-C(t+1)\sum_{s=0}^{t}\Phi(t+1,s+1)b(s)H(s)(P\otimes I_N)e_k(s+1).$ (3.20)

    By taking out the s=t item, we get

    $e_{k+1}(t+1)=e_k(t+1)-C(t+1)b(t)H(t)(P\otimes I_N)e_k(t+1)-\sum_{s=0}^{t-1}C(t+1)\Phi(t+1,s+1)b(s)H(s)(P\otimes I_N)e_k(s+1)$
    $=\big(I_{NM}-(P\otimes I_N)\big)e_k(t+1)-\sum_{s=0}^{t-1}C(t+1)\Phi(t+1,s+1)b(s)H(s)(P\otimes I_N)e_k(s+1).$ (3.21)

    Taking the norm in both sides of (3.21), we have

    $\|e_{k+1}(t+1)\|\le\|I_{NM}-(P\otimes I_N)\|\,\|e_k(t+1)\|+\sum_{s=0}^{t-1}\|C(t+1)\Phi(t+1,s+1)b(s)H(s)(P\otimes I_N)\|\,\|e_k(s+1)\|.$ (3.22)

    We now write

    $\|I_{NM}-(P\otimes I_N)\|=\rho,\quad 0<\rho<1,$
    $\sup_{t\in\{1,2,\dots,N\},\,s\in\{0,1,\dots,t-1\}}\|C(t+1)\Phi(t+1,s+1)b(s)H(s)(P\otimes I_N)\|=\sigma.$

    For each t{1,2,,N}, multiply by λt+1 on both sides of Eq (3.22), where 0<λ<1, which yields

    $\lambda^{t+1}\|e_{k+1}(t+1)\|\le\rho\lambda^{t+1}\|e_k(t+1)\|+\sigma\sum_{s=0}^{t-1}\lambda^{t-s}\lambda^{s+1}\|e_k(s+1)\|$
    $\le\rho\lambda^{t+1}\|e_k(t+1)\|+\sigma\sum_{s=0}^{t-1}\lambda^{t-s}\sup_{s\in\{1,2,\dots,N\}}\{\lambda^{s+1}\|e_k(s+1)\|\}$
    $\le\rho\lambda^{t+1}\|e_k(t+1)\|+\sigma\dfrac{\lambda(1-\lambda^N)}{1-\lambda}\sup_{t\in\{1,2,\dots,N\}}\{\lambda^{t+1}\|e_k(t+1)\|\}.$ (3.23)

    Therefore, we obtain

    $\sup_{t\in\{1,2,\dots,N\}}\{\lambda^{t+1}\|e_{k+1}(t+1)\|\}\le\Big(\rho+\sigma\dfrac{\lambda(1-\lambda^N)}{1-\lambda}\Big)\sup_{t\in\{1,2,\dots,N\}}\{\lambda^{t+1}\|e_k(t+1)\|\}.$ (3.24)

    According to the definition of the $\lambda$-norm, we have

    $\|e_{k+1}(t+1)\|_\lambda\le\Big(\rho+\sigma\dfrac{\lambda(1-\lambda^N)}{1-\lambda}\Big)\|e_k(t+1)\|_\lambda.$ (3.25)

    Under the condition 0<ρ<1 and if we select a sufficiently small λ, this yields

    $0<\rho+\sigma\dfrac{\lambda(1-\lambda^N)}{1-\lambda}<1.$ (3.26)

    By (3.25) and (3.26), we have

    $\lim_{k\to\infty}\|e_{k+1}(t+1)\|_\lambda=0.$ (3.27)

    This means that the agents can accurately track the desired trajectory. Therefore, if $\|I_{NM}-(P\otimes I_N)\|=\rho<1$ is guaranteed, the MAS can complete the consistency tracking task of the desired trajectory under the ILC law (3.14).

    In Section 3, for MASs with a fixed initial state and no noise, the IOCPs can be accurately obtained by designing an iterative learning mechanism; in the actual environment, however, the system often contains different types of noise. In this section, we consider the situation where the MASs incorporate measurement noise and identify the IOCPs of the system through a suitable iterative learning mechanism. Then, by integrating it into the ILC law, we aim to achieve consistent tracking of the desired trajectory for the MASs.

    We account for the inclusion of measurement noise within system (2.1) with

    $\begin{cases}x_{j,k}(t+1)=A_j(t)x_{j,k}(t)+b_j(t)u_{j,k}(t), & t\in T,\\ y_{j,k}(t)=c_j(t)x_{j,k}(t)+\omega_{j,k}(t), & t\in T_+,\end{cases}$ (4.1)

    where ωj,k(t) is measurement noise that satisfies Assumption 4.

    Assumption 4: The measurement noise $\omega_{j,k}(t)$ is independent and identically distributed. The expectation and variance of $\omega_{j,k}(t)$ are $E\{\omega_{j,k}(t)\}=0$ and $E\{|\omega_{j,k}(t)|^2\}=\chi_j^2$, respectively, and the noise signal is bounded as $|\omega_{j,k}(t)|\le\varpi_j$.

    To mitigate the impact of noise on the iterative learning process and bolster its robustness, this part introduces the concept of the maximum allowable control deviation, denoted $\delta_{j,t}$, into the design of the iterative learning mechanism. The maximum allowable control deviation $\delta_{j,t}$ $(t\in T)$ is defined as the difference between the maximum and minimum allowable control signals. If $\delta_{j,t}=0$, we are unable to obtain valuable information regarding the controlled system. Therefore, we assume $\delta_{j,t}>0$ $(t\in T)$ in this part.

    Theorem 3: Assume that the MAS (4.1) is subject to Assumptions 3 and 4. The IOCPs $\{c_j(1)b_j(0),c_j(2)b_j(1),\dots,c_j(N)b_j(N-1)\}$ can then be captured by designing a proper iterative learning mechanism through running the system (4.1) $N$ times. Moreover, the expectation and variance of the IOCP estimates are

    $E\Big\{\dfrac{y_{j,t+1}(t+1)-y_{j,0}(t+1)}{\delta_{j,t+1}}\Big\}=c_j(t+1)b_j(t),\quad t\in T,$
    $E\Big\{\Big|\dfrac{y_{j,t+1}(t+1)-y_{j,0}(t+1)}{\delta_{j,t+1}}-c_j(t+1)b_j(t)\Big|^2\Big\}=\dfrac{2\chi_j^2}{\delta_{j,t+1}^2},\quad t\in T.$

    We define the super-vectors as follows:

    $\omega_{j,k}=[\omega_{j,k}(1),\omega_{j,k}(2),\dots,\omega_{j,k}(N)]^T.$

    Thus, the system (4.1) can be described as

    $y_{j,k}=G_ju_{j,k}+\omega_{j,k}.$ (4.2)

    The detailed process is summarized as follows:

    Step 0: The control input signals of each agent are arbitrarily given as

    $u_{j,0}=\{u_{j,0}(0),u_{j,0}(1),\dots,u_{j,0}(N-1)\}.$ (4.3)

    Then, the outputs of the MAS (4.1), $y_{j,0}=\{y_{j,0}(1),y_{j,0}(2),\dots,y_{j,0}(N)\}$, are generated by

    $y_{j,0}=G_ju_{j,0}+\omega_{j,0}.$ (4.4)

    Step 1: Select the control input signals $u_{j,1}=\{u_{j,1}(0),u_{j,1}(1),\dots,u_{j,1}(N-1)\}$ that satisfy

    $u_{j,1}=\{u_{j,0}(0)+\delta_{j,1},u_{j,0}(1),u_{j,0}(2),\dots,u_{j,0}(N-1)\},$ (4.5)

    where δj,1 is the maximum allowable control deviation.

    Then, the outputs of the MAS (4.1) yj,1={yj,1(1),yj,1(2),,yj,1(N)} are generated by

    yj,1=Gjuj,1+ωj,1. (4.6)

    Subtracting (4.4) from (4.6) yields

    $y_{j,1}-y_{j,0}=G_j(u_{j,1}-u_{j,0})+(\omega_{j,1}-\omega_{j,0})\ \Rightarrow\ \dfrac{y_{j,1}(1)-y_{j,0}(1)}{\delta_{j,1}}=c_j(1)b_j(0)+\dfrac{\omega_{j,1}(1)-\omega_{j,0}(1)}{\delta_{j,1}}.$ (4.7)

    According to Assumption 4 of measurement noise, the mean and variance are calculated as

    $\begin{cases}E\Big\{\dfrac{y_{j,1}(1)-y_{j,0}(1)}{\delta_{j,1}}\Big\}=c_j(1)b_j(0),\\[2mm] E\Big\{\Big|\dfrac{y_{j,1}(1)-y_{j,0}(1)}{\delta_{j,1}}-c_j(1)b_j(0)\Big|^2\Big\}=\dfrac{2\chi_j^2}{\delta_{j,1}^2}.\end{cases}$ (4.8)

    Step 2: Select the control input signals $u_{j,2}=\{u_{j,2}(0),u_{j,2}(1),\dots,u_{j,2}(N-1)\}$ that satisfy

    $u_{j,2}=\{u_{j,0}(0),u_{j,0}(1)+\delta_{j,2},u_{j,0}(2),\dots,u_{j,0}(N-1)\},$ (4.9)

    where $\delta_{j,2}$ is the maximum allowable control deviation.

    Then, the outputs of the MAS (4.1), $y_{j,2}=\{y_{j,2}(1),y_{j,2}(2),\dots,y_{j,2}(N)\}$, are generated by

    $y_{j,2}=G_ju_{j,2}+\omega_{j,2}.$ (4.10)

    Subtracting (4.4) from (4.10) yields

    $y_{j,2}-y_{j,0}=G_j(u_{j,2}-u_{j,0})+(\omega_{j,2}-\omega_{j,0})\ \Rightarrow\ \dfrac{y_{j,2}(2)-y_{j,0}(2)}{\delta_{j,2}}=c_j(2)b_j(1)+\dfrac{\omega_{j,2}(2)-\omega_{j,0}(2)}{\delta_{j,2}}.$ (4.11)

    By the measurement noise limitation in Assumption 4, the mean and variance are calculated as follows:

    $\begin{cases}E\Big\{\dfrac{y_{j,2}(2)-y_{j,0}(2)}{\delta_{j,2}}\Big\}=c_j(2)b_j(1),\\[2mm] E\Big\{\Big|\dfrac{y_{j,2}(2)-y_{j,0}(2)}{\delta_{j,2}}-c_j(2)b_j(1)\Big|^2\Big\}=\dfrac{2\chi_j^2}{\delta_{j,2}^2}.\end{cases}$ (4.12)

    Proceeding analogously through the remaining sample times, at the last step only the final input sample is perturbed.

    Step N: Select the control input signals $u_{j,N}=\{u_{j,N}(0),u_{j,N}(1),\dots,u_{j,N}(N-1)\}$ that satisfy

    $u_{j,N}=\{u_{j,0}(0),u_{j,0}(1),u_{j,0}(2),\dots,u_{j,0}(N-1)+\delta_{j,N}\},$

    where $\delta_{j,N}$ is the maximum allowable control deviation.

    Then, the outputs of the MAS (4.1) yj,N={yj,N(1),yj,N(2),,yj,N(N)} are generated by

    yj,N=Gjuj,N+ωj,N. (4.13)

    Subtracting (4.4) from (4.13) yields

    $y_{j,N}-y_{j,0}=G_j(u_{j,N}-u_{j,0})+(\omega_{j,N}-\omega_{j,0})\ \Rightarrow\ \dfrac{y_{j,N}(N)-y_{j,0}(N)}{\delta_{j,N}}=c_j(N)b_j(N-1)+\dfrac{\omega_{j,N}(N)-\omega_{j,0}(N)}{\delta_{j,N}}.$ (4.14)

    By the measurement noise limitation in Assumption 4, the mean and variance are calculated as

    $\begin{cases}E\Big\{\dfrac{y_{j,N}(N)-y_{j,0}(N)}{\delta_{j,N}}\Big\}=c_j(N)b_j(N-1),\\[2mm] E\Big\{\Big|\dfrac{y_{j,N}(N)-y_{j,0}(N)}{\delta_{j,N}}-c_j(N)b_j(N-1)\Big|^2\Big\}=\dfrac{2\chi_j^2}{\delta_{j,N}^2}.\end{cases}$ (4.15)

    According to the steps above, when the initial states of each agent in the system are fixed, we can accurately obtain the IOCPs $\{c_j(1)b_j(0),c_j(2)b_j(1),\dots,c_j(N)b_j(N-1)\}$ of each agent in the MAS by applying the identification mechanism.

    Remark 4. Owing to the randomness of the noise signal, a single measurement of the output is unreliable. Hence, we obtain the IOCPs by repeating the experiment and taking the mean value. Suppose that we repeat the experiment $Q$ times. We can calculate the mean as the approximate value $\overline{c_j(t+1)b_j(t)}$, that is, $\overline{c_j(t+1)b_j(t)}=\dfrac{1}{Q}\sum_{q=1}^{Q}\dfrac{y_{j,t+1}^q(t+1)-y_{j,0}^q(t+1)}{\delta_{j,t+1}}$, where the superscript $q$ represents the number of the experiment. The accuracy of the IOCPs is improved through multiple experiments.
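Remark 4's averaged estimator can be sketched for a single hypothetical scalar agent; the plant constants, the noise level, the deviation $\delta$, and the repetition count $Q$ below are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N, Q = 4, 2000
a_sys, b_sys, c_sys = 0.7, 0.4, 1.2   # hypothetical scalar agent (unknown to us)
sigma = 0.01                           # measurement-noise std (Assumption 4)
delta = 10.0                           # maximum allowable control deviation

def run(u):
    """One noisy trial from x(0) = 0; returns y(1), ..., y(N)."""
    x, y = 0.0, np.zeros(N)
    for t in range(N):
        x = a_sys * x + b_sys * u[t]
        y[t] = c_sys * x + sigma * rng.standard_normal()
    return y

u0 = np.ones(N)
est = np.zeros(N)
for q in range(Q):                     # repeat the experiment Q times
    y0 = run(u0)                       # fresh baseline run (fresh noise)
    for t in range(N):
        u = u0.copy()
        u[t] += delta                  # Step t+1 perturbs u(t) by delta
        est[t] += (run(u)[t] - y0[t]) / delta
est /= Q                               # averaged IOCP estimates

print(est, c_sys * b_sys)
```

Each single-trial estimate has variance $2\chi^2/\delta^2$, so averaging over $Q$ trials shrinks the estimation error roughly by $1/\sqrt{Q}$, and a larger $\delta$ further suppresses the noise, matching Theorem 3.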

    The ILC law (3.14), which is based on the estimated IOCPs, is utilized to regulate the MAS (4.1) with measurement noise and ensure consistent tracking of the desired trajectory. For simplicity, write $z_j(t)=E\Big\{\dfrac{y_{j,t+1}(t+1)-y_{j,0}(t+1)}{\delta_{j,t+1}}\Big\}$.

    According to Theorem 3, we can derive

    $u_{j,k+1}(t)=u_{j,k}(t)+\dfrac{1}{z_j(t)}\xi_{j,k}(t+1),\quad t\in T.$ (4.16)

    Theorem 4: Assume that the iterative learning control strategy (4.16) is utilized for the MAS (4.1) under Assumptions 1–4. The tracking error is bounded if the communication topology matrix of the MAS (4.1) satisfies $\|I_{NM}-(P\otimes I_N)\|=\rho<1$.

    Proof: We calculate the solution of the system (4.1) as follows

    $y_{j,k}(t+1)=c_j(t+1)\Phi_j(t+1,0)x_{j,k}(0)+c_j(t+1)\sum_{s=0}^{t}\Phi_j(t+1,s+1)b_j(s)u_{j,k}(s)+\omega_{j,k}(t+1).$ (4.17)

    Utilizing the result of the calculation (3.17), we can represent the solution of the system (4.1) as

    $y_k(t+1)=C(t+1)x_k(t+1)+\omega_k(t+1),$ (4.18)

    where $\omega_k(t+1)=[\omega_{1,k}(t+1),\omega_{2,k}(t+1),\dots,\omega_{M,k}(t+1)]^T$.

    By computing the tracking error, we have

    $e_{k+1}(t+1)=\big(I_{NM}-C(t+1)b(t)Z(t)(P\otimes I_N)\big)e_k(t+1)-\sum_{s=0}^{t-1}C(t+1)\Phi(t+1,s+1)b(s)Z(s)(P\otimes I_N)e_k(s+1)-\big(\omega_{k+1}(t+1)-\omega_k(t+1)\big),$ (4.19)

    where $Z(t)=\mathrm{diag}\Big(\dfrac{1}{z_1(t)},\dfrac{1}{z_2(t)},\dots,\dfrac{1}{z_M(t)}\Big)$.

    The demonstration adheres to the pattern of Theorem 2. Consequently, for the sake of brevity, the comprehensive steps have been excluded.

    Write $\Delta\omega_{k+1}(t+1)=\omega_{k+1}(t+1)-\omega_k(t+1)$. On the basis of the noise bound in Assumption 4, we know $\|\Delta\omega_{j,k+1}(t+1)\|\le 2N\varpi_j$, so that $\Delta\omega_{k+1}(t+1)$ satisfies

    $\|\Delta\omega_{k+1}(t+1)\|\le 2NM\varpi=\tilde{\omega},$ (4.20)

    where $\varpi=\max_j\varpi_j$.

    Taking the norm in the both sides of (4.19) gives

    $\|e_{k+1}(t+1)\|\le\|I_{NM}-C(t+1)b(t)Z(t)(P\otimes I_N)\|\,\|e_k(t+1)\|+\sum_{s=0}^{t-1}\|C(t+1)\Phi(t+1,s+1)b(s)Z(s)(P\otimes I_N)\|\,\|e_k(s+1)\|+\tilde{\omega}.$ (4.21)

    For each $t\in\{1,2,\dots,N\}$, multiplying both sides of the inequality (4.21) by $\lambda^{t+1}$, with $0<\lambda<1$, and applying the derivation process from (3.23) to (3.24), we get

    $\sup_{t\in\{1,2,\dots,N\}}\{\lambda^{t+1}\|e_{k+1}(t+1)\|\}\le\Big(\rho+\sigma\dfrac{\lambda(1-\lambda^N)}{1-\lambda}\Big)\sup_{t\in\{1,2,\dots,N\}}\{\lambda^{t+1}\|e_k(t+1)\|\}+\tilde{\omega}.$ (4.22)

    Let $\rho_1=\rho+\sigma\dfrac{\lambda(1-\lambda^N)}{1-\lambda}$. Since $0<\rho<1$, by choosing a sufficiently small $\lambda$, we have $0<\rho_1<1$.

    Thus, by (4.22), we have

    $\|e_{k+1}(t+1)\|_\lambda\le\rho_1\|e_k(t+1)\|_\lambda+\tilde{\omega},$ (4.23)

    which implies that

    $\lim_{k\to\infty}\|e_{k+1}(t+1)\|_\lambda\le\dfrac{\tilde{\omega}}{1-\rho_1}.$ (4.24)

    Consequently, the tracking error is confined within a certain limit, thereby concluding the proof.

    To assess the efficacy of the introduced algorithm, a linear time-varying discrete MAS composed of one virtual leader and four agents is considered.

    Example 1. The multi-agent system without measurement noise

    The desired trajectory, regarded as the output of the virtual leader, is $y_d(t)=1.1\sin(\pi t/20)+0.25\cos(\pi t/20)$. The parametric equations of each agent are expressed as

    {x1,k(t+1)=(10.0090010.0090.05400.27110.2370.27e2t)x1,k(t)+(000.009)u1,k(t),tTy1,k(t)=(63084+30e2t)x1,k(t),tT+ (5.1)
    {x2,k(t+1)=(10.0090010.0090.05400.27110.2370.27e2t)x2,k(t)+(000.009)u2,k(t),tTy2,k(t)=(63055+30e2t)x2,k(t),tT+ (5.2)
    {x3,k(t+1)=(10.0090010.0090.05400.27110.2370.27e2t)x3,k(t)+(000.009)u3,k(t),tTy3,k(t)=(63075+20e2t)x3,k(t),tT+ (5.3)
    {x4,k(t+1)=(10.0090010.0090.05400.27110.2370.27e2t)x4,k(t)+(000.009)u4,k(t),tTy4,k(t)=(62560+30e2t)x4,k(t),tT+ (5.4)

    where $T=\{0,1,\dots,79\}$ and $T_+=\{1,2,\dots,80\}$.

    As shown in Figure 1, node 0 represents the virtual leader, which provides the desired trajectory of the system; the remaining four nodes are Agents 1 to 4. According to Figure 1, Agents 1 and 4 have direct access to the desired trajectory information from the virtual leader 0, while Agents 2 and 3 can obtain information only indirectly by communicating with Agents 1 and 4. The degree matrix $S$ and the Laplacian matrix $L$ are calculated as follows:

$$S=\mathrm{diag}(0.2,\,0,\,0,\,0.2),\qquad L=\begin{pmatrix}0.4&-0.4&0&0\\-0.2&0.7&-0.5&0\\0&-0.2&0.7&-0.5\\0&0&-0.2&0.2\end{pmatrix}.$$
    Figure 1.  Communication topology between agents.

    Thus, the convergence condition is satisfied, since $\|I_{NM}-(P\otimes I_N)\|=\rho=0.9428<1$. Suppose that the IOCP identification mechanism in Section 3.1 and the ILC law (3.14) are applied to the system (3.1). The initial states of the agents are set as $x_{1,k}(0)=x_{2,k}(0)=x_{3,k}(0)=x_{4,k}(0)=0$, and the initial control inputs are chosen as $u_{1,0}(t)=u_{2,0}(t)=u_{3,0}(t)=u_{4,0}(t)=0.1$. The results are depicted in Figures 2 and 3.
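The reported contraction factor can be reproduced from the topology above. The sketch below assumes the coupling matrix $P$ equals $L+S$ (an assumption, but one consistent with the reported value 0.9428) and checks the Laplacian structure along the way.

```python
import numpy as np

# Leader-access (degree) matrix and graph Laplacian from Figure 1.
S = np.diag([0.2, 0.0, 0.0, 0.2])
L = np.array([[ 0.4, -0.4,  0.0,  0.0],
              [-0.2,  0.7, -0.5,  0.0],
              [ 0.0, -0.2,  0.7, -0.5],
              [ 0.0,  0.0, -0.2,  0.2]])
# Every row of a graph Laplacian sums to zero.
assert np.allclose(L.sum(axis=1), 0.0)

# Assuming P = L + S, the contraction factor is the spectral norm of I - P
# (the output dimension is 1 here, so the Kronecker factor drops out).
P = L + S
rho = np.linalg.norm(np.eye(4) - P, 2)
print(round(rho, 4))   # the text reports 0.9428
```

The spectral (2-)norm is used here; the condition $\rho<1$ is what the convergence analysis requires.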

    Figure 2.  Tracking errors of each agent.
    Figure 3.  Outputs of agents at (a) the 5th iteration and (b) the 40th iteration.

    Figure 2 shows the tracking error of each agent. It can be seen that, under the ILC law (3.14), the tracking error of each agent decreases gradually and converges to zero as the number of iterations increases.

    Figure 3(a), (b) displays the outputs of the MAS at the 5th and 40th iterations. As the number of iterations rises, the ILC law guides the output of all followers closer to the desired trajectory. After 40 iterations, all agents have achieved successful tracking of the desired trajectory. This validates the effectiveness of the algorithm.
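The noise-free experiment can be reproduced in outline. The sketch below is a simplified stand-in for law (3.14): it assumes a distributed P-type update $u_{j,k+1}(t)=u_{j,k}(t)+\xi_{j,k}(t+1)/(c_j(t+1)b)$, where $\xi_{j,k}$ is the neighborhood tracking error built from $L$ and $S$ above and the IOCPs $c_j(t+1)b$ are taken as exactly identified.

```python
import numpy as np

T_END, K = 80, 150                      # time horizon and ILC iterations

def A(t):
    # Shared state matrix of (5.1)-(5.4) as printed above
    return np.array([[ 1.0,    0.009,   0.0],
                     [ 0.0,    1.0,     0.009],
                     [-0.054, -0.2711,  0.237 - 0.27*np.exp(-2.0*t)]])

b = np.array([0.0, 0.0, 0.009])

def c(j, t):
    # Output rows c_j(t), j = 0..3 standing for Agents 1-4
    head = [(6, 30), (6, 30), (6, 30), (6, 25)][j]
    tail = [84 + 30*np.exp(-2.0*t), 55 + 30*np.exp(-2.0*t),
            75 + 20*np.exp(-2.0*t), 60 + 30*np.exp(-2.0*t)][j]
    return np.array([head[0], head[1], tail])

def yd(t):
    return 1.1*np.sin(np.pi*t/20) + 0.25*np.cos(np.pi*t/20)

# Topology of Figure 1: adjacency weights and leader-access gains
adj = np.array([[0.0, 0.4, 0.0, 0.0],
                [0.2, 0.0, 0.5, 0.0],
                [0.0, 0.2, 0.0, 0.5],
                [0.0, 0.0, 0.2, 0.0]])
s = np.array([0.2, 0.0, 0.0, 0.2])

u = 0.1*np.ones((4, T_END))             # initial control inputs
err_hist = []
for k in range(K):
    y = np.zeros((4, T_END + 1))
    for j in range(4):
        x = np.zeros(3)                 # zero initial state
        for t in range(T_END):
            x = A(t) @ x + b*u[j, t]
            y[j, t + 1] = c(j, t + 1) @ x
    err_hist.append(max(abs(yd(t) - y[j, t])
                        for j in range(4) for t in range(1, T_END + 1)))
    # Distributed P-type update with learning gain 1/(c_j(t+1) b)
    for j in range(4):
        for t in range(T_END):
            xi = sum(adj[j, i]*(y[i, t + 1] - y[j, t + 1]) for i in range(4)) \
                 + s[j]*(yd(t + 1) - y[j, t + 1])
            u[j, t] += xi/(c(j, t + 1) @ b)
print(err_hist[0], err_hist[-1])        # maximal tracking error falls markedly
```

Since $\xi_{j,k}=((L+S)e_k)_j$ and the learning gain cancels $c_j(t+1)b$ exactly, the per-iteration contraction is governed by $\|I-(L+S)\|\approx 0.9428$, matching the gradual decay seen in Figure 2.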

    Example 2. The multi-agent systems (5.1)–(5.4) with measurement noise

    In this part, we verify the effectiveness of the proposed algorithm (4.16) in the MASs (5.1)–(5.4) with measurement noise.

    On the basis of the MASs (5.1)–(5.4), the initial conditions and topology are set as in Example 1. Assume the measurement noise follows a Gaussian distribution with zero mean and variance $0.01^2$, i.e., $\omega_{j,k}(t)\sim N(0,0.01^2)$.

    We take Agent 1 for observation to illustrate the validity of Theorem 3. In addition, to illustrate the influence of different maximum allowable control deviations $\delta_{1,t}$ on the approximation errors of the IOCPs, we consider three cases: $\delta_{1,t}=5,\,10,\,20$.

    When measurement noise is involved, we average over 100 independent experiments to obtain the approximate IOCPs under the different maximum allowable control deviations. Figure 4 shows the errors of the averaged observed values compared with the real values:

$$\left|\frac{1}{100}\sum_{q=1}^{100}\frac{y_{j,t}^{q}(t+1)-y_{j,0}^{q}(t+1)}{\delta_{j,t}}-c_j(t+1)b_j(t)\right|.$$ (5.5)
    Figure 4.  Errors of the observed values with different values of δ1,t.

    Figure 5 displays the variances of the observed values in the presence of measurement noise under the different maximum allowable control deviations, which evaluates the dispersion of the observed values:

$$\frac{1}{100}\sum_{q=1}^{100}\left[\frac{y_{j,t}^{q}(t+1)-y_{j,0}^{q}(t+1)}{\delta_{j,t}}-c_j(t+1)b_j(t)\right]^2.$$ (5.6)
    Figure 5.  Variance of the observed values with different values of δ1,t.

    The results presented in both Figures 4 and 5 clearly reveal that the approximation error of the IOCPs exhibits an inverse relationship with the permissible maximum control error. In other words, as the maximum allowable control deviation increases, the approximate error of IOCPs tends to decrease.
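The trend in Figures 4 and 5 follows directly from the estimator's structure: the measurement noise enters the quotient $(y_{j,t}-y_{j,0})/\delta_{j,t}$ scaled by $1/\delta_{j,t}$, so a larger deviation gives a tighter estimate. A Monte-Carlo sketch, with an illustrative IOCP value and the noise level of this example at a single fixed time instant:

```python
import numpy as np

# The IOCP estimate (y_delta - y_0)/delta -> c(t+1)b(t) gets *less* noisy
# as the allowable control deviation delta grows, since the noise on the
# two output measurements is divided by delta.
# cb is an illustrative stand-in for c_j(t+1)b_j(t) at one time instant.
rng = np.random.default_rng(0)
cb, sigma, n = 0.9, 0.01, 10_000
spread = {}
for delta in (5, 10, 20):
    y_delta = cb*delta + rng.normal(0, sigma, n)   # perturbed-input output
    y_zero = rng.normal(0, sigma, n)               # baseline output
    est = (y_delta - y_zero)/delta                 # per-experiment estimate
    spread[delta] = est.std()                      # ~ sqrt(2)*sigma/delta
print(spread)
```

The standard deviation of the estimate scales as $\sqrt{2}\sigma/\delta_{j,t}$, which is exactly the inverse relationship reported above.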

    In Figure 6, the tracking error of each agent is displayed. Under the ILC law (4.16), it can be clearly observed that the tracking error of each agent gradually converges to a fixed bound as the number of iterations increases. This shows the effectiveness and reliability of the algorithm proposed in this paper.

    Figure 6.  Tracking error of MASs with measurement noise.

    In Figure 7(a), (b), the outputs of the MASs are demonstrated at the 5th and 40th iterations, respectively. It can be observed that the outputs of all followers tend towards the desired trajectory. This simulation result is consistent with the boundedness established in Theorem 4, and it also illustrates that the proposed ILC law (4.16) is effective.

    Figure 7.  Outputs of the agents at (a) the 5th iteration and (b) the 40th iteration.

    Example 3. Engineering practice

    To verify the feasibility of the distributed data-driven ILC algorithm, we take the Parrot AR unmanned aerial vehicle (UAV) [25], which features periodic operation, as the simulation object. To identify the yaw model, a step-response experiment was designed and the relevant flight data were recorded. The mathematical model in transfer function form is as follows:

$$\frac{\psi(s)}{U_{\psi}(s)}=\frac{k\omega_n^{2}}{s^{3}+2\zeta\omega_n s^{2}+\omega_n^{2}s}.$$ (5.7)

    The rate of the yaw angle $\dot{\psi}$ of the UAV is related to the control input. The dynamic model for yaw orientation, represented in state space, is given as follows:

$$x(t)=\begin{bmatrix}\psi&\dot{\psi}&\ddot{\psi}\end{bmatrix}^{\top},\quad U_{\psi}(t)=u(t),\quad \dot{x}(t)=\begin{bmatrix}\dot{\psi}&\ddot{\psi}&\dddot{\psi}\end{bmatrix}^{\top},\quad y(t)=\begin{bmatrix}1&0&0\end{bmatrix}x(t).$$ (5.8)

    If we consider the MASs consisting of four UAVs, the state space model of each UAV is obtained as follows:

$$\begin{cases}\dot{x}_{j,k}(t)=\begin{bmatrix}0&1&0\\0&0&1\\0&-\omega_n^{2}&-2\zeta\omega_n\end{bmatrix}x_{j,k}(t)+\begin{bmatrix}0\\0\\k\omega_n^{2}\end{bmatrix}U_{j,k}(t)\\y_{j,k}(t)=\begin{bmatrix}1&0&0\end{bmatrix}x_{j,k}(t),\end{cases}\quad j\in\{1,2,3,4\}.$$ (5.9)
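The state-space form (5.9) is the controllable canonical realization of (5.7). This can be checked numerically by comparing $C(sI-A)^{-1}B$ with the transfer function at an arbitrary test point (the point $s=2+3j$ below is an arbitrary choice, not from the paper):

```python
import numpy as np

# Check that the canonical form (5.9) realizes the yaw transfer function (5.7).
k_, wn, zeta = 106.3128, 15.0994, 1.1002
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, -wn**2, -2*zeta*wn]])
B = np.array([[0.0], [0.0], [k_*wn**2]])
C = np.array([[1.0, 0.0, 0.0]])

s = 2.0 + 3.0j                                   # arbitrary test frequency
G_ss = (C @ np.linalg.solve(s*np.eye(3) - A, B))[0, 0]
G_tf = k_*wn**2/(s**3 + 2*zeta*wn*s**2 + wn**2*s)
print(abs(G_ss - G_tf))                          # agreement up to rounding
```

Agreement at a generic complex point (up to floating-point error) confirms the realization, since both expressions are rational functions of $s$.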

    Set $k=106.3128$, $\omega_n=15.0994$, $\zeta=1.1002$. Using a zero-order hold for discretization, with the sampling period and sampling step set as $T=1\,\mathrm{s}$ and $T_s=0.05\,\mathrm{s}$, we get the following discrete system:

$$\begin{cases}x_{j,k}(t+1)=\begin{bmatrix}1.0000&0.0468&0.0007\\0&0.8314&0.0222\\0&-5.0676&0.0929\end{bmatrix}x_{j,k}(t)+\begin{bmatrix}0.3409\\17.9218\\538.753\end{bmatrix}U_{j,k}(t)\\y_{j,k}(t)=\begin{bmatrix}1&0&0\end{bmatrix}x_{j,k}(t)+\omega_{j,k}(t),\end{cases}\quad j\in\{1,2,3,4\}.$$ (5.10)
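The discrete matrices in (5.10) can be reproduced from (5.9). The sketch below performs the zero-order-hold discretization via the augmented-matrix identity $e^{\left[\begin{smallmatrix}A&B\\0&0\end{smallmatrix}\right]T_s}=\left[\begin{smallmatrix}A_d&B_d\\0&I\end{smallmatrix}\right]$; `expm` here is a local scaling-and-squaring helper (numpy only), not a library routine.

```python
import numpy as np

def expm(M, terms=20, s=12):
    # Matrix exponential by Taylor series with scaling and squaring.
    A = M/2.0**s
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

k_, wn, zeta, Ts = 106.3128, 15.0994, 1.1002, 0.05
Ac = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.0, -wn**2, -2*zeta*wn]])
Bc = np.array([[0.0], [0.0], [k_*wn**2]])

# Zero-order-hold discretization via the augmented-matrix trick.
M = np.zeros((4, 4))
M[:3, :3] = Ac
M[:3, 3:] = Bc
E = expm(M*Ts)
Ad, Bd = E[:3, :3], E[:3, 3]
print(np.round(Ad, 4))
print(np.round(Bd, 3))
```

The result matches the matrices printed in (5.10) to display precision, which also serves as a consistency check on the reconstructed continuous-time model.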

    Assume that the four agents have the same structure. The communication topology and initial states are identical to those in Example 1. It is easy to calculate $\rho=0.9428<1$. In addition, we set the measurement noise as $\omega_{j,k}(t)\sim N(0,0.02^2)$. The initial control input of each agent is chosen as 0.03. Let $y_d(t)=1.1\sin(\pi t/10)+0.25\cos(\pi t/10)$. The results are depicted in Figures 8–10.

    Figure 8.  Errors of the observed values with different values of δt.
    Figure 9.  Tracking errors of each agent.
    Figure 10.  Output of the agents at the 60th iteration.

    We consider three cases: $\delta_t=5,\,10,\,20$. Figure 8 shows the errors of the expected observed values compared with the real values. Figure 9 shows the convergence of the tracking errors of the UAV systems under the ILC law (4.16). Figure 10 displays the output of each agent.

    As the number of iterations increases, the tracking error of the agents gradually tends to a fixed bound. After 60 iterations, the output of the agents approaches the desired trajectory. Therefore, the proposed control law (4.16) can effectively steer the UAVs to complete the task of tracking the desired trajectory, and it is suitable for practical engineering applications.

    This paper investigated a distributed data-driven ILC strategy for linear time-varying MASs with unknown parameters. First, an N-step iterative learning mechanism was designed to identify the unknown IOCPs by exploiting the repetitiveness of the control system and its input and output data. Then the reciprocal of the identified IOCPs was selected as the learning gain to construct the ILC law. Second, we considered the presence of measurement noise in the MASs, and the maximum allowable control deviation was introduced to minimize the adverse effect of measurement noise on the identification of the IOCPs. Rigorous theoretical analysis was used to verify the effectiveness of the proposed ILC protocol, and the method was illustrated by three examples. This investigation provides a feasible scheme for MASs to track a desired trajectory.

    The authors declare they have not used artificial intelligence (AI) tools in the creation of this article.

    This research was supported in part by the Science and Technology Program Project of Xi'an City (22GXFW0037).

    The authors declare there is no conflict of interest.



    [1] S. Arimoto, S. Kawamura, F. Miyazaki, Bettering operation of robots by learning, J. Rob. Syst., 1 (1984), 123–140. https://doi.org/10.1002/rob.4620010203
    [2] J. Huang, W. Wang, X. Su, Adaptive iterative learning control of multiple autonomous vehicles with a time-varying reference under actuator faults, IEEE Trans. Neural Networks Learn. Syst., 32 (2021), 5512–5525. https://doi.org/10.1109/TNNLS.2021.3069209
    [3] S. Saab, D. Shen, M. Orabi, D. Kors, R. Jaafar, Iterative learning control: Practical implementation and automation, IEEE Trans. Ind. Electron., 69 (2022), 1858–1866. https://doi.org/10.1109/TIE.2021.3063866
    [4] L. Zhang, W. Chen, J. Liu, C. Wen, A robust adaptive iterative learning control for trajectory tracking of permanent-magnet spherical actuator, IEEE Trans. Ind. Electron., 63 (2015), 291–301. https://doi.org/10.1109/TIE.2015.2464186
    [5] D. Yang, K. Lee, H. Ahn, J. H. Lee, Experimental application of a quadratic optimal iterative learning control method for control of wafer temperature uniformity in rapid thermal processing, IEEE Trans. Semicond. Manuf., 16 (2003), 36–44. https://doi.org/10.1109/TSM.2002.807740
    [6] M. Zhou, J. Wang, D. Shen, Iterative learning control for continuous-time multi-agent differential inclusion systems with full learnability, Chaos, Solitons Fractals, 174 (2023), 113895. https://doi.org/10.1016/j.chaos.2023.113895
    [7] M. Naserian, A. Ramazani, A. Khaki-Sedigh, A. Moarefianpour, Fast terminal sliding mode control for a nonlinear multi-agent robot system with disturbance, Syst. Sci. Control Eng., 8 (2020), 328–338. https://doi.org/10.1080/21642583.2020.1764408
    [8] N. Pavlov, A. Petrochenkov, A. Romodin, A multi-agent approach for modeling power-supply systems with MicroGrid, Russ. Electr. Eng., 92 (2021), 637–643. https://doi.org/10.3103/S1068371221110110
    [9] Y. Zhang, Y. Zhou, H. Lu, H. Fujita, Cooperative multi-agent actor–critic control of traffic network flow based on edge computing, Future Gener. Comput. Syst., 123 (2021), 128–141. https://doi.org/10.1016/j.future.2021.04.018
    [10] Y. Tian, Y. Chang, F. H. Arias, C. Nieto-Granda, J. P. How, L. Carlone, Kimera-multi: Robust, distributed, dense metric-semantic SLAM for multi-robot systems, IEEE Trans. Rob., 38 (2022), 2022–2038. https://doi.org/10.1109/TRO.2021.3137751
    [11] P. Wang, M. Govindarasu, Multi-agent based attack resilient system integrity protection for smart grid, IEEE Trans. Smart Grid, 11 (2020), 3447–3456. https://doi.org/10.1109/TSG.2020.2970755
    [12] H. Ahn, Y. Chen, Iterative learning control for multi-agent formation, in 2009 ICROS-SICE International Joint Conference, IEEE, (2009), 3111–3116.
    [13] D. Meng, Y. Jia, Formation control for multi-agent systems through an iterative learning design approach, Int. J. Robust Nonlinear Control, 24 (2014), 340–361. https://doi.org/10.1002/rnc.2890
    [14] X. Dai, C. Wang, S. Tian, Q. Huang, Consensus control via iterative learning for distributed parameter models multi-agent systems with time-delay, J. Franklin Inst., 356 (2019), 5240–5259. https://doi.org/10.1016/j.jfranklin.2019.05.015
    [15] Y. Chen, J. Sun, Distributed optimal control for multi-agent systems with obstacle avoidance, Neurocomputing, 173 (2016), 2014–2021. https://doi.org/10.1016/j.neucom.2015.08.085
    [16] Z. Hou, S. Jin, A novel data-driven control approach for a class of discrete-time nonlinear systems, IEEE Trans. Control Syst. Technol., 19 (2010), 1549–1558. https://doi.org/10.1109/TCST.2010.2093136
    [17] X. Bu, Q. Yu, Z. Hou, W. Qian, Model free adaptive iterative learning consensus tracking control for a class of nonlinear multi-agent systems, IEEE Trans. Syst. Man Cybern.: Syst., 49 (2017), 677–686. https://doi.org/10.1109/TSMC.2017.2734799
    [18] C. Liu, X. Ruan, Input-output-driven gain-adaptive iterative learning control for linear discrete-time-invariant systems, Int. J. Robust Nonlinear Control, 31 (2021), 8551–8568. https://doi.org/10.1002/rnc.5753
    [19] Y. Geng, S. Wang, X. Ruan, The convergence of data-driven optimal iterative learning control for linear multi-phase batch processes, Mathematics, 10 (2022), 2304. https://doi.org/10.3390/math10132304
    [20] N. Lin, R. Chi, B. Huang, Event-triggered ILC for optimal consensus at specified data points of heterogeneous networked agents with switching topologies, IEEE Trans. Cybern., 52 (2021), 8951–8961. https://doi.org/10.1109/TCYB.2021.3054421
    [21] Y. Zhang, J. Liu, X. Ruan, Iterative learning control for uncertain nonlinear networked control systems with random packet dropout, Int. J. Robust Nonlinear Control, 29 (2019), 3529–3546. https://doi.org/10.1002/rnc.4568
    [22] J. Liu, W. Chen, X. Ruan, Y. Zheng, Data-based iterative learning mechanism for unknown input-output coupling parameters/matrices, Int. J. Robust Nonlinear Control, 30 (2020), 1275–1297. https://doi.org/10.1002/rnc.4827
    [23] A. Hock, A. Schoellig, Distributed iterative learning control for multi-agent systems: Theoretic developments and application to formation flying, Auton. Robot., 43 (2019), 1989–2010. https://doi.org/10.1007/s10514-019-09845-4
    [24] P. Pakshin, A. Koposov, J. Emelianova, Iterative learning control of a multiagent system under random perturbations, Autom. Remote Control, 81 (2020), 483–502. https://doi.org/10.1134/S0005117920030078
    [25] G. Ortega, F. Muñoz, E. Quesada, L. R. Garcia, P. Ordaz, Implementation of leader-follower linear consensus algorithm for coordination of multiple aircrafts, in 2015 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED-UAS), IEEE, (2015), 25–32. https://doi.org/10.1109/RED-UAS.2015.7440987
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
