
Finite/fixed-time synchronization for complex networks via quantized adaptive control

  • Received: 01 May 2020 Revised: 01 July 2020 Published: 23 September 2020
  • 94C30

  • In this paper, a unified theoretical method is presented to implement finite/fixed-time synchronization control for complex networks with uncertain inner coupling. A quantized controller and a quantized adaptive controller are designed to reduce the control cost and save channel resources, respectively. By means of the linear matrix inequality technique, two sufficient conditions are proposed to guarantee that the synchronization error system of the complex networks is finite/fixed-time stable, by virtue of Lyapunov stability theory. Moreover, two types of settling time, dependent on and independent of the initial values respectively, are given. Finally, the effectiveness of the control strategy is verified by a simulation example.

    Citation: Yu-Jing Shi, Yan Ma. Finite/fixed-time synchronization for complex networks via quantized adaptive control[J]. Electronic Research Archive, 2021, 29(2): 2047-2061. doi: 10.3934/era.2020104




    The singular value decomposition (SVD) is a key tool in matrix theory and numerical linear algebra, and plays an important role in many areas of scientific computing and engineering applications, such as least squares problems [1], data mining [2], pattern recognition [3], image and signal processing [4,5], statistics, engineering, physics and so on (see [1,2,3,4,5,6]).

    Research on efficient numerical methods for computing the singular values of a matrix has long been an active topic, and many practical algorithms have been proposed. By applying the symmetric QR method to $A^TA$, Golub and Kahan [7] derived an efficient algorithm known as the Golub-Kahan SVD algorithm. Gu and Eisenstat [8] introduced a stable and efficient divide-and-conquer algorithm, as well as a bisection algorithm, for computing the SVD of a lower bidiagonal matrix; see also [1]. Drmac and Veselic [9] gave a superior variant of the Jacobi algorithm and proposed a new one-sided Jacobi SVD algorithm for triangular matrices computed by rank-revealing QR factorizations. Several researchers, such as Zha [10], Bojanczyk [11], Shirokov [12] and Novaković [13], developed methods for the hyperbolic SVD problem. A Cross-Product Free (CPF) Jacobi-Davidson (JD) type method, referred to as the CPF-JDGSVD method, has been proposed to compute a partial generalized singular value decomposition (GSVD) of a large matrix pair (A,B) [14]. Many good references on these topics include [1,2,7,8,9,15,16,17,18,19,20,21,22,23,24,25,26] and the references therein.

    There are important relationships between the SVD of a matrix $A$ and the Schur decompositions of the symmetric matrices $A^TA$, $AA^T$ and $\begin{pmatrix} 0 & A \\ A^T & 0 \end{pmatrix}$. These connections to the symmetric eigenproblem allow us to adapt the mathematical and algorithmic developments for the eigenproblem to the singular value problem, and most of the algorithms mentioned above are indeed analogs of algorithms for computing eigenvalues of symmetric matrices. All of them except the Jacobi algorithm must first reduce $A$ to bidiagonal form, which becomes very costly when the matrix is large. The Jacobi algorithm, on the other hand, is rather slow, although various improvements have been proposed (see [9]).

    In some applications, such as compressed sensing and matrix completion [3,27], or computing the 2-norm of a matrix, only a few singular values of a large matrix are required. In these cases the methods mentioned above, which compute the full SVD, are not very suitable. If only the largest singular value and its corresponding singular vectors are needed, the power method, which approximates the largest eigenpair of an $n\times n$ symmetric matrix, is a more natural choice.

    Computing the largest singular value and the corresponding singular vectors of a matrix is one of the most important algorithmic tasks underlying many applications, including low-rank approximation, PCA, spectral clustering, dimensionality reduction, matrix completion and topic modeling. This paper considers the problem of computing the largest singular value and the corresponding singular vectors of a matrix. We propose an alternating direction method, a fast general-purpose method for computing the largest singular vectors of a matrix when the target matrix can only be accessed through inaccurate matrix-vector products. In other words, the proposed method is analogous to the well-known power method, but has much better numerical behaviour. Numerical experiments show that the new method is more effective than the power method in some cases.

    The rest of the paper is organized as follows. Section 2 contains some notations and some general results that are used in subsequent sections. In Section 3 we propose the alternating direction power-method in detail and give its convergence analysis. In Section 4, we use some experiments to show the effectiveness of the new method. Finally, we end the paper with a concluding remark in Section 5.

    The following are some notations and definitions we will use later.

    We use $\mathbb{R}^{m\times n}$ to denote the set of all real $m\times n$ matrices, and $\mathbb{R}^n$ the set of real $n\times 1$ vectors. The symbol $I$ denotes the $n\times n$ identity matrix. For a vector $x\in\mathbb{R}^n$, $\|x\|_2$ denotes its 2-norm. For a matrix $A\in\mathbb{R}^{m\times n}$, $A^T$ denotes the transpose of $A$, $\mathrm{rank}(A)$ the rank of $A$, $\|A\|_2$ the 2-norm of $A$ and $\|A\|_F$ the Frobenius norm of $A$. $\mathrm{diag}(a_1,a_2,\ldots,a_n)$ represents the diagonal matrix with diagonal elements $a_1,a_2,\ldots,a_n$.

    If $A\in\mathbb{R}^{m\times n}$, then there exist two orthogonal matrices

    $$U=[u_1,u_2,\ldots,u_m]\in\mathbb{R}^{m\times m}\quad\text{and}\quad V=[v_1,v_2,\ldots,v_n]\in\mathbb{R}^{n\times n}$$

    such that

    $$U^TAV=\begin{pmatrix}\Sigma_r & 0\\ 0 & 0\end{pmatrix}, \qquad (2.1)$$

    where $\Sigma_r=\mathrm{diag}(\sigma_1,\sigma_2,\ldots,\sigma_r)$, $\sigma_1\ge\sigma_2\ge\cdots\ge\sigma_r>0$ and $r=\mathrm{rank}(A)\le\min\{m,n\}$. The $\sigma_i$ are the singular values of $A$, and the vectors $u_i$ and $v_i$ are the $i$th left and right singular vectors respectively. We also have the SVD expansion

    $$A=\sum_{i=1}^r\sigma_i u_i v_i^T.$$

    This is the well-known singular value decomposition (SVD) theorem [2].
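    As a sanity check, the expansion can be reproduced on a small hand-built example. The following pure-Python sketch uses illustrative singular values and orthonormal vectors, not data from the paper:

```python
# Check the SVD expansion A = sum_i sigma_i u_i v_i^T on a hand-built example
# with orthonormal singular vectors (illustrative data).
import math

s = [3.0, 1.0]                                   # sigma_1 >= sigma_2 > 0
r2 = 1.0 / math.sqrt(2.0)
U = [[r2, r2], [r2, -r2]]                        # columns u_1, u_2 (orthonormal)
V = [[1.0, 0.0], [0.0, 1.0]]                     # columns v_1, v_2

# A[i][j] = sum_k sigma_k * u_k[i] * v_k[j]
A = [[sum(s[k] * U[i][k] * V[j][k] for k in range(2)) for j in range(2)]
     for i in range(2)]

# Since v_1 is the top right singular vector, ||A v_1||_2 should equal sigma_1 = 3.
Av1 = [sum(A[i][j] * V[j][0] for j in range(2)) for i in range(2)]
norm_Av1 = math.sqrt(sum(x * x for x in Av1))
```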

    Lemma 2.1. (see Lemma 1.7 and Theorem 3.3 of [1]) Let $A\in\mathbb{R}^{m\times n}$. Then $\|A\|_2=\|A^T\|_2=\sqrt{\|AA^T\|_2}=\sigma_1$, where $\sigma_1$ is the largest singular value of $A$.

    Lemma 2.2. (see Theorem 3.3 of [1]) Let $A\in\mathbb{R}^{m\times n}$ and let $\sigma_i$, $u_i$, $v_i$, $i=1,2,\ldots,r$, be the singular values and the corresponding singular vectors of $A$. Then

    $$AA^Tu_i=\sigma_i^2u_i,\qquad A^TAv_i=\sigma_i^2v_i,\qquad i=1,2,\ldots,r.$$

    Lemma 2.3. (refer to Section 2.4 of [2] or Theorem 3.3 of [1]) Assume the matrix $A\in\mathbb{R}^{m\times n}$ has rank $r>k$ and the SVD of $A$ is given by (2.1). Then the matrix approximation problem $\min_{\mathrm{rank}(Z)=k}\|A-Z\|_F$ has the solution

    $$Z=A_k=U_k\Sigma_kV_k^T,$$

    where $U_k=(u_1,\ldots,u_k)$, $V_k=(v_1,\ldots,v_k)$ and $\Sigma_k=\mathrm{diag}(\sigma_1,\ldots,\sigma_k)$.

    Let $A\in\mathbb{R}^{n\times n}$. The power method for computing the eigenvalue of $A$ largest in modulus is as follows (see Algorithm 4.1 of [1]).

    Power method:
    (1) Choose an initial vector $x_0\in\mathbb{R}^n$. For $k=0,1,\ldots$ until convergence;
    (2) Compute $y_{k+1}=Ax_k$;
    (3) Compute $x_{k+1}=y_{k+1}/\|y_{k+1}\|_2$;
    (4) Compute $\lambda_{k+1}=x_{k+1}^TAx_{k+1}$;
    (5) Set $k=k+1$ and go to (2).

    The power method is very simple and easy to implement and is applied in many applications, for example, for the PCA problem (see [28]).
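    As a concrete illustration, the steps above can be sketched in a few lines of pure Python; the matrix, starting vector and function names below are illustrative choices, not from the paper:

```python
# Minimal power method sketch: steps (2)-(4) of the algorithm above.
# The matrix A, starting vector x0 and tolerances are illustrative.

def matvec(A, x):
    """Matrix-vector product A x, with A stored as a list of rows."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def power_method(A, x0, tol=1e-10, max_iter=1000):
    x, lam = x0, 0.0
    for _ in range(max_iter):
        y = matvec(A, x)                              # (2) y_{k+1} = A x_k
        nrm = sum(yi * yi for yi in y) ** 0.5
        x = [yi / nrm for yi in y]                    # (3) x_{k+1} = y_{k+1} / ||y_{k+1}||_2
        lam_new = sum(xi * yi for xi, yi in zip(x, matvec(A, x)))  # (4) Rayleigh quotient
        if abs(lam_new - lam) < tol:                  # stop when the estimate settles
            break
        lam = lam_new
    return lam_new, x

A = [[2.0, 1.0], [1.0, 2.0]]    # symmetric, eigenvalues 3 and 1
lam, x = power_method(A, [1.0, 0.0])
```

    Starting from $x_0=(1,0)^T$, which has a nonzero projection on the dominant eigenvector $(1,1)^T/\sqrt{2}$, the iteration converges to the dominant eigenvalue 3.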

    To compute the largest singular value and the corresponding singular vectors of $A$, we can apply the power method to $AA^T$ or $A^TA$, without actually forming $AA^T$ or $A^TA$.

    Power method (for the largest singular value):
    (1) Choose initial vectors $u_0\in\mathbb{R}^m$ and $v_0\in\mathbb{R}^n$. For $k=0,1,\ldots$ until convergence;
    (2) Compute $y_{k+1}=A(A^Tu_k)$, $z_{k+1}=A^T(Av_k)$;
    (3) Compute $u_{k+1}=y_{k+1}/\|y_{k+1}\|_2$, $v_{k+1}=z_{k+1}/\|z_{k+1}\|_2$;
    (4) Compute $\lambda_{k+1}=u_{k+1}^TAA^Tu_{k+1}$ and $\sigma_{k+1}=\sqrt{\lambda_{k+1}}$;
    (5) Set $k=k+1$ and go to (2).
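    A matching pure-Python sketch of this variant, which applies the products $A(A^Tu)$ and $A^T(Av)$ without ever forming $AA^T$ or $A^TA$ (the test matrix and starting vectors are illustrative choices):

```python
# Power method for the largest singular value: work with A(A^T u) and A^T(A v)
# so that AA^T and A^TA are never formed. The matrix and vectors are illustrative.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matvec_t(A, x):  # A^T x without forming A^T explicitly
    m, n = len(A), len(A[0])
    return [sum(A[i][j] * x[i] for i in range(m)) for j in range(n)]

def normalize(y):
    nrm = sum(yi * yi for yi in y) ** 0.5
    return [yi / nrm for yi in y]

def power_method_sv(A, u0, v0, tol=1e-12, max_iter=10000):
    u, v, sigma = u0, v0, 0.0
    for _ in range(max_iter):
        u = normalize(matvec(A, matvec_t(A, u)))      # u_{k+1} from A(A^T u_k)
        v = normalize(matvec_t(A, matvec(A, v)))      # v_{k+1} from A^T(A v_k)
        lam = sum(ui * wi for ui, wi in zip(u, matvec(A, matvec_t(A, u))))  # u^T AA^T u
        sigma_new = lam ** 0.5                        # sigma_{k+1} = sqrt(lambda_{k+1})
        if abs(sigma_new - sigma) < tol:
            break
        sigma = sigma_new
    return sigma_new, u, v

A = [[3.0, 0.0], [0.0, 1.0], [0.0, 0.0]]              # singular values 3 and 1
sigma, u, v = power_method_sv(A, [1.0, 1.0, 1.0], [1.0, 1.0])
```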

    However, the power method costs extra operations when both singular vectors are needed and, in some cases, it converges very slowly. So, in the next section, we propose a new iterative method for computing the largest singular value and the corresponding singular vectors of a matrix, which is similar to the power method but needs fewer operations per iteration.

    In this section, we will introduce an alternating direction power iteration method. The new method is based on an important property of the SVD.

    From Lemma 2.3, it is known that the largest singular value $\sigma_1$ and the corresponding singular vectors $u_1$, $v_1$ of $A$ satisfy the following condition

    $$\|A-\sigma_1u_1v_1^T\|_F=\min_{u,v}\|A-uv^T\|_F,$$

    where $u\in\mathbb{R}^m$ and $v\in\mathbb{R}^n$. Thus the problem of computing $\sigma_1$, $u_1$ and $v_1$ is equivalent to solving the optimization problem

    $$\min_{u,v}\|A-uv^T\|_F.$$

    Let $f(u,v)=\frac{1}{2}\|A-uv^T\|_F^2$. For the sake of simplicity, we will solve the equivalent optimization problem

    $$\min_{u,v}f(u,v)=\min_{u,v}\frac{1}{2}\|A-uv^T\|_F^2, \qquad (3.1)$$

    where $u\in\mathbb{R}^m$ and $v\in\mathbb{R}^n$. However, problem (3.1) is difficult to solve directly since $f$ is not jointly convex in $(u,v)$. Fortunately, it is convex in $u$ for fixed $v$ and convex in $v$ for fixed $u$, so we can use an alternating direction method to solve it. Alternating minimization is widely used in optimization problems due to its simplicity, low memory requirements and flexibility (see [20,29]). In the following we apply an alternating method to solve the unconstrained optimization problem (3.1).

    Suppose $v_k$ is known; then (3.1) reduces to the unconstrained optimization problem

    $$\min_u f(u,v_k)=\min_u\frac{1}{2}\|A-uv_k^T\|_F^2. \qquad (3.2)$$

    Problem (3.2) can be solved by many efficient methods, such as the steepest descent method, Newton's method, the conjugate gradient (CG) method and so on (see [29]). Because Newton's method is simple and converges fast for this problem, we choose it. By direct calculation, we get

    $$\nabla_uf=-(A-uv^T)v,\qquad \nabla_u^2f=\|v\|_2^2I.$$

    Then, applying Newton's method, we get

    $$u_{k+1}=u_k-(\nabla_u^2f)^{-1}\nabla_uf=u_k+\frac{1}{\|v_k\|_2^2}(A-u_kv_k^T)v_k=\frac{1}{\|v_k\|_2^2}Av_k.$$
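    Because $f(\cdot,v_k)$ is a strictly convex quadratic in $u$, this single Newton step lands exactly on the minimizer. A quick numerical check of that claim (the small matrix and vector below are arbitrary illustrative data):

```python
# Verify that u* = A v / ||v||_2^2 zeroes the gradient grad_u f = -(A - u v^T) v
# of f(u, v) = 0.5 * ||A - u v^T||_F^2. The data below are arbitrary.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
v = [0.5, -1.0]

nv2 = sum(vi * vi for vi in v)
u_star = [yi / nv2 for yi in matvec(A, v)]    # u* = A v / ||v||_2^2

# gradient of f with respect to u at (u*, v): (u* v^T - A) v, should vanish
grad = [sum((u_star[i] * v[j] - A[i][j]) * v[j] for j in range(len(v)))
        for i in range(len(A))]
```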

    Alternatively, when $u_{k+1}$ is known, problem (3.1) reduces to

    $$\min_v f(u_{k+1},v)=\min_v\frac{1}{2}\|A-u_{k+1}v^T\|_F^2. \qquad (3.3)$$

    Also,

    $$\nabla_vf=-(A-uv^T)^Tu,\qquad \nabla_v^2f=\|u\|_2^2I,$$

    so applying Newton's method yields

    $$v_{k+1}=v_k-(\nabla_v^2f)^{-1}\nabla_vf=v_k+\frac{1}{\|u_{k+1}\|_2^2}(A-u_{k+1}v_k^T)^Tu_{k+1}=\frac{1}{\|u_{k+1}\|_2^2}A^Tu_{k+1}.$$

    Solving (3.2) and (3.3) to high accuracy is both computationally expensive and of limited value when $u_k$ and $v_k$ are far from the stationary points. So, in the following, we apply the two iterations alternately. The alternating directional Newton method for solving (3.1) is thus

    $$\begin{cases} u_{k+1}=\dfrac{1}{\|v_k\|_2^2}Av_k,\\[2mm] v_{k+1}=\dfrac{1}{\|u_{k+1}\|_2^2}A^Tu_{k+1}, \end{cases} \qquad k=0,1,\ldots, \qquad (3.4)$$

    where $u_0\in\mathbb{R}^m$ and $v_0\in\mathbb{R}^n$ are initial guesses. At each iteration, only two matrix-vector multiplications are required, so the cost is about $4mn$ operations, which is less than that of the power method.

    Next, the convergence analysis of (3.4) is provided.

    Theorem 3.1. Let $A\in\mathbb{R}^{m\times n}$ and let $\sigma_1$, $u_1$, $v_1$ be the largest singular value and the corresponding singular vectors of $A$ respectively. If the initial guesses $u_0\in\mathbb{R}^m$ and $v_0\in\mathbb{R}^n$ have nonzero projections on $u_1$ and $v_1$, then the iteration (3.4) is convergent, with

    $$\lim_{k\to\infty}\frac{u_k}{\|u_k\|_2}=u_1,\qquad \lim_{k\to\infty}\frac{v_k}{\|v_k\|_2}=v_1,$$

    and

    $$\lim_{k\to\infty}\|u_k\|_2\|v_k\|_2=\sigma_1.$$

    Proof. From (3.4) we can deduce that

    $$u_{k+1}=\frac{1}{\|v_k\|_2^2}Av_k=\frac{\|u_k\|_2^2}{\|A^Tu_k\|_2^2}AA^Tu_k,\qquad
    v_{k+1}=\frac{1}{\|u_{k+1}\|_2^2}A^Tu_{k+1}=\frac{\|v_k\|_2^2}{\|Av_k\|_2^2}A^TAv_k.$$

    As in the proof of the power method (see [2]), if the projections of $u_0$ on $u_1$ and of $v_0$ on $v_1$ are not zero, then we have

    $$\lim_{k\to\infty}\frac{u_k}{\|u_k\|_2}=u_1,\qquad \lim_{k\to\infty}\frac{v_k}{\|v_k\|_2}=v_1.$$

    On the other hand, we have

    $$\|u_{k+1}\|_2\|v_{k+1}\|_2=\frac{1}{\|v_k\|_2^2}\|Av_k\|_2\cdot\frac{\|v_k\|_2^2}{\|Av_k\|_2^2}\|A^TAv_k\|_2=\frac{\|A^TAv_k\|_2}{\|Av_k\|_2}\le\|A^T\|_2=\sigma_1.$$

    Thus the sequence $\{\|u_k\|_2\|v_k\|_2\}$ is bounded from above. By

    $$\|u_{k+1}\|_2\|v_{k+1}\|_2=\frac{1}{\|v_k\|_2^2}\|Av_k\|_2\cdot\frac{1}{\|u_{k+1}\|_2^2}\|A^Tu_{k+1}\|_2,$$

    we can conclude that, as $k\to\infty$,

    $$\|u_{k+1}\|_2^2\|v_{k+1}\|_2\|v_k\|_2=\left\|A\frac{v_k}{\|v_k\|_2}\right\|_2\left\|A^T\frac{u_{k+1}}{\|u_{k+1}\|_2}\right\|_2\to\|Av_1\|_2\|A^Tu_1\|_2=\sigma_1^2.$$

    Therefore,

    $$\lim_{k\to\infty}\|u_k\|_2\|v_k\|_2=\sigma_1.$$

    Based on the discussion above, we now state the alternating direction power-method for computing the largest singular value and the corresponding singular vectors of a matrix.

    Alternating Direction Power-Method (ADPM):
    (1) Choose an initial vector $v_0\in\mathbb{R}^n$. For $k=0,1,\ldots$ until convergence;
    (2) Compute $u_{k+1}=Av_k/\|v_k\|_2^2$;
    (3) Compute $v_{k+1}=A^Tu_{k+1}/\|u_{k+1}\|_2^2$;
    (4) Compute $\mu_{k+1}=\|u_{k+1}\|_2\|v_{k+1}\|_2$;
    (5) Set $k=k+1$ and go to (2).
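    For completeness, a pure-Python sketch of the ADPM steps above with a stopping rule on the change in $\mu_{k+1}$ (the paper's experiments use MATLAB; the matrix, starting vector and tolerances here are illustrative choices):

```python
# ADPM sketch: steps (2)-(4) above, stopping when the change in
# mu_{k+1} = ||u_{k+1}||_2 ||v_{k+1}||_2 falls below a tolerance.
# The matrix, starting vector and tolerances are illustrative.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matvec_t(A, x):  # A^T x without forming A^T explicitly
    m, n = len(A), len(A[0])
    return [sum(A[i][j] * x[i] for i in range(m)) for j in range(n)]

def norm2(x):
    return sum(xi * xi for xi in x) ** 0.5

def adpm(A, v0, tol=1e-12, max_iter=9000):
    v, mu = v0, 0.0
    for _ in range(max_iter):
        nv2 = sum(vi * vi for vi in v)
        u = [yi / nv2 for yi in matvec(A, v)]      # (2) u_{k+1} = A v_k / ||v_k||_2^2
        nu2 = sum(ui * ui for ui in u)
        v = [zi / nu2 for zi in matvec_t(A, u)]    # (3) v_{k+1} = A^T u_{k+1} / ||u_{k+1}||_2^2
        mu_new = norm2(u) * norm2(v)               # (4) mu_{k+1} approximates sigma_1
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu_new, u, v

A = [[2.0, 0.0], [0.0, 1.0], [0.0, 0.0]]           # sigma_1 = 2
sigma, u, v = adpm(A, [1.0, 1.0])
```

    Note that each pass forms only the two matrix-vector products $Av_k$ and $A^Tu_{k+1}$, matching the roughly $4mn$ operations per iteration noted for (3.4).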

    Here, we use several examples to show the effectiveness of the alternating direction power-method (ADPM). We compare ADPM with the power method (PM) and present numerical results in terms of the number of iterations (IT), the CPU time in seconds (CPU) and the residual (RES), where the CPU time is averaged over multiple runs measured with the built-in MATLAB functions tic/toc at each iteration step, and

    $$\mathrm{RES}=\big|\,\|u_{k+1}\|_2\|v_{k+1}\|_2-\|u_k\|_2\|v_k\|_2\,\big|.$$

    The initial vectors $u_0$ and $v_0$ are chosen randomly by the MATLAB statements u0 = rand(m,1) and v0 = rand(n,1). In our implementation, all iterations are performed in MATLAB (R2016a) on the same workstation, with an Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz, 16GB of memory and a 32-bit operating system, and are terminated when the current iterate satisfies $\mathrm{RES}<10^{-12}$ or the number of iterations exceeds 9000; the latter case is denoted by '-'.

    Experiment 4.1. In the first experiment, we generate random matrices with uniformly distributed elements by the MATLAB statement

    A=rand(m,n).

    For different sizes of m and n, we apply the power method and the alternating direction power-method with numerical results reported in Table 1.

    Table 1.  Numerical results of Experiment 4.1.
    m n Method IT CPU RES RATIO(%)
    500 100 PM 7 0.001340 1.4211e-13
    ADPM 4 0.000471 1.4172e-15 35.15
    500 200 PM 7 0.002753 5.6843e-14
    ADPM 4 0.000641 2.8422e-14 23.28
    1000 200 PM 7 0.015660 2.8422e-14
    ADPM 4 0.003914 1.7053e-13 24.99
    1000 500 PM 6 0.017024 2.8422e-13
    ADPM 3 0.004934 2.8422e-13 28.98
    2000 500 PM 6 0.030742 5.6843e-14
    ADPM 3 0.016660 1.1369e-13 54.19
    2000 1000 PM 6 0.060125 4.5475e-13
    ADPM 3 0.014466 1.1369e-13 24.05


    Experiment 4.2. In this experiment, we generate random matrices with normally distributed elements by

    A=randn(m,n).

    For different sizes of m and n, we apply the power method and the alternating direction power-method to A.

    Numerical results are reported in Table 2.

    Table 2.  Numerical results of Experiment 4.2.
    m n Method IT CPU RES RATIO (%)
    500 100 PM 711 0.084958 9.2371e-13
    ADPM 372 0.046376 9.6634e-13 54.59
    500 200 PM 652 0.084715 8.8818e-13
    ADPM 449 0.058401 9.3081e-13 68.94
    1000 200 PM 1124 0.159432 9.9476e-13
    ADPM 700 0.121118 9.7344e-13 75.96
    1000 500 PM 1288 1.122289 9.8055e-13
    ADPM 782 0.378765 9.8765e-13 33.75
    2000 500 PM 1744 3.627609 9.6634e-13
    ADPM 961 1.221796 9.0949e-13 33.68
    2000 1000 PM 4364 20.032671 9.9476e-13
    ADPM 2377 5.696365 9.0949e-13 28.42


    Experiment 4.3. In this experiment, we use some test matrices of size $n\times n$ from the University of Florida sparse matrix collection [30]. Numerical results are reported in Table 3.

    Table 3.  Numerical results of Experiment 4.3.
    Matrix size Method IT CPU RES RATIO(%)
    lshp1009 1009 PM 1488 0.064107 9.9298e-13
    ADPM 745 0.019718 9.9654e-13 30.76
    dwt_1005 1035 PM 40 0.005109 7.2120e-13
    ADPM 35 0.003844 6.2172e-13 75.24
    bcsstk13 2003 PM 1519 0.511403 9.7877e-13
    ADPM 842 0.174025 9.9920e-13 34.03
    dwt_2680 2680 PM 514 0.070507 1.4172e-09
    ADPM 239 0.046361 9.7167e-13 65.75
    rw5151 5151 PM 1006 0.128023 9.8632e-13
    ADPM 590 0.057257 9.7056e-13 44.72
    g7jac040 11790 PM 26 0.038169 0
    ADPM 17 0.012170 0 31.88
    epb1 14734 PM - - - -
    ADPM 6132 1.541585 9.9926e-13 -


    In particular, compared with the cost of the power method, the cost of the alternating direction power-method is substantially reduced, to as little as 23.28% of the power method's CPU time. The column "RATIO" in Tables 1-3, defined by

    $$\mathrm{RATIO}=\frac{\text{CPU of ADPM}}{\text{CPU of PM}}\times 100\%,$$

    shows this effectiveness.

    To illustrate the numerical behavior of the two methods, their cost curves are shown in Figures 1-3.

    Figure 1.  Comparison curves of the PM and ADPM methods for Experiment 4.1.
    Figure 2.  Comparison curves of the PM and ADPM methods for Experiment 4.2.
    Figure 3.  Comparison curves of the PM and ADPM methods for Experiment 4.3.

    From Tables 1-3, we can conclude that ADPM needs fewer iterations and less CPU time than the power method, so it is feasible and, in some cases, more effective.

    In this study, we have proposed an alternating direction power-method for computing the largest singular value and the corresponding singular vectors of a matrix. It is analogous to the power method but needs fewer operations per iteration thanks to the alternating technique. Convergence of the alternating direction power-method is proved under suitable conditions. Numerical experiments have shown that the alternating direction power-method is feasible and more effective than the power method in some cases.

    The authors are very much indebted to the anonymous referees for their helpful comments and suggestions, which greatly improved the original manuscript of this paper. The authors gratefully acknowledge the support of the NSF of Shanxi Province (201901D211423) and the scientific research project of Taiyuan University, China (21TYKY02).

    The authors declare that they have no conflict of interest.



  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)