
Parameter estimation and fractional derivatives of dengue transmission model

  • In this paper, we propose a parameter estimation approach for dengue fever transmission models using particle swarm optimization. The method is applied to estimate the parameters of host-vector and SIR-type dengue transmission models from cumulative dengue patient data for East Java province, Indonesia. With the estimated parameter values, the basic reproduction numbers of both models are greater than one: R0 = 1.4159 for the SIR model and R0 = 1.1474 for the vector-host model. We then formulate the models with the fractional Atangana-Baleanu derivative, whose nonlocal and nonsingular kernel has proved effective in many real-life problems. A numerical procedure for the solution of the SIR model is presented, and specific numerical values are used to obtain graphical results for both the SIR and vector-host models. We show that the vector-host model fits the data better than the SIR model.

    Citation: Windarto, Muhammad Altaf Khan, Fatmawati. Parameter estimation and fractional derivatives of dengue transmission model[J]. AIMS Mathematics, 2020, 5(3): 2758-2779. doi: 10.3934/math.2020178



    The Sylvester equation

    AX+XB=C (1.1)

    appears frequently in many areas of applied mathematics. We refer readers to the elegant survey by Bhatia and Rosenthal [1] and the references therein for the history of the Sylvester equation and many interesting and important theoretical results. The Sylvester equation is important in a number of applications such as matrix eigenvalue decompositions [2,3], control theory [3,4,5], model reduction [6,7,8,9], mathematical physics, where it is used to construct exact solutions of nonlinear integrable equations [10], feature problems of slice semi-regular functions [11], and the numerical solution of matrix differential Riccati equations [12,13,14]. There are several numerical algorithms to compute the solution of the Sylvester equation. The standard ones are the Bartels-Stewart algorithm [15] and the Hessenberg-Schur method first described by Enright [14], but more often attributed to Golub, Nash and Van Loan [16]. Other computationally efficient approaches for the case that both A and B are stable, i.e., both A and B have all their eigenvalues in the open left half plane, are the sign function method [17], the Smith method [18] and ADI iteration methods [19,20,21,22]. All these methods are efficient for small, dense matrices A and B.
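    For small, dense A and B, the solution of (1.1) can also be obtained by brute-force vectorization, since vec(AX + XB) = (I ⊗ A + B^T ⊗ I) vec(X). The following Matlab sketch illustrates this; it is only a baseline for tiny problems (the linear system has size n^2), not one of the structured algorithms cited above, and the data are randomly generated for illustration.

```matlab
% Direct solution of AX + XB = C by vectorization (tiny, dense case only).
n = 5;
A = randn(n, n);  B = randn(n, n);  C = randn(n, n);
K = kron(eye(n), A) + kron(B', eye(n));   % vec(AX + XB) = K * vec(X)
x = K \ C(:);                             % solve the n^2-by-n^2 linear system
X = reshape(x, n, n);
residual = norm(A*X + X*B - C, 'fro')     % should be near machine precision
```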

    The recent interest is directed more towards the large and sparse matrices A and B, and C with low rank. For the dense A and B, the approach based on the sign function method is suggested in [23] that exploits the low rank structure of C. This approach is further used in [24] in order to solve the large scale Sylvester equation with sparse A and B, i.e., the matrices A and B can be represented by O(nlog(n)) data. Problems for the sensitivity of the solution of the Sylvester equation are also widely studied. There are several books that contain the results for these problems [25,26,27].

    In this paper, we focus our attention on the multiple constrained least squares solution of the Sylvester equation, that is, the following multiple constrained least squares problem:

    $\min_{X^T=X,\ L\leq X\leq U,\ \lambda_{\min}(X)\geq\varepsilon>0}\ f(X)=\frac{1}{2}\|AX+XB-C\|^2 \quad (1.2)$

    where $A,B,C,L$ and $U$ are given $n\times n$ real matrices, $X$ is an $n\times n$ real symmetric matrix which we wish to find, $\lambda_{\min}(X)$ denotes the smallest eigenvalue of the symmetric matrix $X$, and $\varepsilon$ is a given positive constant. The inequality $X\leq Y$, for any two real matrices of the same size, means that $X_{ij}\leq Y_{ij}$, where $X_{ij}$ and $Y_{ij}$ denote the $ij$-th entries of the matrices $X$ and $Y$, respectively.
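    As a concrete illustration, the objective value and the three constraint blocks in (1.2) can be checked numerically as follows. This is only a sketch: the matrices A, B, C, L, U, the candidate X and the constant eps0 (standing for ε) are assumed to be already defined.

```matlab
% Objective value and feasibility check for a candidate X in problem (1.2).
f = @(X) 0.5 * norm(A*X + X*B - C, 'fro')^2;        % least squares objective

isSym  = norm(X - X', 'fro') < 1e-10;               % X^T = X
inBox  = all(X(:) >= L(:)) && all(X(:) <= U(:));    % L <= X <= U (entrywise)
lamMin = min(eig((X + X')/2));                      % smallest eigenvalue of X
eigOK  = lamMin >= eps0;                            % lambda_min(X) >= eps0 > 0
feasible = isSym && inBox && eigOK;
```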

    Least squares estimation of matrices under multiple constraints is widely used in mathematical economics, statistical data analysis, image reconstruction, recommendation problems and so on. Such problems differ from ordinary least squares problems in that the estimated matrices are usually required to be symmetric positive definite, bounded and, sometimes, to have some special construction patterns. For example, in the dynamic equilibrium model of an economy [28], one needs to estimate an aggregate demand function derived from a second order analysis of the utility function of individuals. This problem is formulated as finding the least squares solution of the matrix equation $AX=B$, where $A$ and $B$ are given and the fitted matrix $X$ is symmetric and bounded with smallest eigenvalue no less than a specified positive number, since, in a neighborhood of equilibrium, the utility function is approximated by a quadratic and strictly concave function determined by its Hessian matrix. Other examples, discussed in [29,30], are to find a symmetric positive definite patterned matrix closest to a sample covariance matrix and to find a symmetric, diagonally dominant matrix with positive diagonal closest to a given matrix. Based on the above analysis, we have a strong motivation to study the multiple constrained least squares problem (1.2).

    In this paper, we first transform the multiple constrained least squares problem (1.2) into an equivalent constrained optimization problem. Then, we give the necessary and sufficient conditions for the existence of a solution to the equivalent constrained optimization problem. Noting that the alternating direction method of multipliers (ADMM) is a one-step iterative method, we propose a multi-step alternating direction method of multipliers (MSADMM) for the multiple constrained least squares problem (1.2), and analyze the global convergence of the proposed algorithm. We give some numerical examples to illustrate the effectiveness of the proposed algorithm for the multiple constrained least squares problem (1.2) and list some problems that should be studied in the near future. We also give some numerical comparisons between MSADMM, ADMM and ADMM with Anderson acceleration (ACADMM).

    Throughout this paper, $R^{m\times n}$, $SR^{n\times n}$ and $SR^{n\times n}_{\geq 0}$ denote the sets of $m\times n$ real matrices, $n\times n$ symmetric matrices and $n\times n$ symmetric positive semidefinite matrices, respectively. $I_n$ stands for the $n\times n$ identity matrix. $A_+$ denotes the matrix with $ij$-th entry equal to $\max\{0,A_{ij}\}$. The inner product on $R^{m\times n}$ is defined as $\langle A,B\rangle=\mathrm{tr}(A^TB)=\sum_{ij}A_{ij}B_{ij}$ for all $A,B\in R^{m\times n}$, and the associated norm is the Frobenius norm, denoted by $\|A\|$. $P_{\Omega}(X)$ denotes the projection of the matrix $X$ onto the constraint set $\Omega$, that is, $P_{\Omega}(X)=\arg\min_{Z\in\Omega}\|Z-X\|$.
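    These notational conventions translate directly into code; a small sketch (the matrices here are randomly generated only to illustrate the identities):

```matlab
% Inner product, Frobenius norm and the (.)_+ operation from the notation above.
inner = @(A, B) trace(A' * B);            % <A,B> = tr(A^T B) = sum_ij A_ij B_ij
A1 = randn(4); B1 = randn(4);
gap   = abs(inner(A1, B1) - sum(sum(A1 .* B1)));  % the two expressions agree (gap ~ 0)
nrmA  = sqrt(inner(A1, A1));              % Frobenius norm ||A1||
Aplus = max(A1, 0);                       % (A1)_+ : entrywise max{0, (A1)_ij}
```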

    In this section, we give an existence theorem for a solution of the multiple constrained least squares problem (1.2) and some theoretical results for the optimization problems which are useful for discussions in the next sections.

    Theorem 2.1. The matrix $\tilde{X}$ is a solution of the multiple constrained least squares problem (1.2) if and only if there exist matrices $\tilde{\Lambda}_1,\tilde{\Lambda}_2$ and $\tilde{\Lambda}_3$ such that the following conditions (2.1)–(2.4) are satisfied.

    $A^T(A\tilde{X}+\tilde{X}B-C)+(A\tilde{X}+\tilde{X}B-C)B^T-\tilde{\Lambda}_1+\tilde{\Lambda}_2-\tilde{\Lambda}_3=0, \quad (2.1)$
    $\langle\tilde{\Lambda}_1,\tilde{X}-L\rangle=0,\quad \tilde{X}-L\geq 0,\quad \tilde{\Lambda}_1\geq 0, \quad (2.2)$
    $\langle\tilde{\Lambda}_2,U-\tilde{X}\rangle=0,\quad U-\tilde{X}\geq 0,\quad \tilde{\Lambda}_2\geq 0, \quad (2.3)$
    $\langle\tilde{\Lambda}_3+\tilde{\Lambda}_3^T,\tilde{X}-\varepsilon I_n\rangle=0,\quad \tilde{X}-\varepsilon I_n\in SR^{n\times n}_{\geq 0},\quad \tilde{\Lambda}_3+\tilde{\Lambda}_3^T\in SR^{n\times n}_{\geq 0}. \quad (2.4)$

    Proof. Obviously, the multiple constrained least squares problem (1.2) can be rewritten as

    $\min_{X\in SR^{n\times n}} F(X)=\frac{1}{2}\|AX+XB-C\|^2\quad \text{s.t.}\ X-L\geq 0,\ U-X\geq 0,\ X-\varepsilon I_n\in SR^{n\times n}_{\geq 0}. \quad (2.5)$

    Then, if $\tilde{X}$ is a solution to the constrained optimization problem (2.5), $\tilde{X}$ certainly satisfies the KKT conditions of the constrained optimization problem (2.5), and hence of the multiple constrained least squares problem (1.2). That is, there exist matrices $\tilde{\Lambda}_1,\tilde{\Lambda}_2$ and $\tilde{\Lambda}_3$ such that conditions (2.1)–(2.4) are satisfied.

    Conversely, assume that there exist matrices $\tilde{\Lambda}_1,\tilde{\Lambda}_2$ and $\tilde{\Lambda}_3$ such that conditions (2.1)–(2.4) are satisfied. Let

    $\bar{F}(X)=F(X)-\langle\tilde{\Lambda}_1,X-L\rangle-\langle\tilde{\Lambda}_2,U-X\rangle-\left\langle\frac{\tilde{\Lambda}_3+\tilde{\Lambda}_3^T}{2},X-\varepsilon I_n\right\rangle,$

    then, for any matrix $W\in SR^{n\times n}$, we have

    $\begin{aligned}\bar{F}(\tilde{X}+W)&=\frac{1}{2}\|A(\tilde{X}+W)+(\tilde{X}+W)B-C\|^2-\langle\tilde{\Lambda}_1,\tilde{X}+W-L\rangle-\langle\tilde{\Lambda}_2,U-\tilde{X}-W\rangle-\left\langle\frac{\tilde{\Lambda}_3+\tilde{\Lambda}_3^T}{2},\tilde{X}+W-\varepsilon I_n\right\rangle\\&=\bar{F}(\tilde{X})+\frac{1}{2}\|AW+WB\|^2+\langle AW+WB,A\tilde{X}+\tilde{X}B-C\rangle-\langle\tilde{\Lambda}_1,W\rangle+\langle\tilde{\Lambda}_2,W\rangle-\left\langle\frac{\tilde{\Lambda}_3+\tilde{\Lambda}_3^T}{2},W\right\rangle\\&=\bar{F}(\tilde{X})+\frac{1}{2}\|AW+WB\|^2+\langle W,A^T(A\tilde{X}+\tilde{X}B-C)+(A\tilde{X}+\tilde{X}B-C)B^T-\tilde{\Lambda}_1+\tilde{\Lambda}_2-\tilde{\Lambda}_3\rangle\\&=\bar{F}(\tilde{X})+\frac{1}{2}\|AW+WB\|^2\geq\bar{F}(\tilde{X}).\end{aligned}$

    This implies that $\tilde{X}$ is a global minimizer of the function $\bar{F}(X)$ over $X\in SR^{n\times n}$. Since

    $\langle\tilde{\Lambda}_1,\tilde{X}-L\rangle=0,\quad \langle\tilde{\Lambda}_2,U-\tilde{X}\rangle=0,\quad \langle\tilde{\Lambda}_3+\tilde{\Lambda}_3^T,\tilde{X}-\varepsilon I_n\rangle=0,$

    and $\bar{F}(X)\geq\bar{F}(\tilde{X})$ holds for all $X\in SR^{n\times n}$, we have

    $F(X)\geq F(\tilde{X})+\langle\tilde{\Lambda}_1,X-L\rangle+\langle\tilde{\Lambda}_2,U-X\rangle+\left\langle\frac{\tilde{\Lambda}_3+\tilde{\Lambda}_3^T}{2},X-\varepsilon I_n\right\rangle.$

    Noting that $\tilde{\Lambda}_1\geq 0$, $\tilde{\Lambda}_2\geq 0$ and $\tilde{\Lambda}_3+\tilde{\Lambda}_3^T\in SR^{n\times n}_{\geq 0}$, it follows that $F(X)\geq F(\tilde{X})$ holds for all $X$ with $X-L\geq 0$, $U-X\geq 0$ and $X-\varepsilon I_n\in SR^{n\times n}_{\geq 0}$. Hence, $\tilde{X}$ is a solution of the constrained optimization problem (2.5), that is, $\tilde{X}$ is a solution of the multiple constrained least squares problem (1.2).
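    Conditions (2.1)–(2.4) can also be used to verify a candidate solution numerically. The following sketch assumes a candidate Xt and multipliers Lam1, Lam2, Lam3 (our own names) are given, together with A, B, C, L, U and the constant eps0 standing for ε; the tolerance is illustrative.

```matlab
% Residuals of the KKT conditions (2.1)-(2.4) for a candidate (Xt, Lam1, Lam2, Lam3).
R   = A*Xt + Xt*B - C;
tol = 1e-8;
n   = size(Xt, 1);
r1  = norm(A'*R + R*B' - Lam1 + Lam2 - Lam3, 'fro');   % stationarity (2.1)
r2  = abs(trace(Lam1' * (Xt - L)));                    % complementarity in (2.2)
r3  = abs(trace(Lam2' * (U - Xt)));                    % complementarity in (2.3)
S   = Lam3 + Lam3';
r4  = abs(trace(S' * (Xt - eps0*eye(n))));             % complementarity in (2.4)
feas = all(Xt(:) >= L(:)) && all(Xt(:) <= U(:)) && ...
       min(eig((Xt + Xt')/2)) >= eps0 - tol && min(eig(S)) >= -tol;
kktOK = max([r1, r2, r3, r4]) < tol && feas && ...
        all(Lam1(:) >= -tol) && all(Lam2(:) >= -tol);
```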

    Lemma 2.1. [31] Assume that $\tilde{x}$ is a solution of the optimization problem

    $\min f(x)\quad \text{s.t.}\ x\in\Omega,$

    where $f(x)$ is a continuously differentiable function and $\Omega$ is a closed convex set. Then

    $\langle\nabla f(\tilde{x}),x-\tilde{x}\rangle\geq 0,\quad \forall x\in\Omega.$

    Lemma 2.2. [31] Assume that $(\tilde{x}_1,\tilde{x}_2,\ldots,\tilde{x}_n)$ is a solution of the optimization problem

    $\min\ \sum_{i=1}^n f_i(x_i)\quad \text{s.t.}\ \sum_{i=1}^n A_ix_i=b,\ x_i\in\Omega_i,\ i=1,2,\ldots,n, \quad (2.6)$

    where $f_i(x_i)\ (i=1,2,\ldots,n)$ are continuously differentiable functions and $\Omega_i\ (i=1,2,\ldots,n)$ are closed convex sets. Then

    $\langle\nabla_{x_i} f_i(\tilde{x}_i)-A_i^T\tilde{\lambda},x_i-\tilde{x}_i\rangle\geq 0,\quad \forall x_i\in\Omega_i,\ i=1,2,\ldots,n,$

    where $\tilde{\lambda}$ is a solution of the dual problem of (2.6).

    Lemma 2.3. Assume that $\Omega=\{X\in R^{n\times n}: L\leq X\leq U\}$. Then, for any matrix $M\in R^{n\times n}$, if $Y=P_{\Omega}(Y-M)$, we have

    $\langle (M)_+,Y-L\rangle=0,\qquad \langle (-M)_+,U-Y\rangle=0.$

    Proof. Let

    $\tilde{Z}=\arg\min_{Z\in\Omega}\|Z-(Y-M)\|^2$

    and note that the optimization problem

    $\min_{Z\in\Omega}\|Z-(Y-M)\|^2$

    is equivalent to the optimization problem

    $\min_{Z\in R^{n\times n},\ Z-L\geq 0,\ U-Z\geq 0}\|Z-(Y-M)\|^2. \quad (2.7)$

    Then $\tilde{Z}$ satisfies the KKT conditions for the optimization problem (2.7). That is, there exist matrices $\tilde{\Lambda}_1\geq 0$ and $\tilde{\Lambda}_2\geq 0$ such that

    $\tilde{Z}-Y+M-\tilde{\Lambda}_1+\tilde{\Lambda}_2=0,\quad \langle\tilde{\Lambda}_1,\tilde{Z}-L\rangle=0,\quad \langle\tilde{\Lambda}_2,U-\tilde{Z}\rangle=0,\quad \tilde{Z}-L\geq 0,\quad U-\tilde{Z}\geq 0.$

    Since

    $Y=P_{\Omega}(Y-M)=\arg\min_{Z\in\Omega}\|Z-(Y-M)\|^2,$

    then

    $M-\tilde{\Lambda}_1+\tilde{\Lambda}_2=0,\quad \langle\tilde{\Lambda}_1,Y-L\rangle=0,\quad \langle\tilde{\Lambda}_2,U-Y\rangle=0,\quad Y-L\geq 0,\quad U-Y\geq 0.$

    So we have from the above conditions that $(\tilde{\Lambda}_1)_{ij}(\tilde{\Lambda}_2)_{ij}=0$ when $L_{ij}<U_{ij}$, while $(\tilde{\Lambda}_1)_{ij}\geq 0$ and $(\tilde{\Lambda}_2)_{ij}\geq 0$ can be selected arbitrarily when $L_{ij}=U_{ij}$. Noting that $M=\tilde{\Lambda}_1-\tilde{\Lambda}_2$, the multipliers $\tilde{\Lambda}_1$ and $\tilde{\Lambda}_2$ can be selected as $\tilde{\Lambda}_1=(M)_+$ and $\tilde{\Lambda}_2=(-M)_+$. Hence, the results hold.

    Lemma 2.4. Assume that $\Omega=\{X\in R^{n\times n}: X^T=X,\ \lambda_{\min}(X)\geq\varepsilon>0\}$. Then, for any matrix $M\in R^{n\times n}$, if $Y=P_{\Omega}(Y-M)$, we have

    $\langle M+M^T,Y-\varepsilon I_n\rangle=0,\quad M+M^T\in SR^{n\times n}_{\geq 0},\quad Y-\varepsilon I_n\in SR^{n\times n}_{\geq 0}.$

    Proof. Let

    $\tilde{Z}=\arg\min_{Z\in\Omega}\|Z-(Y-M)\|^2$

    and note that the optimization problem

    $\min_{Z\in\Omega}\|Z-(Y-M)\|^2$

    is equivalent to the optimization problem

    $\min_{Z\in SR^{n\times n},\ Z-\varepsilon I_n\in SR^{n\times n}_{\geq 0}}\|Z-(Y-M)\|^2. \quad (2.8)$

    Then $\tilde{Z}$ satisfies the KKT conditions for the optimization problem (2.8). That is, there exists a matrix $\tilde{\Lambda}\in SR^{n\times n}_{\geq 0}$ such that

    $\tilde{Z}-Y+M-\tilde{\Lambda}+(\tilde{Z}-Y+M-\tilde{\Lambda})^T=0,\quad \langle\tilde{\Lambda},\tilde{Z}-\varepsilon I_n\rangle=0,\quad \tilde{Z}-\varepsilon I_n\in SR^{n\times n}_{\geq 0}.$

    Since

    $Y=P_{\Omega}(Y-M)=\arg\min_{Z\in\Omega}\|Z-(Y-M)\|^2,$

    then

    $M+M^T-2\tilde{\Lambda}=0,\quad \langle\tilde{\Lambda},Y-\varepsilon I_n\rangle=0,\quad Y-\varepsilon I_n\in SR^{n\times n}_{\geq 0}.$

    Hence, the results hold.

    In this section we give a multi-step alternating direction method of multipliers (MSADMM) for the multiple constrained least squares problem (1.2). Obviously, the multiple constrained least squares problem (1.2) is equivalent to the following constrained optimization problem:

    $\begin{aligned}\min_X\ & F(X)=\frac{1}{2}\|AX+XB-C\|^2,\\ \text{s.t.}\ & X-Y=0,\ X-Z=0,\\ & X\in R^{n\times n},\\ & Y\in\Omega_1=\{Y\in R^{n\times n}: L\leq Y\leq U\},\\ & Z\in\Omega_2=\{Z\in R^{n\times n}: Z^T=Z,\ \lambda_{\min}(Z)\geq\varepsilon>0\}.\end{aligned} \quad (3.1)$

    The Lagrangian function, the augmented Lagrangian function and the dual problem of the constrained optimization problem (3.1) are, respectively,

    $L(X,Y,Z,M,N)=F(X)-\langle M,X-Y\rangle-\langle N,X-Z\rangle, \quad (3.2)$
    $L_{\alpha}(X,Y,Z,M,N)=F(X)+\frac{\alpha}{2}\|X-Y-M/\alpha\|^2+\frac{\alpha}{2}\|X-Z-N/\alpha\|^2, \quad (3.3)$
    $\max_{M,N\in R^{n\times n}}\ \inf_{X\in R^{n\times n},\ Y\in\Omega_1,\ Z\in\Omega_2} L(X,Y,Z,M,N), \quad (3.4)$

    where $M$ and $N$ are Lagrangian multipliers and $\alpha$ is a penalty parameter.
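    As a sketch in code (with A, B and C assumed given), the augmented Lagrangian (3.3), which both algorithms below minimize blockwise, reads:

```matlab
% Augmented Lagrangian (3.3) for iterates X, Y, Z, multipliers M, N and penalty alpha.
F      = @(X) 0.5 * norm(A*X + X*B - C, 'fro')^2;
Lalpha = @(X, Y, Z, M, N, alpha) F(X) ...
         + (alpha/2) * norm(X - Y - M/alpha, 'fro')^2 ...
         + (alpha/2) * norm(X - Z - N/alpha, 'fro')^2;
```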

    The alternating direction method of multipliers [32,33] to the constrained optimization problem (3.1) can be described as the following Algorithm 3.1.

    Algorithm 3.1. ADMM to solve problem (3.1).
    Step 1. Input matrices $A,B,C,L$ and $U$. Input constant $\varepsilon>0$, error tolerance $\varepsilon_{out}>0$ and penalty parameter $\alpha>0$. Choose initial matrices $Y_0,Z_0,M_0,N_0\in R^{n\times n}$. Let $k:=0$;
    Step 2. Compute
           (a) $X_{k+1}=\arg\min_{X\in R^{n\times n}}L_{\alpha}(X,Y_k,Z_k,M_k,N_k)$,
           (b) $Y_{k+1}=\arg\min_{Y\in\Omega_1}L_{\alpha}(X_{k+1},Y,Z_k,M_k,N_k)=P_{\Omega_1}(X_{k+1}-M_k/\alpha)$,
           (c) $Z_{k+1}=\arg\min_{Z\in\Omega_2}L_{\alpha}(X_{k+1},Y_{k+1},Z,M_k,N_k)=P_{\Omega_2}(X_{k+1}-N_k/\alpha)$,
           (d) $M_{k+1}=M_k-\alpha(X_{k+1}-Y_{k+1})$,
           (e) $N_{k+1}=N_k-\alpha(X_{k+1}-Z_{k+1})$;     (3.5)
    Step 3. If $(\|Y_{k+1}-Y_k\|^2+\|Z_{k+1}-Z_k\|^2+\|M_{k+1}-M_k\|^2+\|N_{k+1}-N_k\|^2)^{1/2}<\varepsilon_{out}$, stop. In this case, $X_{k+1}$ is an approximate solution of problem (3.1);
    Step 4. Let $k:=k+1$ and go to Step 2.
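    A compact sketch of Algorithm 3.1 in code follows. The names solve_X, proj_box and proj_eigfloor are our own placeholders for the X-subproblem solver in step (a) and for the projections $P_{\Omega_1}$, $P_{\Omega_2}$; possible implementations of all three are sketched in Section 5.

```matlab
% Sketch of ADMM (Algorithm 3.1). solve_X, proj_box and proj_eigfloor are
% placeholder helpers (X-subproblem solver and the projections onto Omega_1, Omega_2).
Y = zeros(n); Z = zeros(n); M = zeros(n); N = zeros(n);
for k = 1:maxit
    X  = solve_X(A, B, C, Y, Z, M, N, alpha);   % step (a)
    Yn = proj_box(X - M/alpha, L, U);           % step (b): P_Omega1
    Zn = proj_eigfloor(X - N/alpha, eps0);      % step (c): P_Omega2
    Mn = M - alpha*(X - Yn);                    % step (d)
    Nn = N - alpha*(X - Zn);                    % step (e)
    done = norm([Yn-Y, Zn-Z, Mn-M, Nn-N], 'fro') < tol;   % stopping test of Step 3
    Y = Yn; Z = Zn; M = Mn; N = Nn;
    if done, break; end                         % X is the approximate solution
end
```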


    The alternating direction method of multipliers (ADMM) has been well studied in the context of linearly constrained convex optimization. In the last few years, we have witnessed a number of novel applications arising from image processing, compressive sensing, statistics, etc. ADMM is a splitting version of the augmented Lagrangian method (ALM) in which the ALM subproblem is decomposed into multiple subproblems at each iteration, so that the variables can be solved separately in alternating order. ADMM is, in fact, a one-step iterative method, that is, the current iterate is obtained from the information of the previous step only, and the convergence rate of ADMM is only linear, as proved in [33]. In this paper we propose a multi-step alternating direction method of multipliers (MSADMM), which is more effective than ADMM, for the constrained optimization problem (3.1). The iterative pattern of MSADMM can be described as the following Algorithm 3.2.

    Algorithm 3.2. MSADMM to solve problem (3.1).
    Step 1. Input matrices $A,B,C,L$ and $U$. Input constant $\varepsilon>0$, error tolerance $\varepsilon_{out}>0$, penalty parameter $\alpha>0$ and correction factor $\gamma\in(0,2)$. Choose initial matrices $Y_0,Z_0,M_0,N_0\in R^{n\times n}$. Let $k:=0$;
    Step 2. ADMM step
          (a) $\tilde{X}_k=\arg\min_{X\in R^{n\times n}}L_{\alpha}(X,Y_k,Z_k,M_k,N_k)$,     (3.6)
          (b) $\tilde{M}_k=M_k-\alpha(\tilde{X}_k-Y_k)$,     (3.7)
          (c) $\tilde{N}_k=N_k-\alpha(\tilde{X}_k-Z_k)$,     (3.8)
          (d) $\tilde{Y}_k=\arg\min_{Y\in\Omega_1}L_{\alpha}(\tilde{X}_k,Y,Z_k,\tilde{M}_k,\tilde{N}_k)=P_{\Omega_1}(\tilde{X}_k-\tilde{M}_k/\alpha)$,     (3.9)
          (e) $\tilde{Z}_k=\arg\min_{Z\in\Omega_2}L_{\alpha}(\tilde{X}_k,\tilde{Y}_k,Z,\tilde{M}_k,\tilde{N}_k)=P_{\Omega_2}(\tilde{X}_k-\tilde{N}_k/\alpha)$;     (3.10)
    Step 3. Correction step
          (a) $Y_{k+1}=Y_k-\gamma(Y_k-\tilde{Y}_k)$,     (3.11)
          (b) $Z_{k+1}=Z_k-\gamma(Z_k-\tilde{Z}_k)$,     (3.12)
          (c) $M_{k+1}=M_k-\gamma(M_k-\tilde{M}_k)$,     (3.13)
          (d) $N_{k+1}=N_k-\gamma(N_k-\tilde{N}_k)$;     (3.14)
    Step 4. If $(\|Y_{k+1}-Y_k\|^2+\|Z_{k+1}-Z_k\|^2+\|M_{k+1}-M_k\|^2+\|N_{k+1}-N_k\|^2)^{1/2}<\varepsilon_{out}$, stop. In this case, $\tilde{X}_k$ is an approximate solution of problem (3.1);
    Step 5. Let $k:=k+1$ and go to Step 2.
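    The prediction-correction structure of Algorithm 3.2 in code form, reusing the same placeholder helpers as in the ADMM sketch above; gamma denotes the correction factor $\gamma\in(0,2)$:

```matlab
% Sketch of MSADMM (Algorithm 3.2): an ADMM-like prediction step followed by a
% relaxation (correction) step with factor gamma in (0,2).
Y = zeros(n); Z = zeros(n); M = zeros(n); N = zeros(n);
for k = 1:maxit
    Xt = solve_X(A, B, C, Y, Z, M, N, alpha);   % (3.6)
    Mt = M - alpha*(Xt - Y);                    % (3.7)
    Nt = N - alpha*(Xt - Z);                    % (3.8)
    Yt = proj_box(Xt - Mt/alpha, L, U);         % (3.9)
    Zt = proj_eigfloor(Xt - Nt/alpha, eps0);    % (3.10)
    Yn = Y - gamma*(Y - Yt);  Zn = Z - gamma*(Z - Zt);   % (3.11)-(3.12)
    Mn = M - gamma*(M - Mt);  Nn = N - gamma*(N - Nt);   % (3.13)-(3.14)
    done = norm([Yn-Y, Zn-Z, Mn-M, Nn-N], 'fro') < tol;  % stopping test of Step 4
    Y = Yn; Z = Zn; M = Mn; N = Nn;
    if done, break; end                         % Xt is the approximate solution
end
```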


    Compared to ADMM, MSADMM yields the new iterate in the order $X\rightarrow M\rightarrow N\rightarrow Y\rightarrow Z$, whereas ADMM uses the order $X\rightarrow Y\rightarrow Z\rightarrow M\rightarrow N$. Despite this difference, MSADMM and ADMM exploit the separable structure of (3.1) equally well and are equally easy to implement. In fact, the resulting subproblems of the two methods are of the same degree of decomposition and of the same difficulty. We shall verify by numerical experiments that the two methods are also equally competitive numerically, and that, if the correction factor $\gamma$ is chosen suitably, MSADMM is more efficient than ADMM.

    In this section, we prove the global convergence and the O(1/t) convergence rate for the proposed Algorithm 3.2. We start the proof with some lemmas which are useful for the analysis of coming theorems.

    To simplify our analysis, we use the following notations throughout this section.

    $\Omega=R^{n\times n}\times\Omega_1\times\Omega_2\times R^{n\times n}\times R^{n\times n};\qquad V=\begin{pmatrix}Y\\Z\\M\\N\end{pmatrix};\qquad W=\begin{pmatrix}X\\V\end{pmatrix};$
    $G(W)=\begin{pmatrix}\nabla F(X)-M-N\\M\\N\\X-Y\\X-Z\end{pmatrix};\qquad Q=\begin{pmatrix}\alpha I_n&0&I_n&0\\0&\alpha I_n&0&I_n\\I_n&0&\frac{1}{\alpha}I_n&0\\0&I_n&0&\frac{1}{\alpha}I_n\end{pmatrix}.$

    Lemma 4.1. Assume that $(X^*,Y^*,Z^*)$ is a solution of problem (3.1), $(M^*,N^*)$ is a solution of the dual problem (3.4) of the constrained optimization problem (3.1), and that the sequences $\{V_k\}$ and $\{\tilde{W}_k\}$ are generated by Algorithm 3.2. Then we have

    $\langle V_k-V^*,Q(V_k-\tilde{V}_k)\rangle\geq\langle V_k-\tilde{V}_k,Q(V_k-\tilde{V}_k)\rangle. \quad (4.1)$

    Proof. By (3.6)–(3.10) and Lemma 2.1, we have, for any $(X,Y,Z,M,N)\in\Omega$, that

    $\left\langle\begin{pmatrix}X-\tilde{X}_k\\Y-\tilde{Y}_k\\Z-\tilde{Z}_k\\M-\tilde{M}_k\\N-\tilde{N}_k\end{pmatrix},\begin{pmatrix}\nabla F(\tilde{X}_k)-\tilde{M}_k-\tilde{N}_k\\\tilde{M}_k\\\tilde{N}_k\\\tilde{X}_k-\tilde{Y}_k\\\tilde{X}_k-\tilde{Z}_k\end{pmatrix}+\begin{pmatrix}0&0&0&0\\\alpha I_n&0&I_n&0\\0&\alpha I_n&0&I_n\\I_n&0&\frac{1}{\alpha}I_n&0\\0&I_n&0&\frac{1}{\alpha}I_n\end{pmatrix}\begin{pmatrix}\tilde{Y}_k-Y_k\\\tilde{Z}_k-Z_k\\\tilde{M}_k-M_k\\\tilde{N}_k-N_k\end{pmatrix}\right\rangle\geq 0, \quad (4.2)$

    or, compactly,

    $\left\langle W-\tilde{W}_k,\,G(\tilde{W}_k)+\begin{pmatrix}0\\Q\end{pmatrix}(\tilde{V}_k-V_k)\right\rangle\geq 0. \quad (4.3)$

    Choosing $W$ as $W^*=\begin{pmatrix}X^*\\V^*\end{pmatrix}$, then (4.3) can be rewritten as

    $\langle\tilde{V}_k-V^*,Q(V_k-\tilde{V}_k)\rangle\geq\langle\tilde{W}_k-W^*,G(\tilde{W}_k)\rangle.$

    Noting the monotonicity of the gradients of convex functions, we have by Lemma 2.2 that

    $\langle\tilde{W}_k-W^*,G(\tilde{W}_k)\rangle\geq\langle\tilde{W}_k-W^*,G(W^*)\rangle\geq 0.$

    Therefore, the above two inequalities imply that

    $\langle\tilde{V}_k-V^*,Q(V_k-\tilde{V}_k)\rangle\geq 0,$

    from which the assertion (4.1) is immediately derived.

    Noting that the matrix $Q$ is a symmetric and positive semi-definite matrix, we use, for convenience, the notation

    $\|V_k-\tilde{V}_k\|_Q^2:=\langle V_k-\tilde{V}_k,Q(V_k-\tilde{V}_k)\rangle.$

    Then, the assertion (4.1) can be rewritten as

    $\langle V_k-V^*,Q(V_k-\tilde{V}_k)\rangle\geq\|V_k-\tilde{V}_k\|_Q^2. \quad (4.4)$

    Lemma 4.2. Assume that $(X^*,Y^*,Z^*)$ is a solution of the constrained optimization problem (3.1), $(M^*,N^*)$ is a solution of the dual problem (3.4) of the constrained optimization problem (3.1), and that the sequences $\{V_k\},\{\tilde{V}_k\}$ are generated by Algorithm 3.2. Then, we have

    $\|V_{k+1}-V^*\|_Q^2\leq\|V_k-V^*\|_Q^2-\gamma(2-\gamma)\|V_k-\tilde{V}_k\|_Q^2. \quad (4.5)$

    Proof. By elementary manipulation, we obtain

    $\begin{aligned}\|V_{k+1}-V^*\|_Q^2&=\|(V_k-V^*)-\gamma(V_k-\tilde{V}_k)\|_Q^2\\&=\|V_k-V^*\|_Q^2-2\gamma\langle V_k-V^*,Q(V_k-\tilde{V}_k)\rangle+\gamma^2\|V_k-\tilde{V}_k\|_Q^2\\&\leq\|V_k-V^*\|_Q^2-2\gamma\|V_k-\tilde{V}_k\|_Q^2+\gamma^2\|V_k-\tilde{V}_k\|_Q^2\\&=\|V_k-V^*\|_Q^2-\gamma(2-\gamma)\|V_k-\tilde{V}_k\|_Q^2,\end{aligned}$

    where the inequality follows from (4.1) and (4.4).

    Lemma 4.3. The sequences $\{V_k\}$ and $\{\tilde{W}_k\}$ generated by Algorithm 3.2 satisfy

    $\langle W-\tilde{W}_k,G(\tilde{W}_k)\rangle+\frac{1}{2\gamma}\left(\|V-V_k\|_Q^2-\|V-V_{k+1}\|_Q^2\right)\geq\left(1-\frac{\gamma}{2}\right)\|V_k-\tilde{V}_k\|_Q^2 \quad (4.6)$

    for any $(X,Y,Z,M,N)\in\Omega$.

    Proof. By (4.2), or its compact form (4.3), we have, for any $(X,Y,Z,M,N)\in\Omega$, that

    $\langle W-\tilde{W}_k,G(\tilde{W}_k)\rangle\geq-\left\langle W-\tilde{W}_k,\begin{pmatrix}0\\Q\end{pmatrix}(\tilde{V}_k-V_k)\right\rangle=\langle V-\tilde{V}_k,Q(V_k-\tilde{V}_k)\rangle. \quad (4.7)$

    Thus, it suffices to show that

    $\langle V-\tilde{V}_k,Q(V_k-\tilde{V}_k)\rangle+\frac{1}{2\gamma}\left(\|V-V_k\|_Q^2-\|V-V_{k+1}\|_Q^2\right)\geq\left(1-\frac{\gamma}{2}\right)\|V_k-\tilde{V}_k\|_Q^2. \quad (4.8)$

    By using the identity $2\langle a,Qb\rangle=\|a\|_Q^2+\|b\|_Q^2-\|a-b\|_Q^2$, we derive that

    $\langle V-V_{k+1},Q(V_k-V_{k+1})\rangle=\frac{1}{2}\|V-V_{k+1}\|_Q^2+\frac{1}{2}\|V_k-V_{k+1}\|_Q^2-\frac{1}{2}\|V-V_k\|_Q^2. \quad (4.9)$

    Moreover, since (3.11)–(3.14) can be rewritten as $V_k-V_{k+1}=\gamma(V_k-\tilde{V}_k)$, we have

    $\langle V-V_{k+1},Q(V_k-\tilde{V}_k)\rangle=\frac{1}{\gamma}\langle V-V_{k+1},Q(V_k-V_{k+1})\rangle. \quad (4.10)$

    Combining (4.9) and (4.10), we obtain

    $\langle V-V_{k+1},Q(V_k-\tilde{V}_k)\rangle=\frac{1}{2\gamma}\left(\|V-V_{k+1}\|_Q^2-\|V-V_k\|_Q^2\right)+\frac{1}{2\gamma}\|V_k-V_{k+1}\|_Q^2. \quad (4.11)$

    On the other hand, we have by using (3.11)–(3.14) that

    $\langle V_{k+1}-\tilde{V}_k,Q(V_k-\tilde{V}_k)\rangle=(1-\gamma)\|V_k-\tilde{V}_k\|_Q^2. \quad (4.12)$

    By adding (4.11) and (4.12), and again using the fact that $V_k-V_{k+1}=\gamma(V_k-\tilde{V}_k)$, we obtain that

    $\begin{aligned}\langle V-\tilde{V}_k,Q(V_k-\tilde{V}_k)\rangle&=\frac{1}{2\gamma}\left(\|V-V_{k+1}\|_Q^2-\|V-V_k\|_Q^2\right)+\frac{1}{2\gamma}\|V_k-V_{k+1}\|_Q^2+(1-\gamma)\|V_k-\tilde{V}_k\|_Q^2\\&=\frac{1}{2\gamma}\left(\|V-V_{k+1}\|_Q^2-\|V-V_k\|_Q^2\right)+\left(1-\frac{\gamma}{2}\right)\|V_k-\tilde{V}_k\|_Q^2,\end{aligned}$

    which is equivalent to (4.8). Hence, the lemma is proved.

    Theorem 4.1. The sequences $\{V_k\}$ and $\{\tilde{W}_k\}$ generated by Algorithm 3.2 are bounded, and furthermore, any accumulation point $\tilde{X}^*$ of the sequence $\{\tilde{X}_k\}$ is a solution of problem (1.2).

    Proof. The inequality (4.5) with the restriction $\gamma\in(0,2)$ implies that

    (ⅰ) $\lim_{k\rightarrow\infty}\|V_k-\tilde{V}_k\|_Q=0$;

    (ⅱ) $\|V_k-V^*\|_Q$ is bounded above.

    Recall that the matrix $Q$ is symmetric and positive semi-definite. Thus, we have by assertion (ⅰ) that $Q(V_k-\tilde{V}_k)\rightarrow 0$ as $k\rightarrow\infty$, which, together with (3.7) and (3.8), implies that $\tilde{X}_k-\tilde{Y}_k\rightarrow 0$ and $\tilde{X}_k-\tilde{Z}_k\rightarrow 0$ as $k\rightarrow\infty$. Assertion (ⅱ) implies that the sequences $\{Y_k\}$, $\{Z_k\}$, $\{M_k\}$ and $\{N_k\}$ are bounded. Since Eqs (3.11)–(3.14) hold and the sequences $\{Y_k\}$, $\{Z_k\}$, $\{M_k\}$ and $\{N_k\}$ are bounded, the sequences $\{\tilde{Y}_k\}$, $\{\tilde{Z}_k\}$, $\{\tilde{M}_k\}$ and $\{\tilde{N}_k\}$ are also bounded. Hence, by the clustering theorem, together with (3.11)–(3.14), there exist subsequences $\{\tilde{X}_k\}_K$, $\{\tilde{Y}_k\}_K$, $\{\tilde{Z}_k\}_K$, $\{\tilde{M}_k\}_K$, $\{\tilde{N}_k\}_K$, $\{Y_k\}_K$, $\{Z_k\}_K$, $\{M_k\}_K$ and $\{N_k\}_K$ such that

    $\lim_{k\rightarrow\infty,\,k\in K}\tilde{X}_k=\tilde{X}^*,\quad \lim_{k\rightarrow\infty,\,k\in K}\tilde{Y}_k=\lim_{k\rightarrow\infty,\,k\in K}Y_k=\tilde{Y}^*,\quad \lim_{k\rightarrow\infty,\,k\in K}\tilde{Z}_k=\lim_{k\rightarrow\infty,\,k\in K}Z_k=\tilde{Z}^*,$
    $\lim_{k\rightarrow\infty,\,k\in K}\tilde{M}_k=\lim_{k\rightarrow\infty,\,k\in K}M_k=\tilde{M}^*,\quad \lim_{k\rightarrow\infty,\,k\in K}\tilde{N}_k=\lim_{k\rightarrow\infty,\,k\in K}N_k=\tilde{N}^*.$

    Furthermore, we have by (3.7) and (3.8) that

    $\tilde{X}^*=\tilde{Y}^*=\tilde{Z}^*. \quad (4.13)$

    By (3.6)–(3.8), we have

    $A^T(A\tilde{X}_k+\tilde{X}_kB-C)+(A\tilde{X}_k+\tilde{X}_kB-C)B^T-\tilde{M}_k-\tilde{N}_k=0.$

    So we have

    $A^T(A\tilde{X}^*+\tilde{X}^*B-C)+(A\tilde{X}^*+\tilde{X}^*B-C)B^T-\tilde{M}^*-\tilde{N}^*=0. \quad (4.14)$

    Letting $k\rightarrow\infty,\ k\in K$, we have by (3.9), (3.10) and (4.13) that

    $\tilde{X}^*=P_{\Omega_1}(\tilde{X}^*-\tilde{M}^*/\alpha),\qquad \tilde{X}^*=P_{\Omega_2}(\tilde{X}^*-\tilde{N}^*/\alpha). \quad (4.15)$

    Noting that $\alpha>0$, we have by (4.15), Lemma 2.3 and Lemma 2.4 that

    $\langle(\tilde{M}^*)_+,\tilde{X}^*-L\rangle=0,\quad \tilde{X}^*-L\geq 0, \quad (4.16)$
    $\langle(-\tilde{M}^*)_+,U-\tilde{X}^*\rangle=0,\quad U-\tilde{X}^*\geq 0, \quad (4.17)$

    and

    $\langle\tilde{N}^*+(\tilde{N}^*)^T,\tilde{X}^*-\varepsilon I_n\rangle=0,\quad \tilde{N}^*+(\tilde{N}^*)^T\in SR^{n\times n}_{\geq 0},\quad \tilde{X}^*-\varepsilon I_n\in SR^{n\times n}_{\geq 0}. \quad (4.18)$

    Let

    $\tilde{\Lambda}_1=(\tilde{M}^*)_+,\quad \tilde{\Lambda}_2=(-\tilde{M}^*)_+,\quad \tilde{\Lambda}_3=\tilde{N}^*.$

    Then we have by (4.14) and (4.16)–(4.18) that

    $\begin{cases}A^T(A\tilde{X}^*+\tilde{X}^*B-C)+(A\tilde{X}^*+\tilde{X}^*B-C)B^T-\tilde{\Lambda}_1+\tilde{\Lambda}_2-\tilde{\Lambda}_3=0,\\\langle\tilde{\Lambda}_1,\tilde{X}^*-L\rangle=0,\quad \tilde{X}^*-L\geq 0,\quad \tilde{\Lambda}_1\geq 0,\\\langle\tilde{\Lambda}_2,U-\tilde{X}^*\rangle=0,\quad U-\tilde{X}^*\geq 0,\quad \tilde{\Lambda}_2\geq 0,\\\langle\tilde{\Lambda}_3+\tilde{\Lambda}_3^T,\tilde{X}^*-\varepsilon I_n\rangle=0,\quad \tilde{X}^*-\varepsilon I_n\in SR^{n\times n}_{\geq 0},\quad \tilde{\Lambda}_3+\tilde{\Lambda}_3^T\in SR^{n\times n}_{\geq 0}.\end{cases} \quad (4.19)$

    Hence, we have by Theorem 2.1 that $\tilde{X}^*$ is a solution of problem (1.2).

    Theorem 4.2. Let the sequence $\{\tilde{W}_k\}$ be generated by Algorithm 3.2. For an integer $t>0$, let

    $\tilde{W}_t=\frac{1}{t+1}\sum_{k=0}^{t}\tilde{W}_k, \quad (4.20)$

    then $\frac{1}{t+1}\sum_{k=0}^{t}(\tilde{X}_k,\tilde{Y}_k,\tilde{Z}_k,\tilde{M}_k,\tilde{N}_k)\in\Omega$ and the inequality

    $\langle\tilde{W}_t-W,G(W)\rangle\leq\frac{1}{2\gamma(t+1)}\|V-V_0\|_Q^2 \quad (4.21)$

    holds for any $(X,Y,Z,M,N)\in\Omega$.

    Proof. First, for any integer $t>0$, we have $(\tilde{X}_k,\tilde{Y}_k,\tilde{Z}_k,\tilde{M}_k,\tilde{N}_k)\in\Omega$ for $k=0,1,2,\ldots,t$. Since $\frac{1}{t+1}\sum_{k=0}^{t}\tilde{W}_k$ can be viewed as a convex combination of the $\tilde{W}_k$'s, we obtain

    $\frac{1}{t+1}\sum_{k=0}^{t}(\tilde{X}_k,\tilde{Y}_k,\tilde{Z}_k,\tilde{M}_k,\tilde{N}_k)\in\Omega.$

    Second, since $\gamma\in(0,2)$, it follows from Lemma 4.3 that

    $\langle W-\tilde{W}_k,G(\tilde{W}_k)\rangle+\frac{1}{2\gamma}\left(\|V-V_k\|_Q^2-\|V-V_{k+1}\|_Q^2\right)\geq 0,\quad \forall(X,Y,Z,M,N)\in\Omega. \quad (4.22)$

    By combining the monotonicity of $G(W)$ with the inequality (4.22), we have

    $\langle W-\tilde{W}_k,G(W)\rangle+\frac{1}{2\gamma}\left(\|V-V_k\|_Q^2-\|V-V_{k+1}\|_Q^2\right)\geq 0,\quad \forall(X,Y,Z,M,N)\in\Omega.$

    Summing the above inequality over $k=0,1,\ldots,t$, we derive that

    $\left\langle(t+1)W-\sum_{k=0}^{t}\tilde{W}_k,\,G(W)\right\rangle+\frac{1}{2\gamma}\left(\|V-V_0\|_Q^2-\|V-V_{t+1}\|_Q^2\right)\geq 0,\quad \forall(X,Y,Z,M,N)\in\Omega.$

    By dropping the minus term, we have

    $\left\langle(t+1)W-\sum_{k=0}^{t}\tilde{W}_k,\,G(W)\right\rangle+\frac{1}{2\gamma}\|V-V_0\|_Q^2\geq 0,\quad \forall(X,Y,Z,M,N)\in\Omega,$

    which is equivalent to

    $\left\langle\frac{1}{t+1}\sum_{k=0}^{t}\tilde{W}_k-W,\,G(W)\right\rangle\leq\frac{1}{2\gamma(t+1)}\|V-V_0\|_Q^2,\quad \forall(X,Y,Z,M,N)\in\Omega.$

    The proof is completed.

    Noting that problem (3.1) is equivalent to finding $(X^*,Y^*,Z^*,M^*,N^*)\in\Omega$ such that the inequality

    $\langle W-W^*,G(W^*)\rangle\geq 0 \quad (4.23)$

    holds for any $(X,Y,Z,M,N)\in\Omega$, Theorem 4.2 means that, for any initial matrices $Y_0,Z_0,M_0,N_0\in R^{n\times n}$, the point $\tilde{W}_t$ defined in (4.20) satisfies

    $\langle\tilde{W}_t-W,G(W)\rangle\leq\frac{\|V-V_0\|_Q^2}{2\gamma(t+1)},$

    which means the point $\tilde{W}_t$ is an approximate solution of (4.23) with accuracy $O(1/t)$. That is, the convergence rate $O(1/t)$ of Algorithm 3.2 is established in an ergodic sense.

    In this section, we present some numerical examples to illustrate the convergence of MSADMM for the constrained least squares problem (1.2). All tests were performed in Matlab 7 on a 64-bit Windows 7 operating system. In all tests, the constant $\varepsilon=0.1$, the matrix $L$ has all elements equal to 1 and $U$ has all elements equal to 3. The matrices $A,B$ and $C$ are randomly generated, i.e., generated in Matlab style as A=randn(n,n), B=randn(n,n), C=randn(n,n). In all algorithms, the initial matrices are chosen as the null matrices. The maximum numbers of inner and outer iterations are restricted to 5000. The error tolerances are $\varepsilon_{out}=\varepsilon_{in}=10^{-9}$ in Algorithms 3.1 and 3.2. The computational methods of the projections $P_{\Omega_i}(X)\ (i=1,2)$ are as follows [38]:

    $P_{\Omega_1}(X)_{ij}=\begin{cases}X_{ij},&\text{if }L_{ij}\leq X_{ij}\leq U_{ij},\\U_{ij},&\text{if }X_{ij}>U_{ij},\\L_{ij},&\text{if }X_{ij}<L_{ij},\end{cases}\qquad P_{\Omega_2}(X)=W\,\mathrm{diag}(d_1,d_2,\ldots,d_n)\,W^T,$

    where

    $d_i=\begin{cases}\lambda_i\!\left(\frac{X+X^T}{2}\right),&\text{if }\lambda_i\!\left(\frac{X+X^T}{2}\right)\geq\varepsilon,\\\varepsilon,&\text{if }\lambda_i\!\left(\frac{X+X^T}{2}\right)<\varepsilon,\end{cases}$

    and $W$ is such that $\frac{X+X^T}{2}=W\Delta W^T$ is the spectral decomposition, i.e., $W^TW=I$ and $\Delta=\mathrm{diag}\!\left(\lambda_1\!\left(\frac{X+X^T}{2}\right),\lambda_2\!\left(\frac{X+X^T}{2}\right),\ldots,\lambda_n\!\left(\frac{X+X^T}{2}\right)\right)$. We use the LSQR algorithm described in [34], with necessary modifications, to solve the subproblems (3.5) in Algorithm 3.1, (3.6) in Algorithm 3.2 and (6.1) in Algorithm 6.2.
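    The two projection formulas above read in code as follows (a minimal sketch; eps0 stands for $\varepsilon$, and these routines can serve as the proj_box and proj_eigfloor placeholders used in the earlier algorithm sketches):

```matlab
% Projection onto Omega_1 = {L <= X <= U} (entrywise clipping).
function P = proj_box(X, L, U)
    P = min(max(X, L), U);
end

% Projection onto Omega_2 = {X = X', lambda_min(X) >= eps0}: symmetrize,
% then raise every eigenvalue below eps0 up to eps0.
function P = proj_eigfloor(X, eps0)
    S = (X + X')/2;                  % symmetric part
    [W, D] = eig(S);                 % spectral decomposition S = W*D*W'
    d = max(diag(D), eps0);          % d_i as in the formula above
    P = W * diag(d) * W';
end
```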

    The LSQR algorithm is an effective method for solving a consistent linear matrix equation or the least squares problem of an inconsistent linear matrix equation. Using this iterative algorithm, for any initial matrix, a solution can be obtained within finitely many iteration steps if exact arithmetic is used. In addition, a solution with minimum Frobenius norm can be obtained by choosing a special kind of initial matrix, and the solution nearest to a given matrix in the Frobenius norm can be obtained by first finding the minimum Frobenius norm solution of a new consistent matrix equation. The LSQR algorithm to solve the subproblems (3.5), (3.6) and (6.1) can be described as follows:

    Algorithm 5.1. LSQR algorithm to solve subproblems (3.5) and (3.6).
    Step 1. Input matrices $A,B,C,Y_k,Z_k,M_k$ and $N_k$, penalty parameter $\alpha>0$ and error tolerance $\varepsilon_{in}$. Compute
          $\eta_1=\left(\|C\|^2+\alpha\|Y_k+\tfrac{M_k}{\alpha}\|^2+\alpha\|Z_k+\tfrac{N_k}{\alpha}\|^2\right)^{1/2},\quad U^{(1)}_1=\frac{C}{\eta_1},\quad U^{(2)}_1=\frac{\sqrt{\alpha}\,(Y_k+\tfrac{M_k}{\alpha})}{\eta_1},\quad U^{(3)}_1=\frac{\sqrt{\alpha}\,(Z_k+\tfrac{N_k}{\alpha})}{\eta_1},$
          $\xi_1=\|A^TU^{(1)}_1+U^{(1)}_1B^T+\sqrt{\alpha}\,U^{(2)}_1+\sqrt{\alpha}\,U^{(3)}_1\|,\quad \Gamma_1=\frac{A^TU^{(1)}_1+U^{(1)}_1B^T+\sqrt{\alpha}\,U^{(2)}_1+\sqrt{\alpha}\,U^{(3)}_1}{\xi_1},\quad \Phi_1=\Gamma_1,\quad \bar{\phi}_1=\eta_1,\quad \bar{\rho}_1=\xi_1.$ Let $i:=1$;
    Step 2. Compute
          $\eta_{i+1}=\left(\|A\Gamma_i+\Gamma_iB-\xi_iU^{(1)}_i\|^2+\|\sqrt{\alpha}\,\Gamma_i-\xi_iU^{(2)}_i\|^2+\|\sqrt{\alpha}\,\Gamma_i-\xi_iU^{(3)}_i\|^2\right)^{1/2},$
          $U^{(1)}_{i+1}=\frac{A\Gamma_i+\Gamma_iB-\xi_iU^{(1)}_i}{\eta_{i+1}},\quad U^{(2)}_{i+1}=\frac{\sqrt{\alpha}\,\Gamma_i-\xi_iU^{(2)}_i}{\eta_{i+1}},\quad U^{(3)}_{i+1}=\frac{\sqrt{\alpha}\,\Gamma_i-\xi_iU^{(3)}_i}{\eta_{i+1}},$
          $\xi_{i+1}=\|A^TU^{(1)}_{i+1}+U^{(1)}_{i+1}B^T+\sqrt{\alpha}\,U^{(2)}_{i+1}+\sqrt{\alpha}\,U^{(3)}_{i+1}-\eta_{i+1}\Gamma_i\|,\quad \Gamma_{i+1}=\frac{A^TU^{(1)}_{i+1}+U^{(1)}_{i+1}B^T+\sqrt{\alpha}\,U^{(2)}_{i+1}+\sqrt{\alpha}\,U^{(3)}_{i+1}-\eta_{i+1}\Gamma_i}{\xi_{i+1}},$
          $\rho_i=(\bar{\rho}_i^2+\eta_{i+1}^2)^{1/2},\quad c_i=\frac{\bar{\rho}_i}{\rho_i},\quad s_i=\frac{\eta_{i+1}}{\rho_i},\quad \theta_{i+1}=s_i\xi_{i+1},\quad \bar{\rho}_{i+1}=-c_i\xi_{i+1},\quad \phi_i=c_i\bar{\phi}_i,\quad \bar{\phi}_{i+1}=s_i\bar{\phi}_i,$
          $X_{i+1}=X_i+\frac{\phi_i}{\rho_i}\Phi_i,\quad \Phi_{i+1}=\Gamma_{i+1}-\frac{\theta_{i+1}}{\rho_i}\Phi_i;$
    Step 3. If $\|X_{i+1}-X_i\|<\varepsilon_{in}$, terminate the execution of the algorithm. (In this case, $X_i$ is a solution of problem (3.5) or (3.6));
    Step 4. Let $i:=i+1$ and go to Step 2.
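    For moderate n, the regularized least squares subproblems (3.5a) and (3.6) can alternatively be solved by vectorizing their normal equations; this is not the LSQR recursion of Algorithm 5.1, only a simple dense substitute that can play the role of the solve_X placeholder used in the earlier sketches.

```matlab
% X-subproblem of (3.5)/(3.6) via vectorized normal equations (small n only):
%   min_X 0.5*||A*X+X*B-C||_F^2 + (alpha/2)*||X-Y-M/alpha||_F^2 + (alpha/2)*||X-Z-N/alpha||_F^2
function X = solve_X(A, B, C, Y, Z, M, N, alpha)
    n   = size(A, 1);
    K   = kron(eye(n), A) + kron(B', eye(n));        % vec(A*X+X*B) = K*vec(X)
    rhs = K' * C(:) + alpha*Y(:) + M(:) + alpha*Z(:) + N(:);
    x   = (K'*K + 2*alpha*eye(n^2)) \ rhs;           % stationarity condition
    X   = reshape(x, n, n);
end
```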


    Table 1 reports the average computing time (CPU) over 10 tests of Algorithm 3.1 (ADMM) and Algorithm 3.2 (MSADMM) with penalty parameter α=n. Figures 1–4 report the computing time of ADMM for the same problem size and different penalty parameters α. Figures 5–8 report the computing time of MSADMM for the same problem size and different correction factors γ. Figure 9 reports the computing time curves of Algorithm 3.1 (ADMM) and Algorithm 3.2 (MSADMM) for different matrix sizes.

    Table 1.  Numerical comparisons between MSADMM and ADMM (CPU time in seconds).
    n (α = n) ADMM MSADMM(γ=0.8) MSADMM(γ=1.0) MSADMM(γ=1.5)
    20 0.0846 0.1254 0.0865 0.0538
    40 0.3935 0.4883 0.3836 0.2676
    80 1.3370 1.6930 1.3726 0.8965
    100 2.4766 3.1514 2.5015 1.6488
    150 5.8780 7.3482 5.8742 3.9154
    200 11.6398 14.6023 11.6162 7.6576
    300 35.1929 41.4151 32.9488 21.4898
    400 83.7386 108.9807 86.0472 56.8144
    500 147.5759 183.4758 144.2038 92.3137
    600 242.0408 302.6395 241.6225 157.8042

    Figure 1.  Computing times (seconds) vs. the values of α for n = 40.
    Figure 2.  Computing times (seconds) vs. the values of α for n = 60.
    Figure 3.  Computing times (seconds) vs. the values of α for n = 80.
    Figure 4.  Computing times (seconds) vs. the values of \alpha for n = 100.
    Figure 5.  Computing times (seconds) vs. the values of \gamma for \alpha = n = 40.
    Figure 6.  Computing times (seconds) vs. the values of \gamma for \alpha = n = 60.
    Figure 7.  Computing times (seconds) vs. the values of \gamma for \alpha = n = 80.
    Figure 8.  Computing times (seconds) vs. the values of \gamma for \alpha = n = 100.
    Figure 9.  Numerical comparisons between ADMM and MSADMM.

    Based on the tests reported in Table 1 and Figures 1–9, and many other unreported tests which show similar patterns, we have the following results:

    Remark 5.1. The convergence speed of ADMM is directly related to the penalty parameter \alpha . In general, the penalty parameter \alpha in this paper can be chosen as \alpha\approx n (see Figures 1–4). However, how to select the best penalty parameter \alpha is an important problem that should be studied in the future.

    Remark 5.2. The convergence speed of MSADMM is directly related to the penalty parameter \alpha and the correction factor \gamma . The selection of the penalty parameter \alpha is similar to that for ADMM since MSADMM is a direct extension of ADMM. For the correction factor \gamma , as shown in Table 1 and Figures 5–8, aggressive values such as \gamma\approx 1.5 are often preferred. However, how to select the best correction factor \gamma is also an important problem that should be studied in the future.

    Remark 5.3. As shown in Table 1 and Figure 9, MSADMM, with the correction factor \gamma\approx 1.5 and the penalty parameter \alpha chosen the same as for ADMM, is more effective than ADMM.

    Anderson acceleration, or Anderson mixing, was initially developed in 1965 by Donald Anderson [35] as an iterative procedure to solve some nonlinear integral equations arising in physics. It turns out that the Anderson acceleration is very efficient to solve other types of nonlinear equations as well, see [36,37,38], and the literature cited therein. When Anderson acceleration is applied to the equation f(x) = g(x)-x = 0 , the iterative pattern can be described as the following Algorithm 6.1.

    Algorithm 6.1. Anderson accelerated method to solve the equation f(x)=0 .
    Given x_0 \in R^n and an integer m \geq 1 , this algorithm produces a sequence x_k of iterates intended to converge to a fixed point of the function g:R^n\rightarrow R^n .
    Step 1. Compute x_1=g(x_0) ;
    Step 2. For k=1, 2, \cdots until convergence;
    Step 3. Let m_k=\min(m, k) ;
    Step 4. Compute \lambda_k=(\lambda_1, \lambda_2, \cdots, \lambda_{m_k})^T\in\mathbb{R}^{m_k} that solves
           \min\limits_{\lambda\in R^{m_k}}\|f(x_k)-\sum\limits_{j=1}^{m_k}\lambda_j(f(x_{k-m_k+j})-f(x_{k-m_k+j-1}))\|_2^2;
    Step 5. Set
           x_{k+1}=g(x_k)+\sum\limits_{j=1}^{m_k-1}\lambda_j[g(x_{k-m_k+j+1})-g(x_{k-m_k+j})].
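    The least squares problem in Step 4 and the update in Step 5 amount to one small linear solve per iteration. Below is a minimal sketch of a single Anderson step, following the common Walker-Ni presentation (its sign and index conventions may differ slightly from the statement above); Fh and Gh are assumed to hold the stored residuals f(x_i) and values g(x_i) of the last m_k+1 iterates, one column each.

```matlab
% One Anderson-acceleration step for a fixed-point map g, with f(x) = g(x) - x.
% Fh, Gh: matrices whose columns are f(x_i) and g(x_i) for the last m_k+1 iterates.
dF     = diff(Fh, 1, 2);            % columns f(x_{i+1}) - f(x_i)
dG     = diff(Gh, 1, 2);            % columns g(x_{i+1}) - g(x_i)
lambda = dF \ Fh(:, end);           % least squares coefficients (Step 4)
xnext  = Gh(:, end) - dG * lambda;  % accelerated iterate (Step 5)
```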


    To apply Anderson acceleration to ADMM, we define the following matrix function

    f(Y,Z,M,N) = g(Y,Z,M,N)-(Y,Z,M,N),

    where g(Y_k, Z_k, M_k, N_k) = (Y_{k+1}, Z_{k+1}, M_{k+1}, N_{k+1}) , with Y_{k+1}, Z_{k+1}, M_{k+1} and N_{k+1} computed by (b)–(e) in Algorithm 3.1. Letting f_k = f(Y_k, Z_k, M_k, N_k) and g_k = g(Y_k, Z_k, M_k, N_k) , Algorithm 3.1 with Anderson acceleration can be described as the following Algorithm 6.2.

    Algorithm 6.2. Algorithm 3.1 with Anderson acceleration to solve problem (3.1).
    Step 1. Input matrices A, B, C, L and U . Input constant \varepsilon > 0 , error tolerance \varepsilon_{out} > 0 , penalty parameter \alpha > 0 and integer m \geq 1 . Choose initial matrices Y_0, Z_0, M_0, N_0\in R^{n\times n} . Let k=:0 ;
    Step 2. Compute
            \begin{eqnarray} X_{k+1}=arg\min_{X\in R^{n\times n}}L_{\alpha}(X, Y_k, Z_k, M_k, N_k); \end{eqnarray} \; \; \; \; \; \; \; \; \; \; \; (6.1)
    Step 3. Let m_k=\min(m, k) ;
    Step 4. Compute \lambda_k=(\lambda_1, \lambda_2, \cdots, \lambda_{m_k})^T\in\mathbb{R}^{m_k} that solves
            \min\limits_{\lambda\in R^{m_k}}\|f_k-\sum\limits_{j=1}^{m_k}\lambda_j(f_{k-m_k+j+1}-f_{k-m_k+j})\|^2;
    Step 5. Set
            \begin{eqnarray*} (Y_{k+1}, Z_{k+1}, M_{k+1}, N_{k+1})=g_k+\sum_{j=1}^{m_k-1}\lambda_j(g_{k-m_k+j+1}-g_{k-m_k+j}); \end{eqnarray*}
    Step 6. If (\|Y_{k+1}-Y_k\|^2+\|Z_{k+1}-Z_k\|^2+\|M_{k+1}-M_k\|^2+\|N_{k+1}-N_k\|^2)^{1/2}\leq \varepsilon_{out} , stop. In this case, X_{k+1} is an approximate solution of problem (3.1);
    Step 7. Let k=:k+1 and go to step 2.


    Algorithm 6.2 (ACADMM) is an m -step iterative method, that is, the current iterate is obtained from a linear combination of the previous m steps. Furthermore, the combination coefficients are recomputed at each iteration step. Compared to ACADMM, Algorithm 3.2 (MSADMM) is a two-step iterative method and the combination coefficients are fixed at each iteration step. The convergence speed of ACADMM is directly related to the penalty parameter \alpha and the backtracking step m . The selection of the penalty parameter \alpha is the same as for ADMM since ACADMM's iterates are corrected from ADMM's iterates. For the backtracking step m , as shown in Table 2 (the average computing time over 10 tests) and Figure 10, aggressive values such as m = 10 are often preferred (in this case, ACADMM is more efficient than MSADMM). However, how to select the best backtracking step m is an important problem which should be studied in the near future.

    Table 2.  Numerical comparisons between MSADMM and ACADMM (CPU time in seconds).
    n ( \alpha = n ) MSADMM( \gamma = 1.5 ) ACADMM( m = 2 ) ACADMM( m = 10 ) ACADMM( m = 20 )
    20 0.0735 0.0991 0.0521 0.0647
    40 0.2595 0.3485 0.2186 0.2660
    80 1.1866 1.1319 1.0232 1.1069
    100 1.7750 2.2081 1.6003 1.7660
    150 3.8474 5.2760 3.5700 3.9587
    200 7.6133 10.8719 6.9807 7.7318
    300 21.2970 28.5379 18.8233 19.9526
    400 56.2133 74.3192 44.8087 46.5275
    500 98.0542 130.2326 75.6044 80.1480
    600 157.7573 208.1124 125.2842 133.4549

    Figure 10.  Numerical comparisons between MSADMM and ACADMM.

    In this paper, the multiple constrained least squares solution of the Sylvester equation AX+XB = C has been discussed. The necessary and sufficient conditions for the existence of a solution to the considered problem are given (Theorem 2.1). MSADMM for solving the considered problem is proposed and some convergence results for the proposed algorithm are proved (Theorem 4.1 and Theorem 4.2). Problems which should be studied in the near future are listed. Numerical experiments show that MSADMM with a suitable correction factor \gamma is more effective than ADMM (see Table 1 and Figure 9), and that ACADMM with a suitable backtracking step m is the most effective among ADMM, MSADMM and ACADMM (see Table 2 and Figure 10).

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by National Natural Science Foundation of China (grant number 11961012) and Special Research Project for Guangxi Young Innovative Talents (grant number AD20297063).

    The authors declare no competing interests.


