
Fixed points of non-linear multivalued graphic contractions with applications

  • In this paper, a novel and more general type of sequence of non-linear multivalued mappings, together with the corresponding contractions on a metric space equipped with a graph, is introduced. Fixed point results for a single-valued mapping and for the new sequence of multivalued mappings are examined under suitable conditions. A non-trivial comparative illustration is provided to support the assumptions of our main theorem. A few important results in ϵ-chainable metric spaces and for cyclic contractions are deduced as consequences of the concepts obtained herein. As an application of our findings, new criteria for solving a broader form of Fredholm integral equation are established. An open problem concerning a discretized population balance model, whose solution may be investigated using any of the ideas proposed in this note, is highlighted as a future assignment.

    Citation: Mohammed Shehu Shagari, Trad Alotaibi, Hassen Aydi, Choonkil Park. Fixed points of non-linear multivalued graphic contractions with applications[J]. AIMS Mathematics, 2022, 7(11): 20164-20177. doi: 10.3934/math.20221103




    The Sylvester equation

    $$AX+XB=C \qquad (1.1)$$

    appears frequently in many areas of applied mathematics. We refer readers to the elegant survey by Bhatia and Rosenthal [1] and the references therein for the history of the Sylvester equation and many interesting and important theoretical results. The Sylvester equation is important in a number of applications, such as matrix eigenvalue decompositions [2,3], control theory [3,4,5], model reduction [6,7,8,9], the construction of exact solutions of nonlinear integrable equations in mathematical physics [10], feature problems of slice semi-regular functions [11] and the numerical solution of matrix differential Riccati equations [12,13,14]. There are several numerical algorithms to compute the solution of the Sylvester equation. The standard ones are the Bartels–Stewart algorithm [15] and the Hessenberg–Schur method, first described by Enright [14] but more often attributed to Golub, Nash and Van Loan [16]. Other computationally efficient approaches for the case that both A and B are stable, i.e., both A and B have all their eigenvalues in the open left half plane, are the sign function method [17], the Smith method [18] and ADI iteration methods [19,20,21,22]. All these methods are efficient for small, dense matrices A and B.
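    As a point of reference, for small and dense A and B the Sylvester equation (1.1) can be solved directly by vectorization, since vec(AX+XB) = (I⊗A + Bᵀ⊗I)vec(X). A minimal Matlab sketch follows; it assumes A and −B have no common eigenvalues (so (1.1) is uniquely solvable) and is meant only as an illustration, not as one of the large scale methods cited above.

    % Solve AX + XB = C for small dense A, B by vectorization (illustrative only;
    % for larger problems the Bartels-Stewart or Hessenberg-Schur methods apply).
    n = 4;
    A = randn(n,n); B = randn(n,n); C = randn(n,n);
    K = kron(eye(n), A) + kron(B.', eye(n));   % vec(A*X + X*B) = K*vec(X)
    X = reshape(K \ C(:), n, n);
    residual = norm(A*X + X*B - C, 'fro')      % should be near machine precision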

    Recent interest is directed more towards large and sparse matrices A and B, with C of low rank. For dense A and B, an approach based on the sign function method is suggested in [23] that exploits the low rank structure of C. This approach is further used in [24] in order to solve the large scale Sylvester equation with sparse A and B, i.e., matrices A and B that can be represented by O(n log(n)) data. Problems concerning the sensitivity of the solution of the Sylvester equation are also widely studied, and several books contain results for these problems [25,26,27].

    In this paper, we focus our attention on the multiple constrained least squares solution of the Sylvester equation, that is, the following multiple constrained least squares problem:

    $$\min_{X^T=X,\ L\le X\le U,\ \lambda_{\min}(X)\ge\varepsilon>0} f(X)=\frac{1}{2}\|AX+XB-C\|^2 \qquad (1.2)$$

    where $A,B,C,L$ and $U$ are given $n\times n$ real matrices, $X$ is the $n\times n$ real symmetric matrix we wish to find, $\lambda_{\min}(X)$ denotes the smallest eigenvalue of the symmetric matrix $X$, and $\varepsilon$ is a given positive constant. The inequality $X\le Y$, for any two real matrices, means that $X_{ij}\le Y_{ij}$, where $X_{ij}$ and $Y_{ij}$ denote the $(i,j)$th entries of the matrices $X$ and $Y$, respectively.
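    To make the ingredients of (1.2) concrete, the following small Matlab sketch evaluates the objective and checks the three constraints for a candidate X; the function name and the symmetry tolerance are illustrative and not part of the problem statement.

    function [fX, feasible] = check_candidate(A, B, C, L, U, X, eps0)
    % Objective value and feasibility of a candidate X for problem (1.2):
    % X symmetric, L <= X <= U entrywise, and lambda_min(X) >= eps0 > 0.
    fX        = 0.5*norm(A*X + X*B - C, 'fro')^2;
    symmetric = norm(X - X', 'fro') <= 1e-12*max(1, norm(X, 'fro'));
    inBox     = all(X(:) >= L(:)) && all(X(:) <= U(:));
    eigOK     = min(eig((X + X')/2)) >= eps0;
    feasible  = symmetric && inBox && eigOK;
    end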

    Least squares estimation of matrices under multiple constraints is widely used in mathematical economics, statistical data analysis, image reconstruction, recommendation problems and so on. Such problems differ from ordinary least squares problems in that the estimated matrices are usually required to be symmetric positive definite and bounded and, sometimes, to have special construction patterns. For example, in the dynamic equilibrium model of an economy [28], one needs to estimate an aggregate demand function derived from a second order analysis of the utility function of individuals. The formulation of this problem is to find the least squares solution of the matrix equation AX=B, where A and B are given and the fitting matrix X is symmetric and bounded with smallest eigenvalue no less than a specified positive number, since, in the neighborhood of equilibrium, the utility function is approximated by a strictly concave quadratic. Other examples, discussed in [29,30], are to find a symmetric positive definite patterned matrix closest to a sample covariance matrix and to find a symmetric, diagonally dominant matrix with positive diagonal entries closest to a given matrix. Based on the above analysis, we have a strong motivation to study the multiple constrained least squares problem (1.2).

    In this paper, we first transform the multiple constrained least squares problem (1.2) into an equivalent constrained optimization problem. Then, we give necessary and sufficient conditions for the existence of a solution to the equivalent constrained optimization problem. Noting that the alternating direction method of multipliers (ADMM) is a one-step iterative method, we propose a multi-step alternating direction method of multipliers (MSADMM) for the multiple constrained least squares problem (1.2) and analyze the global convergence of the proposed algorithm. We give some numerical examples to illustrate the effectiveness of the proposed algorithm for the multiple constrained least squares problem (1.2) and list some problems that should be studied in the near future. We also give some numerical comparisons between MSADMM, ADMM and ADMM with Anderson acceleration (ACADMM).

    Throughout this paper, $\mathbb{R}^{m\times n}$, $SR^{n\times n}$ and $SR^{n\times n}_{\ge 0}$ denote the set of $m\times n$ real matrices, $n\times n$ symmetric matrices and $n\times n$ symmetric positive semidefinite matrices, respectively. $I_n$ stands for the $n\times n$ identity matrix. $A_+$ denotes the matrix with $(i,j)$th entry equal to $\max\{0,A_{ij}\}$. The inner product on $\mathbb{R}^{m\times n}$ is defined as $\langle A,B\rangle=\mathrm{tr}(A^TB)=\sum_{ij}A_{ij}B_{ij}$ for all $A,B\in\mathbb{R}^{m\times n}$, and the associated norm is the Frobenius norm, denoted by $\|A\|$. $P_\Omega(X)$ denotes the projection of the matrix $X$ onto the constraint set $\Omega$, that is, $P_\Omega(X)=\arg\min_{Z\in\Omega}\|Z-X\|$.
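    As a quick illustration of this notation in Matlab (the matrices below are arbitrary examples):

    % Trace inner product, Frobenius norm and the positive part A_+ in Matlab.
    A = [1 -2; 3 -4];  B = [2 0; -1 5];
    ip1   = trace(A'*B);      % <A,B> = tr(A^T B)
    ip2   = sum(sum(A.*B));   % equals the entrywise sum of A_ij*B_ij
    nrmA  = norm(A, 'fro');   % the associated Frobenius norm ||A||
    Aplus = max(A, 0);        % positive part A_+, entries max{0, A_ij}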

    In this section, we give an existence theorem for a solution of the multiple constrained least squares problem (1.2) and some theoretical results for the optimization problems which are useful for discussions in the next sections.

    Theorem 2.1. The matrix $\tilde X$ is a solution of the multiple constrained least squares problem (1.2) if and only if there exist matrices $\tilde\Lambda_1,\tilde\Lambda_2$ and $\tilde\Lambda_3$ such that the following conditions (2.1)–(2.4) are satisfied.

    $$A^T(A\tilde X+\tilde XB-C)+(A\tilde X+\tilde XB-C)B^T-\tilde\Lambda_1+\tilde\Lambda_2-\tilde\Lambda_3=0, \qquad (2.1)$$
    $$\langle\tilde\Lambda_1,\tilde X-L\rangle=0,\quad \tilde X-L\ge 0,\quad \tilde\Lambda_1\ge 0, \qquad (2.2)$$
    $$\langle\tilde\Lambda_2,U-\tilde X\rangle=0,\quad U-\tilde X\ge 0,\quad \tilde\Lambda_2\ge 0, \qquad (2.3)$$
    $$\langle\tilde\Lambda_3+\tilde\Lambda_3^T,\tilde X-\varepsilon I_n\rangle=0,\quad \tilde X-\varepsilon I_n\in SR^{n\times n}_{\ge 0},\quad \tilde\Lambda_3+\tilde\Lambda_3^T\in SR^{n\times n}_{\ge 0}. \qquad (2.4)$$

    Proof. Obviously, the multiple constrained least squares problem (1.2) can be rewritten as

    $$\min_{X\in SR^{n\times n}} F(X)=\frac12\|AX+XB-C\|^2\quad \text{s.t.}\ X-L\ge 0,\ U-X\ge 0,\ X-\varepsilon I_n\in SR^{n\times n}_{\ge 0}. \qquad (2.5)$$

    Then, if $\tilde X$ is a solution to the constrained optimization problem (2.5), $\tilde X$ certainly satisfies the KKT conditions of the constrained optimization problem (2.5), and hence of the multiple constrained least squares problem (1.2). That is, there exist matrices $\tilde\Lambda_1,\tilde\Lambda_2$ and $\tilde\Lambda_3$ such that conditions (2.1)–(2.4) are satisfied.

    Conversely, assume that there exist matrices $\tilde\Lambda_1,\tilde\Lambda_2$ and $\tilde\Lambda_3$ such that conditions (2.1)–(2.4) are satisfied. Let

    $$\bar F(X)=F(X)-\langle\tilde\Lambda_1,X-L\rangle-\langle\tilde\Lambda_2,U-X\rangle-\Big\langle\frac{\tilde\Lambda_3+\tilde\Lambda_3^T}{2},X-\varepsilon I_n\Big\rangle,$$

    then, for any matrix $W\in SR^{n\times n}$, we have

    $$\begin{aligned}
    \bar F(\tilde X+W)&=\frac12\|A(\tilde X+W)+(\tilde X+W)B-C\|^2-\langle\tilde\Lambda_1,\tilde X+W-L\rangle-\langle\tilde\Lambda_2,U-\tilde X-W\rangle-\Big\langle\frac{\tilde\Lambda_3+\tilde\Lambda_3^T}{2},\tilde X+W-\varepsilon I_n\Big\rangle\\
    &=\bar F(\tilde X)+\frac12\|AW+WB\|^2+\langle AW+WB,\ A\tilde X+\tilde XB-C\rangle-\langle\tilde\Lambda_1,W\rangle+\langle\tilde\Lambda_2,W\rangle-\Big\langle\frac{\tilde\Lambda_3+\tilde\Lambda_3^T}{2},W\Big\rangle\\
    &=\bar F(\tilde X)+\frac12\|AW+WB\|^2+\big\langle W,\ A^T(A\tilde X+\tilde XB-C)+(A\tilde X+\tilde XB-C)B^T-\tilde\Lambda_1+\tilde\Lambda_2-\tilde\Lambda_3\big\rangle\\
    &=\bar F(\tilde X)+\frac12\|AW+WB\|^2\ \ge\ \bar F(\tilde X).
    \end{aligned}$$

    This implies that $\tilde X$ is a global minimizer of the function $\bar F(X)$ over $X\in SR^{n\times n}$. Since

    $$\langle\tilde\Lambda_1,\tilde X-L\rangle=0,\quad \langle\tilde\Lambda_2,U-\tilde X\rangle=0,\quad \langle\tilde\Lambda_3+\tilde\Lambda_3^T,\tilde X-\varepsilon I_n\rangle=0,$$

    and $\bar F(X)\ge\bar F(\tilde X)$ holds for all $X\in SR^{n\times n}$, we have

    $$F(X)\ge F(\tilde X)+\langle\tilde\Lambda_1,X-L\rangle+\langle\tilde\Lambda_2,U-X\rangle+\Big\langle\frac{\tilde\Lambda_3+\tilde\Lambda_3^T}{2},X-\varepsilon I_n\Big\rangle.$$

    Noting that $\tilde\Lambda_1\ge 0$, $\tilde\Lambda_2\ge 0$ and $\tilde\Lambda_3+\tilde\Lambda_3^T\in SR^{n\times n}_{\ge 0}$, it follows that $F(X)\ge F(\tilde X)$ holds for all $X$ with $X-L\ge 0$, $U-X\ge 0$ and $X-\varepsilon I_n\in SR^{n\times n}_{\ge 0}$. Hence, $\tilde X$ is a solution of the constrained optimization problem (2.5), that is, $\tilde X$ is a solution of the multiple constrained least squares problem (1.2).

    Lemma 2.1. [31] Assume that $\tilde x$ is a solution of the optimization problem

    $$\min f(x)\quad \text{s.t.}\ x\in\Omega,$$

    where $f(x)$ is a continuously differentiable function and $\Omega$ is a closed convex set. Then

    $$\langle\nabla f(\tilde x),\ x-\tilde x\rangle\ge 0,\quad \forall x\in\Omega.$$

    Lemma 2.2. [31] Assume that $(\tilde x_1,\tilde x_2,\dots,\tilde x_n)$ is a solution of the optimization problem

    $$\min \sum_{i=1}^n f_i(x_i)\quad \text{s.t.}\ \sum_{i=1}^n A_ix_i=b,\ x_i\in\Omega_i,\ i=1,2,\dots,n, \qquad (2.6)$$

    where $f_i(x_i)\ (i=1,2,\dots,n)$ are continuously differentiable functions and $\Omega_i\ (i=1,2,\dots,n)$ are closed convex sets. Then

    $$\langle\nabla_{x_i}f_i(\tilde x_i)-A_i^T\tilde\lambda,\ x_i-\tilde x_i\rangle\ge 0,\quad \forall x_i\in\Omega_i,\ i=1,2,\dots,n,$$

    where $\tilde\lambda$ is a solution to the dual problem of (2.6).

    Lemma 2.3. Assume that $\Omega=\{X\in\mathbb{R}^{n\times n}:L\le X\le U\}$. Then, for any matrix $M\in\mathbb{R}^{n\times n}$, if $Y=P_\Omega(Y-M)$, we have

    $$\langle (M)_+,\ Y-L\rangle=0,\qquad \langle (-M)_+,\ U-Y\rangle=0.$$

    Proof. Let

    $$\tilde Z=\arg\min_{Z\in\Omega}\|Z-(Y-M)\|^2$$

    and note that the optimization problem

    $$\min_{Z\in\Omega}\|Z-(Y-M)\|^2$$

    is equivalent to the optimization problem

    $$\min_{Z\in\mathbb{R}^{n\times n},\ Z-L\ge 0,\ U-Z\ge 0}\|Z-(Y-M)\|^2. \qquad (2.7)$$

    Then $\tilde Z$ satisfies the KKT conditions for the optimization problem (2.7). That is, there exist matrices $\tilde\Lambda_1\ge 0$ and $\tilde\Lambda_2\ge 0$ such that

    $$\tilde Z-Y+M-\tilde\Lambda_1+\tilde\Lambda_2=0,\quad \langle\tilde\Lambda_1,\tilde Z-L\rangle=0,\quad \langle\tilde\Lambda_2,U-\tilde Z\rangle=0,\quad \tilde Z-L\ge 0,\quad U-\tilde Z\ge 0.$$

    Since

    $$Y=P_\Omega(Y-M)=\arg\min_{Z\in\Omega}\|Z-(Y-M)\|^2,$$

    we then have

    $$M-\tilde\Lambda_1+\tilde\Lambda_2=0,\quad \langle\tilde\Lambda_1,Y-L\rangle=0,\quad \langle\tilde\Lambda_2,U-Y\rangle=0,\quad Y-L\ge 0,\quad U-Y\ge 0.$$

    From the above conditions, $(\tilde\Lambda_1)_{ij}(\tilde\Lambda_2)_{ij}=0$ when $L_{ij}<U_{ij}$, while $(\tilde\Lambda_1)_{ij}$ and $(\tilde\Lambda_2)_{ij}$ can be selected arbitrarily subject to $(\tilde\Lambda_1)_{ij}\ge 0$ and $(\tilde\Lambda_2)_{ij}\ge 0$ when $L_{ij}=U_{ij}$. Noting that $M=\tilde\Lambda_1-\tilde\Lambda_2$, the multipliers can be selected as $\tilde\Lambda_1=(M)_+$ and $\tilde\Lambda_2=(-M)_+$. Hence, the results hold.
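    The conclusion of Lemma 2.3 is easy to check numerically: if $Y=P_\Omega(X)$ for an arbitrary $X$ and we set $M=Y-X$, then $Y=P_\Omega(Y-M)$ and both complementarity identities hold exactly. A small Matlab sketch (all names illustrative):

    % Numerical check of Lemma 2.3 for the box Omega = {L <= X <= U}.
    n = 5;
    L = -ones(n); U = ones(n);
    X = 3*randn(n);                        % arbitrary matrix
    Y = min(max(X, L), U);                 % Y = P_Omega(X)
    M = Y - X;                             % then Y = P_Omega(Y - M)
    c1 = sum(sum(max(M, 0).*(Y - L)));     % <(M)_+ , Y - L>, equals 0
    c2 = sum(sum(max(-M, 0).*(U - Y)));    % <(-M)_+, U - Y>, equals 0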

    Lemma 2.4. Assume that $\Omega=\{X\in\mathbb{R}^{n\times n}:X^T=X,\ \lambda_{\min}(X)\ge\varepsilon>0\}$. Then, for any matrix $M\in\mathbb{R}^{n\times n}$, if $Y=P_\Omega(Y-M)$, we have

    $$\langle M+M^T,\ Y-\varepsilon I_n\rangle=0,\qquad M+M^T\in SR^{n\times n}_{\ge 0},\qquad Y-\varepsilon I_n\in SR^{n\times n}_{\ge 0}.$$

    Proof. Let

    $$\tilde Z=\arg\min_{Z\in\Omega}\|Z-(Y-M)\|^2$$

    and note that the optimization problem

    $$\min_{Z\in\Omega}\|Z-(Y-M)\|^2$$

    is equivalent to the optimization problem

    $$\min_{Z\in SR^{n\times n},\ Z-\varepsilon I_n\in SR^{n\times n}_{\ge 0}}\|Z-(Y-M)\|^2. \qquad (2.8)$$

    Then $\tilde Z$ satisfies the KKT conditions for the optimization problem (2.8). That is, there exists a matrix $\tilde\Lambda\in SR^{n\times n}_{\ge 0}$ such that

    $$\tilde Z-Y+M-\tilde\Lambda+(\tilde Z-Y+M-\tilde\Lambda)^T=0,\quad \langle\tilde\Lambda,\tilde Z-\varepsilon I_n\rangle=0,\quad \tilde Z-\varepsilon I_n\in SR^{n\times n}_{\ge 0}.$$

    Since

    $$Y=P_\Omega(Y-M)=\arg\min_{Z\in\Omega}\|Z-(Y-M)\|^2,$$

    we then have

    $$M+M^T-2\tilde\Lambda=0,\quad \langle\tilde\Lambda,Y-\varepsilon I_n\rangle=0,\quad Y-\varepsilon I_n\in SR^{n\times n}_{\ge 0}.$$

    Hence, the results hold.

    In this section we give a multi-step alternating direction method of multipliers (MSADMM) for the multiple constrained least squares problem (1.2). Obviously, the multiple constrained least squares problem (1.2) is equivalent to the following constrained optimization problem

    $$\begin{aligned}
    \min_X\ & F(X)=\frac12\|AX+XB-C\|^2,\\
    \text{s.t.}\ & X-Y=0,\quad X-Z=0,\\
    & X\in\mathbb{R}^{n\times n},\\
    & Y\in\Omega_1=\{Y\in\mathbb{R}^{n\times n}:L\le Y\le U\},\\
    & Z\in\Omega_2=\{Z\in\mathbb{R}^{n\times n}:Z^T=Z,\ \lambda_{\min}(Z)\ge\varepsilon>0\}.
    \end{aligned} \qquad (3.1)$$

    The Lagrange function, augmented Lagrangian function and dual problem to the constrained optimization problem (3.1) are, respectively,

    $$L(X,Y,Z,M,N)=F(X)-\langle M,X-Y\rangle-\langle N,X-Z\rangle, \qquad (3.2)$$
    $$L_\alpha(X,Y,Z,M,N)=F(X)+\frac{\alpha}{2}\|X-Y-M/\alpha\|^2+\frac{\alpha}{2}\|X-Z-N/\alpha\|^2, \qquad (3.3)$$
    $$\max_{M,N\in\mathbb{R}^{n\times n}}\ \inf_{X\in\mathbb{R}^{n\times n},\ Y\in\Omega_1,\ Z\in\Omega_2}L(X,Y,Z,M,N), \qquad (3.4)$$

    where $M$ and $N$ are Lagrangian multipliers and $\alpha$ is a penalty parameter.
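    For later reference, the augmented Lagrangian (3.3) is straightforward to evaluate; a minimal Matlab sketch (the function name is illustrative):

    function val = aug_lagrangian(A, B, C, X, Y, Z, M, N, alpha)
    % Augmented Lagrangian (3.3) of the constrained optimization problem (3.1).
    F   = 0.5*norm(A*X + X*B - C, 'fro')^2;
    val = F + (alpha/2)*norm(X - Y - M/alpha, 'fro')^2 ...
            + (alpha/2)*norm(X - Z - N/alpha, 'fro')^2;
    end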

    The alternating direction method of multipliers [32,33] to the constrained optimization problem (3.1) can be described as the following Algorithm 3.1.

    Algorithm 3.1. ADMM to solve problem (3.1).
    Step 1. Input matrices $A,B,C,L$ and $U$. Input the constant $\varepsilon>0$, the error tolerance $\varepsilon_{out}>0$ and the penalty parameter $\alpha>0$. Choose initial matrices $Y^0,Z^0,M^0,N^0\in\mathbb{R}^{n\times n}$. Let $k:=0$;
    Step 2. Compute
    $$\begin{aligned}
    (a)\ & X^{k+1}=\arg\min_{X\in\mathbb{R}^{n\times n}}L_\alpha(X,Y^k,Z^k,M^k,N^k),\\
    (b)\ & Y^{k+1}=\arg\min_{Y\in\Omega_1}L_\alpha(X^{k+1},Y,Z^k,M^k,N^k)=P_{\Omega_1}(X^{k+1}-M^k/\alpha),\\
    (c)\ & Z^{k+1}=\arg\min_{Z\in\Omega_2}L_\alpha(X^{k+1},Y^{k+1},Z,M^k,N^k)=P_{\Omega_2}(X^{k+1}-N^k/\alpha),\\
    (d)\ & M^{k+1}=M^k-\alpha(X^{k+1}-Y^{k+1}),\\
    (e)\ & N^{k+1}=N^k-\alpha(X^{k+1}-Z^{k+1});
    \end{aligned} \qquad (3.5)$$
    Step 3. If $\big(\|Y^{k+1}-Y^k\|^2+\|Z^{k+1}-Z^k\|^2+\|M^{k+1}-M^k\|^2+\|N^{k+1}-N^k\|^2\big)^{1/2}<\varepsilon_{out}$, stop. In this case, $X^{k+1}$ is an approximate solution of problem (3.1);
    Step 4. Let $k:=k+1$ and go to Step 2.
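    The only nontrivial update in Algorithm 3.1 is (3.5a); the projections in (3.5b) and (3.5c) have the closed forms given later, before Algorithm 5.1. Setting the gradient of $L_\alpha(\cdot,Y^k,Z^k,M^k,N^k)$ to zero gives $A^T(AX+XB-C)+(AX+XB-C)B^T+\alpha(X-Y^k-M^k/\alpha)+\alpha(X-Z^k-N^k/\alpha)=0$, a linear equation in $X$. The paper solves this step with a modified LSQR (Algorithm 5.1); purely as an illustration, for small n it can also be solved directly in vectorized form, as in the following sketch (the function name is illustrative):

    function X = x_update(A, B, C, Yk, Zk, Mk, Nk, alpha)
    % Solve subproblem (3.5a): minimize L_alpha(X, Yk, Zk, Mk, Nk) over X.
    % Direct Kronecker-based solve of the normal equations; only for small n.
    n   = size(A, 1);
    K   = kron(eye(n), A) + kron(B.', eye(n));     % vec(A*X + X*B) = K*vec(X)
    H   = K'*K + 2*alpha*eye(n^2);
    rhs = K'*C(:) + alpha*(Yk(:) + Mk(:)/alpha) + alpha*(Zk(:) + Nk(:)/alpha);
    X   = reshape(H \ rhs, n, n);
    end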


    Alternating direction method of multipliers (ADMM) has been well studied in the context of linearly constrained convex optimization. In the last few years, we have witnessed a number of novel applications arising from image processing, compressive sensing, statistics, etc. ADMM is a splitting version of the augmented Lagrangian method (ALM) in which the ALM subproblem is decomposed into multiple subproblems at each iteration, so that the variables can be solved for separately in alternating order. ADMM is, in fact, a one-step iterative method, that is, the current iterate is obtained from information of the previous step only, and the convergence rate of ADMM is only linear, as proved in [33]. In this paper we propose a multi-step alternating direction method of multipliers (MSADMM), which is more effective than ADMM, for the constrained optimization problem (3.1). The iterative pattern of MSADMM can be described as the following Algorithm 3.2.

    Algorithm 3.2. MSADMM to solve problem (3.1).
    Step 1. Input matrices $A,B,C,L$ and $U$. Input the constant $\varepsilon>0$, the error tolerance $\varepsilon_{out}>0$, the penalty parameter $\alpha>0$ and the correction factor $\gamma\in(0,2)$. Choose initial matrices $Y^0,Z^0,M^0,N^0\in\mathbb{R}^{n\times n}$. Let $k:=0$;
    Step 2. ADMM step
    $$(a)\ \tilde X^k=\arg\min_{X\in\mathbb{R}^{n\times n}}L_\alpha(X,Y^k,Z^k,M^k,N^k), \qquad (3.6)$$
    $$(b)\ \tilde M^k=M^k-\alpha(\tilde X^k-Y^k), \qquad (3.7)$$
    $$(c)\ \tilde N^k=N^k-\alpha(\tilde X^k-Z^k), \qquad (3.8)$$
    $$(d)\ \tilde Y^k=\arg\min_{Y\in\Omega_1}L_\alpha(\tilde X^k,Y,Z^k,\tilde M^k,\tilde N^k)=P_{\Omega_1}(\tilde X^k-\tilde M^k/\alpha), \qquad (3.9)$$
    $$(e)\ \tilde Z^k=\arg\min_{Z\in\Omega_2}L_\alpha(\tilde X^k,\tilde Y^k,Z,\tilde M^k,\tilde N^k)=P_{\Omega_2}(\tilde X^k-\tilde N^k/\alpha); \qquad (3.10)$$
    Step 3. Correction step
    $$(a)\ Y^{k+1}=Y^k-\gamma(Y^k-\tilde Y^k), \qquad (3.11)$$
    $$(b)\ Z^{k+1}=Z^k-\gamma(Z^k-\tilde Z^k), \qquad (3.12)$$
    $$(c)\ M^{k+1}=M^k-\gamma(M^k-\tilde M^k), \qquad (3.13)$$
    $$(d)\ N^{k+1}=N^k-\gamma(N^k-\tilde N^k); \qquad (3.14)$$
    Step 4. If $\big(\|Y^{k+1}-Y^k\|^2+\|Z^{k+1}-Z^k\|^2+\|M^{k+1}-M^k\|^2+\|N^{k+1}-N^k\|^2\big)^{1/2}<\varepsilon_{out}$, stop. In this case, $\tilde X^k$ is an approximate solution of problem (3.1);
    Step 5. Let $k:=k+1$ and go to Step 2.
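    A compact Matlab sketch of Algorithm 3.2 follows, assuming the x_update routine sketched after Algorithm 3.1 and the closed-form projections used in the numerical experiments section; variable names are illustrative, and for large n the X-update should be replaced by LSQR (Algorithm 5.1).

    function [X, Y, Z, M, N] = msadmm(A, B, C, L, U, eps0, alpha, gamma, tol, maxit)
    % A sketch of Algorithm 3.2 (MSADMM) for problem (3.1).
    n = size(A, 1);
    Y = zeros(n); Z = zeros(n); M = zeros(n); N = zeros(n);
    for k = 1:maxit
        % ADMM step (3.6)-(3.10)
        X  = x_update(A, B, C, Y, Z, M, N, alpha);
        Mt = M - alpha*(X - Y);
        Nt = N - alpha*(X - Z);
        Yt = min(max(X - Mt/alpha, L), U);            % P_{Omega_1}
        S  = X - Nt/alpha;  S = (S + S')/2;
        [W, D] = eig(S);
        Zt = W*diag(max(diag(D), eps0))*W';           % P_{Omega_2}
        % Correction step (3.11)-(3.14)
        Yn = Y - gamma*(Y - Yt);  Zn = Z - gamma*(Z - Zt);
        Mn = M - gamma*(M - Mt);  Nn = N - gamma*(N - Nt);
        done = sqrt(norm(Yn-Y,'fro')^2 + norm(Zn-Z,'fro')^2 + ...
                    norm(Mn-M,'fro')^2 + norm(Nn-N,'fro')^2) < tol;
        Y = Yn; Z = Zn; M = Mn; N = Nn;
        if done, break; end
    end
    end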


    Compared to ADMM, MSADMM yields the new iterate in the order $X\to M\to N\to Y\to Z$, whereas ADMM uses the order $X\to Y\to Z\to M\to N$. Despite this difference, MSADMM and ADMM are equally effective at exploiting the separable structure of (3.1) and equally easy to implement. In fact, the resulting subproblems of the two methods have the same degree of decomposition and the same difficulty. We shall verify by numerical experiments that the two methods are also equally competitive numerically, and that, if we choose the correction factor $\gamma$ suitably, MSADMM is more efficient than ADMM.

    In this section, we prove the global convergence and the O(1/t) convergence rate for the proposed Algorithm 3.2. We start the proof with some lemmas which are useful for the analysis of coming theorems.

    To simplify our analysis, we use the following notations throughout this section.

    $$\Omega=\mathbb{R}^{n\times n}\times\Omega_1\times\Omega_2\times\mathbb{R}^{n\times n}\times\mathbb{R}^{n\times n};\qquad
    V=\begin{pmatrix}Y\\ Z\\ M\\ N\end{pmatrix};\qquad
    W=\begin{pmatrix}X\\ V\end{pmatrix};$$
    $$G(W)=\begin{pmatrix}\nabla F(X)-M-N\\ M\\ N\\ X-Y\\ X-Z\end{pmatrix};\qquad
    Q=\begin{pmatrix}\alpha I_n&0&I_n&0\\ 0&\alpha I_n&0&I_n\\ I_n&0&\frac1\alpha I_n&0\\ 0&I_n&0&\frac1\alpha I_n\end{pmatrix}.$$
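    The following small Matlab check confirms numerically that $Q$, as written above, is symmetric positive semi-definite, a fact used repeatedly below; the values of n and α are arbitrary illustrative choices.

    % Check that Q is symmetric positive semidefinite for sample n and alpha.
    n = 3; alpha = 2;
    I = eye(n); O = zeros(n);
    Q = [alpha*I, O, I, O; O, alpha*I, O, I; I, O, I/alpha, O; O, I, O, I/alpha];
    smallest_eig = min(eig((Q + Q')/2))   % nonnegative up to rounding error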

    Lemma 4.1. Assume that $(X,Y,Z)$ is a solution of problem (3.1), $(M,N)$ is a solution of the dual problem (3.4) of the constrained optimization problem (3.1), and the sequences $\{V^k\}$ and $\{\tilde W^k\}$ are generated by Algorithm 3.2. Then we have

    $$\langle V^k-V,\ Q(V^k-\tilde V^k)\rangle\ \ge\ \langle V^k-\tilde V^k,\ Q(V^k-\tilde V^k)\rangle. \qquad (4.1)$$

    Proof. By (3.6)–(3.10) and Lemma 2.1, we have, for any $(X,Y,Z,M,N)\in\Omega$, that

    $$\left\langle
    \begin{pmatrix}X-\tilde X^k\\ Y-\tilde Y^k\\ Z-\tilde Z^k\\ M-\tilde M^k\\ N-\tilde N^k\end{pmatrix},\
    \begin{pmatrix}\nabla F(\tilde X^k)-\tilde M^k-\tilde N^k\\ \tilde M^k\\ \tilde N^k\\ \tilde X^k-\tilde Y^k\\ \tilde X^k-\tilde Z^k\end{pmatrix}
    +\begin{pmatrix}0&0&0&0\\ \alpha I_n&0&I_n&0\\ 0&\alpha I_n&0&I_n\\ I_n&0&\frac1\alpha I_n&0\\ 0&I_n&0&\frac1\alpha I_n\end{pmatrix}
    \begin{pmatrix}\tilde Y^k-Y^k\\ \tilde Z^k-Z^k\\ \tilde M^k-M^k\\ \tilde N^k-N^k\end{pmatrix}
    \right\rangle\ \ge\ 0, \qquad (4.2)$$

    or, compactly,

    $$\Big\langle W-\tilde W^k,\ G(\tilde W^k)+\begin{pmatrix}0\\ Q\end{pmatrix}(\tilde V^k-V^k)\Big\rangle\ \ge\ 0. \qquad (4.3)$$

    Choosing $W$ as $W=\begin{pmatrix}X\\ V\end{pmatrix}$, then (4.3) can be rewritten as

    $$\langle \tilde V^k-V,\ Q(V^k-\tilde V^k)\rangle\ \ge\ \langle \tilde W^k-W,\ G(\tilde W^k)\rangle.$$

    Noting the monotonicity of the gradients of convex functions, we have by Lemma 2.2 that

    $$\langle \tilde W^k-W,\ G(\tilde W^k)\rangle\ \ge\ \langle \tilde W^k-W,\ G(W)\rangle\ \ge\ 0.$$

    Therefore, the above two inequalities imply that

    $$\langle \tilde V^k-V,\ Q(V^k-\tilde V^k)\rangle\ \ge\ 0,$$

    from which the assertion (4.1) is immediately derived.

    Noting that the matrix $Q$ is symmetric and positive semi-definite, we use, for convenience, the notation

    $$\|V^k-\tilde V^k\|_Q^2:=\langle V^k-\tilde V^k,\ Q(V^k-\tilde V^k)\rangle.$$

    Then, the assertion (4.1) can be rewritten as

    $$\langle V^k-V,\ Q(V^k-\tilde V^k)\rangle\ \ge\ \|V^k-\tilde V^k\|_Q^2. \qquad (4.4)$$

    Lemma 4.2. Assume that $(X,Y,Z)$ is a solution of the constrained optimization problem (3.1), $(M,N)$ is a solution of the dual problem (3.4) of the constrained optimization problem (3.1), and the sequences $\{V^k\},\{\tilde V^k\}$ are generated by Algorithm 3.2. Then, we have

    $$\|V^{k+1}-V\|_Q^2\ \le\ \|V^k-V\|_Q^2-\gamma(2-\gamma)\|V^k-\tilde V^k\|_Q^2. \qquad (4.5)$$

    Proof. By elementary manipulation, we obtain

    $$\begin{aligned}
    \|V^{k+1}-V\|_Q^2&=\|(V^k-V)-\gamma(V^k-\tilde V^k)\|_Q^2\\
    &=\|V^k-V\|_Q^2-2\gamma\langle V^k-V,\ Q(V^k-\tilde V^k)\rangle+\gamma^2\|V^k-\tilde V^k\|_Q^2\\
    &\le\|V^k-V\|_Q^2-2\gamma\|V^k-\tilde V^k\|_Q^2+\gamma^2\|V^k-\tilde V^k\|_Q^2\\
    &=\|V^k-V\|_Q^2-\gamma(2-\gamma)\|V^k-\tilde V^k\|_Q^2,
    \end{aligned}$$

    where the inequality follows from (4.1) and (4.4).

    Lemma 4.3. The sequences $\{V^k\}$ and $\{\tilde W^k\}$ generated by Algorithm 3.2 satisfy

    $$\langle W-\tilde W^k,\ G(\tilde W^k)\rangle+\frac{1}{2\gamma}\big(\|V-V^k\|_Q^2-\|V-V^{k+1}\|_Q^2\big)\ \ge\ \Big(1-\frac{\gamma}{2}\Big)\|V^k-\tilde V^k\|_Q^2 \qquad (4.6)$$

    for any $(X,Y,Z,M,N)\in\Omega$.

    Proof. By (4.2), or its compact form (4.3), we have, for any $(X,Y,Z,M,N)\in\Omega$, that

    $$\langle W-\tilde W^k,\ G(\tilde W^k)\rangle\ \ge\ -\Big\langle W-\tilde W^k,\ \begin{pmatrix}0\\ Q\end{pmatrix}(\tilde V^k-V^k)\Big\rangle=\langle V-\tilde V^k,\ Q(V^k-\tilde V^k)\rangle. \qquad (4.7)$$

    Thus, it suffices to show that

    $$\langle V-\tilde V^k,\ Q(V^k-\tilde V^k)\rangle+\frac{1}{2\gamma}\big(\|V-V^k\|_Q^2-\|V-V^{k+1}\|_Q^2\big)\ \ge\ \Big(1-\frac{\gamma}{2}\Big)\|V^k-\tilde V^k\|_Q^2. \qquad (4.8)$$

    By using the formula $2\langle a,Qb\rangle=\|a\|_Q^2+\|b\|_Q^2-\|a-b\|_Q^2$, we derive that

    $$\langle V-V^{k+1},\ Q(V^k-V^{k+1})\rangle=\frac12\|V-V^{k+1}\|_Q^2+\frac12\|V^k-V^{k+1}\|_Q^2-\frac12\|V-V^k\|_Q^2. \qquad (4.9)$$

    Moreover, since (3.11)–(3.14) can be rewritten as $V^k-V^{k+1}=\gamma(V^k-\tilde V^k)$, we have

    $$\langle V-V^{k+1},\ Q(V^k-\tilde V^k)\rangle=\frac{1}{\gamma}\langle V-V^{k+1},\ Q(V^k-V^{k+1})\rangle. \qquad (4.10)$$

    Combining (4.9) and (4.10), we obtain

    $$\langle V-V^{k+1},\ Q(V^k-\tilde V^k)\rangle=\frac{1}{2\gamma}\big(\|V-V^{k+1}\|_Q^2-\|V-V^k\|_Q^2\big)+\frac{1}{2\gamma}\|V^k-V^{k+1}\|_Q^2. \qquad (4.11)$$

    On the other hand, we have by using (3.11)–(3.14) that

    $$\langle V^{k+1}-\tilde V^k,\ Q(V^k-\tilde V^k)\rangle=(1-\gamma)\|V^k-\tilde V^k\|_Q^2. \qquad (4.12)$$

    By adding (4.11) and (4.12), and again using the fact that $V^k-V^{k+1}=\gamma(V^k-\tilde V^k)$, we obtain

    $$\begin{aligned}
    \langle V-\tilde V^k,\ Q(V^k-\tilde V^k)\rangle&=\frac{1}{2\gamma}\big(\|V-V^{k+1}\|_Q^2-\|V-V^k\|_Q^2\big)+\frac{1}{2\gamma}\|V^k-V^{k+1}\|_Q^2+(1-\gamma)\|V^k-\tilde V^k\|_Q^2\\
    &=\frac{1}{2\gamma}\big(\|V-V^{k+1}\|_Q^2-\|V-V^k\|_Q^2\big)+\Big(1-\frac{\gamma}{2}\Big)\|V^k-\tilde V^k\|_Q^2,
    \end{aligned}$$

    which is equivalent to (4.8). Hence, the lemma is proved.

    Theorem 4.1. The sequences $\{V^k\}$ and $\{\tilde W^k\}$ generated by Algorithm 3.2 are bounded and, furthermore, any accumulation point $\tilde X$ of the sequence $\{\tilde X^k\}$ is a solution of problem (1.2).

    Proof. The inequality (4.5) with the restriction $\gamma\in(0,2)$ implies that

    (ⅰ) $\lim_{k\to\infty}\|V^k-\tilde V^k\|_Q=0$;

    (ⅱ) the sequence $\{\|V^k-V\|_Q\}$ is bounded above.

    Recall that the matrix $Q$ is symmetric and positive semi-definite. Thus, we have by assertion (ⅰ) that $Q(V^k-\tilde V^k)\to 0$ as $k\to\infty$, which, together with (3.7) and (3.8), implies that $\tilde X^k-\tilde Y^k\to 0$ and $\tilde X^k-\tilde Z^k\to 0$ as $k\to\infty$. Assertion (ⅱ) implies that the sequences $\{Y^k\}$, $\{Z^k\}$, $\{M^k\}$ and $\{N^k\}$ are bounded. Equations (3.11)–(3.14), together with the boundedness of $\{Y^k\}$, $\{Z^k\}$, $\{M^k\}$ and $\{N^k\}$, imply that the sequences $\{\tilde Y^k\}$, $\{\tilde Z^k\}$, $\{\tilde M^k\}$ and $\{\tilde N^k\}$ are also bounded. Hence, by the clustering theorem, together with (3.11)–(3.14), there exist subsequences $\{\tilde X^k\}_K$, $\{\tilde Y^k\}_K$, $\{\tilde Z^k\}_K$, $\{\tilde M^k\}_K$, $\{\tilde N^k\}_K$, $\{Y^k\}_K$, $\{Z^k\}_K$, $\{M^k\}_K$ and $\{N^k\}_K$ such that

    $$\lim_{k\to\infty,\ k\in K}\tilde X^k=\tilde X,\qquad
    \lim_{k\to\infty,\ k\in K}\tilde Y^k=\lim_{k\to\infty,\ k\in K}Y^k=\tilde Y,\qquad
    \lim_{k\to\infty,\ k\in K}\tilde Z^k=\lim_{k\to\infty,\ k\in K}Z^k=\tilde Z,$$
    $$\lim_{k\to\infty,\ k\in K}\tilde M^k=\lim_{k\to\infty,\ k\in K}M^k=\tilde M,\qquad
    \lim_{k\to\infty,\ k\in K}\tilde N^k=\lim_{k\to\infty,\ k\in K}N^k=\tilde N.$$

    Furthermore, we have by (3.7) and (3.8) that

    $$\tilde X=\tilde Y=\tilde Z. \qquad (4.13)$$

    By (3.6)–(3.8), we have

    $$A^T(A\tilde X^k+\tilde X^kB-C)+(A\tilde X^k+\tilde X^kB-C)B^T-\tilde M^k-\tilde N^k=0.$$

    So we have

    $$A^T(A\tilde X+\tilde XB-C)+(A\tilde X+\tilde XB-C)B^T-\tilde M-\tilde N=0. \qquad (4.14)$$

    Letting $k\to\infty$ with $k\in K$, we have by (3.9), (3.10) and (4.13) that

    $$\tilde X=P_{\Omega_1}(\tilde X-\tilde M/\alpha),\qquad \tilde X=P_{\Omega_2}(\tilde X-\tilde N/\alpha). \qquad (4.15)$$

    Noting that $\alpha>0$, we have by (4.15), Lemma 2.3 and Lemma 2.4 that

    $$\langle \tilde M_+,\ \tilde X-L\rangle=0,\qquad \tilde X-L\ge 0, \qquad (4.16)$$
    $$\langle (-\tilde M)_+,\ U-\tilde X\rangle=0,\qquad U-\tilde X\ge 0, \qquad (4.17)$$

    and

    $$\langle \tilde N+\tilde N^T,\ \tilde X-\varepsilon I_n\rangle=0,\qquad \tilde N+\tilde N^T\in SR^{n\times n}_{\ge 0},\qquad \tilde X-\varepsilon I_n\in SR^{n\times n}_{\ge 0}. \qquad (4.18)$$

    Let

    $$\tilde\Lambda_1=(\tilde M)_+,\qquad \tilde\Lambda_2=(-\tilde M)_+,\qquad \tilde\Lambda_3=\tilde N.$$

    Then we have by (4.14) and (4.16)–(4.18) that

    $$\begin{cases}
    A^T(A\tilde X+\tilde XB-C)+(A\tilde X+\tilde XB-C)B^T-\tilde\Lambda_1+\tilde\Lambda_2-\tilde\Lambda_3=0,\\
    \langle\tilde\Lambda_1,\tilde X-L\rangle=0,\quad \tilde X-L\ge 0,\quad \tilde\Lambda_1\ge 0,\\
    \langle\tilde\Lambda_2,U-\tilde X\rangle=0,\quad U-\tilde X\ge 0,\quad \tilde\Lambda_2\ge 0,\\
    \langle\tilde\Lambda_3+\tilde\Lambda_3^T,\tilde X-\varepsilon I_n\rangle=0,\quad \tilde X-\varepsilon I_n\in SR^{n\times n}_{\ge 0},\quad \tilde\Lambda_3+\tilde\Lambda_3^T\in SR^{n\times n}_{\ge 0}.
    \end{cases} \qquad (4.19)$$

    Hence, we have by Theorem 2.1 that $\tilde X$ is a solution of problem (1.2).

    Theorem 4.2. Let the sequence $\{\tilde W^k\}$ be generated by Algorithm 3.2. For an integer $t>0$, let

    $$\tilde W_t=\frac{1}{t+1}\sum_{k=0}^{t}\tilde W^k. \qquad (4.20)$$

    Then $\frac{1}{t+1}\sum_{k=0}^{t}(\tilde X^k,\tilde Y^k,\tilde Z^k,\tilde M^k,\tilde N^k)\in\Omega$ and the inequality

    $$\langle \tilde W_t-W,\ G(W)\rangle\ \le\ \frac{1}{2\gamma(t+1)}\|V-V^0\|_Q^2 \qquad (4.21)$$

    holds for any $(X,Y,Z,M,N)\in\Omega$.

    Proof. First, for any integer $t>0$, we have $(\tilde X^k,\tilde Y^k,\tilde Z^k,\tilde M^k,\tilde N^k)\in\Omega$ for $k=0,1,2,\dots,t$. Since $\frac{1}{t+1}\sum_{k=0}^{t}\tilde W^k$ can be viewed as a convex combination of the $\tilde W^k$'s, we obtain

    $$\frac{1}{t+1}\sum_{k=0}^{t}(\tilde X^k,\tilde Y^k,\tilde Z^k,\tilde M^k,\tilde N^k)\in\Omega.$$

    Second, since $\gamma\in(0,2)$, it follows from Lemma 4.3 that

    $$\langle W-\tilde W^k,\ G(\tilde W^k)\rangle+\frac{1}{2\gamma}\big(\|V-V^k\|_Q^2-\|V-V^{k+1}\|_Q^2\big)\ \ge\ 0,\quad \forall (X,Y,Z,M,N)\in\Omega. \qquad (4.22)$$

    By combining the monotonicity of $G(W)$ with the inequality (4.22), we have

    $$\langle W-\tilde W^k,\ G(W)\rangle+\frac{1}{2\gamma}\big(\|V-V^k\|_Q^2-\|V-V^{k+1}\|_Q^2\big)\ \ge\ 0,\quad \forall (X,Y,Z,M,N)\in\Omega.$$

    Summing the above inequality over $k=0,1,\dots,t$, we derive that

    $$\Big\langle (t+1)W-\sum_{k=0}^{t}\tilde W^k,\ G(W)\Big\rangle+\frac{1}{2\gamma}\big(\|V-V^0\|_Q^2-\|V-V^{t+1}\|_Q^2\big)\ \ge\ 0,\quad \forall (X,Y,Z,M,N)\in\Omega.$$

    By dropping the minus term, we have

    $$\Big\langle (t+1)W-\sum_{k=0}^{t}\tilde W^k,\ G(W)\Big\rangle+\frac{1}{2\gamma}\|V-V^0\|_Q^2\ \ge\ 0,\quad \forall (X,Y,Z,M,N)\in\Omega,$$

    which is equivalent to

    $$\Big\langle \frac{1}{t+1}\sum_{k=0}^{t}\tilde W^k-W,\ G(W)\Big\rangle\ \le\ \frac{1}{2\gamma(t+1)}\|V-V^0\|_Q^2,\quad \forall (X,Y,Z,M,N)\in\Omega.$$

    The proof is completed.

    Note that problem (3.1) is equivalent to finding $(X^*,Y^*,Z^*,M^*,N^*)\in\Omega$ such that the following inequality

    $$\langle W-W^*,\ G(W)\rangle\ \ge\ 0 \qquad (4.23)$$

    holds for any $(X,Y,Z,M,N)\in\Omega$. Theorem 4.2 means that, for any initial matrices $Y^0,Z^0,M^0,N^0\in\mathbb{R}^{n\times n}$, the point $\tilde W_t$ defined in (4.20) satisfies

    $$\langle \tilde W_t-W,\ G(W)\rangle\ \le\ \frac{\|V-V^0\|_Q^2}{2\gamma(t+1)},$$

    which means that the point $\tilde W_t$ is an approximate solution of (4.23) with accuracy $O(1/t)$. That is, the convergence rate $O(1/t)$ of Algorithm 3.2 is established in an ergodic sense.

    In this section, we present some numerical examples to illustrate the convergence of MSADMM for the constrained least squares problem (1.2). All tests were performed in Matlab 7 on a 64-bit Windows 7 operating system. In all tests, the constant $\varepsilon=0.1$, the matrix $L$ has all elements equal to 1 and $U$ has all elements equal to 3. The matrices $A,B$ and $C$ are randomly generated, i.e., generated in Matlab style as A=randn(n,n), B=randn(n,n), C=randn(n,n). In all algorithms, the initial matrices are chosen as the null matrices. The maximum numbers of inner and outer iterations are restricted to 5000. The error tolerances are $\varepsilon_{out}=\varepsilon_{in}=10^{-9}$ in Algorithms 3.1 and 3.2. The computational methods for the projections $P_{\Omega_i}(X)\ (i=1,2)$ are as follows [38].

    $$P_{\Omega_1}(X)_{ij}=\begin{cases}
    X_{ij}, & \text{if } L_{ij}\le X_{ij}\le U_{ij},\\
    U_{ij}, & \text{if } X_{ij}>U_{ij},\\
    L_{ij}, & \text{if } X_{ij}<L_{ij},
    \end{cases}\qquad
    P_{\Omega_2}(X)=W\,\mathrm{diag}(d_1,d_2,\dots,d_n)W^T,$$

    where

    $$d_i=\begin{cases}
    \lambda_i\Big(\dfrac{X+X^T}{2}\Big), & \text{if } \lambda_i\Big(\dfrac{X+X^T}{2}\Big)\ge\varepsilon,\\
    \varepsilon, & \text{if } \lambda_i\Big(\dfrac{X+X^T}{2}\Big)<\varepsilon,
    \end{cases}$$

    and $W$ is such that $\frac{X+X^T}{2}=W\Delta W^T$ is a spectral decomposition, i.e., $W^TW=I$ and $\Delta=\mathrm{diag}\big(\lambda_1\big(\frac{X+X^T}{2}\big),\lambda_2\big(\frac{X+X^T}{2}\big),\dots,\lambda_n\big(\frac{X+X^T}{2}\big)\big)$. We use the LSQR algorithm described in [34], with necessary modifications, to solve the subproblems (3.5) in Algorithm 3.1, (3.6) in Algorithm 3.2 and (6.1) in Algorithm 6.2.
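    In Matlab, these two projections can be coded directly; a minimal sketch follows (each routine would live in its own file or as a local function; the names are illustrative).

    function P = proj_box(X, L, U)
    % P_{Omega_1}: entrywise projection onto {X : L <= X <= U}.
    P = min(max(X, L), U);
    end

    function P = proj_eigfloor(X, eps0)
    % P_{Omega_2}: symmetrize, then lift eigenvalues below eps0 up to eps0.
    S = (X + X')/2;
    [W, D] = eig(S);
    P = W*diag(max(diag(D), eps0))*W';
    end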

    The LSQR algorithm is an effective method for solving a consistent linear matrix equation or the least squares problem associated with an inconsistent linear matrix equation. Using this iterative algorithm, for any initial matrix, a solution can be obtained within finitely many iteration steps if exact arithmetic is used. In addition, a solution with minimum Frobenius norm can be obtained by choosing a special kind of initial matrix, and a solution nearest to a given matrix in the Frobenius norm can be obtained by first finding the minimum Frobenius norm solution of a new consistent matrix equation. The LSQR algorithm to solve the subproblems (3.5), (3.6) and (6.1) can be described as follows.

    Algorithm 5.1. LSQR algorithm to solve subproblems (3.5) and (3.6).
    Step 1. Input matrices $A,B,C,Y^k,Z^k,M^k$ and $N^k$, the penalty parameter $\alpha>0$ and the error tolerance $\varepsilon_{in}$. Compute
    $$\begin{aligned}
    &\eta_1=\Big(\|C\|^2+\alpha\big\|Y^k+\tfrac{M^k}{\alpha}\big\|^2+\alpha\big\|Z^k+\tfrac{N^k}{\alpha}\big\|^2\Big)^{1/2},\quad
    U_1^{(1)}=\frac{C}{\eta_1},\quad
    U_1^{(2)}=\frac{\alpha Y^k+M^k}{\alpha\eta_1},\quad
    U_1^{(3)}=\frac{\alpha Z^k+N^k}{\alpha\eta_1},\\
    &\xi_1=\|A^TU_1^{(1)}+U_1^{(1)}B^T+\alpha U_1^{(2)}+\alpha U_1^{(3)}\|,\quad
    \Gamma_1=\frac{A^TU_1^{(1)}+U_1^{(1)}B^T+\alpha U_1^{(2)}+\alpha U_1^{(3)}}{\xi_1},\quad
    \Phi_1=\Gamma_1,\quad \bar\phi_1=\eta_1,\quad \bar\rho_1=\xi_1.
    \end{aligned}$$
    Let $i:=1$;
    Step 2. Compute
    $$\begin{aligned}
    &\eta_{i+1}=\Big(\|A\Gamma_i+\Gamma_iB-\xi_iU_i^{(1)}\|^2+\alpha\|\Gamma_i-\xi_iU_i^{(2)}\|^2+\alpha\|\Gamma_i-\xi_iU_i^{(3)}\|^2\Big)^{1/2},\\
    &U_{i+1}^{(1)}=\frac{A\Gamma_i+\Gamma_iB-\xi_iU_i^{(1)}}{\eta_{i+1}},\quad
    U_{i+1}^{(2)}=\frac{\Gamma_i-\xi_iU_i^{(2)}}{\eta_{i+1}},\quad
    U_{i+1}^{(3)}=\frac{\Gamma_i-\xi_iU_i^{(3)}}{\eta_{i+1}},\\
    &\xi_{i+1}=\|A^TU_{i+1}^{(1)}+U_{i+1}^{(1)}B^T+\alpha U_{i+1}^{(2)}+\alpha U_{i+1}^{(3)}-\eta_{i+1}\Gamma_i\|,\quad
    \Gamma_{i+1}=\frac{A^TU_{i+1}^{(1)}+U_{i+1}^{(1)}B^T+\alpha U_{i+1}^{(2)}+\alpha U_{i+1}^{(3)}-\eta_{i+1}\Gamma_i}{\xi_{i+1}},\\
    &\rho_i=(\bar\rho_i^2+\eta_{i+1}^2)^{1/2},\quad
    c_i=\frac{\bar\rho_i}{\rho_i},\quad
    s_i=\frac{\eta_{i+1}}{\rho_i},\quad
    \theta_{i+1}=s_i\xi_{i+1},\quad
    \bar\rho_{i+1}=c_i\xi_{i+1},\quad
    \phi_i=c_i\bar\phi_i,\quad
    \bar\phi_{i+1}=s_i\bar\phi_i,\\
    &X_{i+1}=X_i+\frac{\phi_i}{\rho_i}\Phi_i,\quad
    \Phi_{i+1}=\Gamma_{i+1}-\frac{\theta_{i+1}}{\rho_i}\Phi_i;
    \end{aligned}$$
    Step 3. If $\|X_{i+1}-X_i\|<\varepsilon_{in}$, terminate the execution of the algorithm (in this case, $X_{i+1}$ is an approximate solution of problem (3.5) or (3.6));
    Step 4. Let $i:=i+1$ and go to Step 2.


    Table 1 reports the average computing time (CPU) of 10 tests of Algorithm 3.1 (ADMM) and Algorithm 3.2 (MSADMM) with penalty parameter $\alpha=n$. Figures 1–4 report the computing time of ADMM for the same problem size and different penalty parameters $\alpha$. Figures 5–8 report the computing time of MSADMM for the same problem size and different correction factors $\gamma$. Figure 9 reports the computing time curves of Algorithm 3.1 (ADMM) and Algorithm 3.2 (MSADMM) for different matrix sizes.

    Table 1.  Numerical comparisons between MSADMM and ADMM (average CPU time in seconds, α = n).
    n      ADMM      MSADMM(γ=0.8)      MSADMM(γ=1.0)      MSADMM(γ=1.5)
    20 0.0846 0.1254 0.0865 0.0538
    40 0.3935 0.4883 0.3836 0.2676
    80 1.3370 1.6930 1.3726 0.8965
    100 2.4766 3.1514 2.5015 1.6488
    150 5.8780 7.3482 5.8742 3.9154
    200 11.6398 14.6023 11.6162 7.6576
    300 35.1929 41.4151 32.9488 21.4898
    400 83.7386 108.9807 86.0472 56.8144
    500 147.5759 183.4758 144.2038 92.3137
    600 242.0408 302.6395 241.6225 157.8042

    Figure 1.  Computing times (seconds) vs. the values of α for n = 40.
    Figure 2.  Computing times (seconds) vs. the values of α for n = 60.
    Figure 3.  Computing times (seconds) vs. the values of α for n = 80.
    Figure 4.  Computing times (seconds) vs. the values of α for n = 100.
    Figure 5.  Computing times (seconds) vs. the values of γ for α = n = 40.
    Figure 6.  Computing times (seconds) vs. the values of γ for α = n = 60.
    Figure 7.  Computing times (seconds) vs. the values of γ for α = n = 80.
    Figure 8.  Computing times (seconds) vs. the values of γ for α = n = 100.
    Figure 9.  Numerical comparisons between ADMM and MSADMM.

    Based on the tests reported in Table 1 and Figures 1–9, and many other unreported tests showing similar patterns, we have the following results:

    Remark 5.1. The convergence speed of ADMM is directly related to the penalty parameter α. In general, the penalty parameter α in this paper can be chosen as α ≈ n (see Figures 1–4). However, how to select the best penalty parameter α is an important problem that should be studied in the future.

    Remark 5.2. The convergence speed of MSADMM is directly related to the penalty parameter α and the correction factor γ. The selection of the penalty parameter α is similar to that for ADMM, since MSADMM is a direct extension of ADMM. For the correction factor γ, as shown in Table 1 and Figures 5–8, aggressive values such as γ ≈ 1.5 are often preferred. However, how to select the best correction factor γ is also an important problem that should be studied in the future.

    Remark 5.3. As shown in Table 1 and Figure 9, MSADMM, with the correction factor γ ≈ 1.5 and the penalty parameter α chosen the same as for ADMM, is more effective than ADMM.

    Anderson acceleration, or Anderson mixing, was initially developed in 1965 by Donald Anderson [35] as an iterative procedure to solve some nonlinear integral equations arising in physics. It turns out that Anderson acceleration is very efficient for solving other types of nonlinear equations as well; see [36,37,38] and the literature cited therein. When Anderson acceleration is applied to the equation $f(x)=g(x)-x=0$, the iterative pattern can be described as the following Algorithm 6.1.

    Algorithm 6.1. Anderson accelerated method to solve the equation $f(x)=0$.
    Given $x_0\in\mathbb{R}^n$ and an integer $m\ge 1$, this algorithm produces a sequence $\{x_k\}$ of iterates intended to converge to a fixed point of the function $g:\mathbb{R}^n\to\mathbb{R}^n$.
    Step 1. Compute $x_1=g(x_0)$;
    Step 2. For $k=1,2,\dots$ until convergence;
    Step 3. Let $m_k=\min(m,k)$;
    Step 4. Compute $\lambda^k=(\lambda_1,\lambda_2,\dots,\lambda_{m_k})^T\in\mathbb{R}^{m_k}$ that solves
    $$\min_{\lambda\in\mathbb{R}^{m_k}}\Big\|f(x_k)-\sum_{j=1}^{m_k}\lambda_j\big(f(x_{k-m_k+j})-f(x_{k-m_k+j-1})\big)\Big\|_2^2;$$
    Step 5. Set
    $$x_{k+1}=g(x_k)+\sum_{j=1}^{m_k-1}\lambda_j\big[g(x_{k-m_k+j+1})-g(x_{k-m_k+j})\big].$$
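    A compact Matlab sketch of Anderson acceleration for a generic map g acting on column vectors is given below. It follows the standard "difference" form of the method; sign and index conventions differ slightly across references, including the listing above, so this is a sketch rather than a literal transcription.

    function x = anderson_fp(g, x0, m, tol, maxit)
    % Anderson acceleration for the fixed-point problem x = g(x) (a sketch).
    gs = g(x0);  fs = gs - x0;      % stored columns of g-values and residuals
    x  = gs;                        % x_1 = g(x_0)
    for k = 1:maxit
        gk = g(x);  fk = gk - x;
        if norm(fk) < tol, break; end
        gs = [gs, gk];  fs = [fs, fk];
        if size(fs, 2) > m + 1      % keep at most m differences in memory
            gs(:, 1) = [];  fs(:, 1) = [];
        end
        DF = diff(fs, 1, 2);        % residual differences (Step 4)
        DG = diff(gs, 1, 2);        % g-value differences
        lambda = DF \ fk;           % small least squares problem for lambda
        x = gk - DG*lambda;         % accelerated iterate (Step 5)
    end
    end

    Wrapping the ADMM map (b)–(e) of Algorithm 3.1, after stacking Y, Z, M, N into one long vector, as such a map g is essentially what Algorithm 6.2 below does.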


    In this section we define the matrix function

    $$f(Y,Z,M,N)=g(Y,Z,M,N)-(Y,Z,M,N),$$

    where $g(Y^k,Z^k,M^k,N^k)=(Y^{k+1},Z^{k+1},M^{k+1},N^{k+1})$ with $Y^{k+1},Z^{k+1},M^{k+1}$ and $N^{k+1}$ computed by (b)–(e) in Algorithm 3.1, and let $f_k=f(Y^k,Z^k,M^k,N^k)$, $g_k=g(Y^k,Z^k,M^k,N^k)$. Then Algorithm 3.1 with Anderson acceleration can be described as the following Algorithm 6.2.

    Algorithm 6.2. Algorithm 3.1 with Anderson acceleration to solve problem (3.1).
    Step 1. Input matrices $A,B,C,L$ and $U$. Input the constant $\varepsilon>0$, the error tolerance $\varepsilon_{out}>0$, the penalty parameter $\alpha>0$ and an integer $m\ge 1$. Choose initial matrices $Y^0,Z^0,M^0,N^0\in\mathbb{R}^{n\times n}$. Let $k:=0$;
    Step 2. Compute
    $$X^{k+1}=\arg\min_{X\in\mathbb{R}^{n\times n}}L_\alpha(X,Y^k,Z^k,M^k,N^k); \qquad (6.1)$$
    Step 3. Let $m_k=\min(m,k)$;
    Step 4. Compute $\lambda^k=(\lambda_1,\lambda_2,\dots,\lambda_{m_k})^T\in\mathbb{R}^{m_k}$ that solves
    $$\min_{\lambda\in\mathbb{R}^{m_k}}\Big\|f_k-\sum_{j=1}^{m_k}\lambda_j\big(f_{k-m_k+j}-f_{k-m_k+j-1}\big)\Big\|^2;$$
    Step 5. Set
    $$(Y^{k+1},Z^{k+1},M^{k+1},N^{k+1})=g_k+\sum_{j=1}^{m_k-1}\lambda_j\big(g_{k-m_k+j+1}-g_{k-m_k+j}\big);$$
    Step 6. If $\big(\|Y^{k+1}-Y^k\|^2+\|Z^{k+1}-Z^k\|^2+\|M^{k+1}-M^k\|^2+\|N^{k+1}-N^k\|^2\big)^{1/2}<\varepsilon_{out}$, stop. In this case, $X^{k+1}$ is an approximate solution of problem (3.1);
    Step 7. Let $k:=k+1$ and go to Step 2.


    Algorithm 6.2 (ACADMM) is an m-step iterative method, that is, the current iterate is obtained from a linear combination of the previous m steps. Furthermore, the combination coefficients of the linear combination are recomputed at each iteration step. Compared to ACADMM, Algorithm 3.2 (MSADMM) is a two-step iterative method, and the combination coefficients of the linear combination are fixed at each iteration step. The convergence speed of ACADMM is directly related to the penalty parameter α and the backtracking step m. The selection of the penalty parameter α is the same as for ADMM, since ACADMM's iterates are corrected from ADMM's iterates. For the backtracking step m, as shown in Table 2 (the average computing time of 10 tests) and Figure 10, aggressive values such as m=10 are often preferred (in this case, ACADMM is more efficient than MSADMM). However, how to select the best backtracking step m is an important problem which should be studied in the near future.

    Table 2.  Numerical comparisons between MSADMM and ACADMM (average CPU time in seconds, α = n).
    n      MSADMM(γ=1.5)      ACADMM(m=2)      ACADMM(m=10)      ACADMM(m=20)
    20 0.0735 0.0991 0.0521 0.0647
    40 0.2595 0.3485 0.2186 0.2660
    80 1.1866 1.1319 1.0232 1.1069
    100 1.7750 2.2081 1.6003 1.7660
    150 3.8474 5.2760 3.5700 3.9587
    200 7.6133 10.8719 6.9807 7.7318
    300 21.2970 28.5379 18.8233 19.9526
    400 56.2133 74.3192 44.8087 46.5275
    500 98.0542 130.2326 75.6044 80.1480
    600 157.7573 208.1124 125.2842 133.4549

    Figure 10.  Numerical comparisons between MSADMM and ACADMM.

    In this paper, the multiple constrained least squares solution of the Sylvester equation AX+XB=C is discussed. Necessary and sufficient conditions for the existence of solutions to the considered problem are given (Theorem 2.1). MSADMM for the considered problem is proposed, and some convergence results for the proposed algorithm are proved (Theorems 4.1 and 4.2). Problems which should be studied in the near future are listed. Numerical experiments show that MSADMM with a suitable correction factor γ is more effective than ADMM (see Table 1 and Figure 9), and that ACADMM with a suitable backtracking step m is the most effective of ADMM, MSADMM and ACADMM (see Table 2 and Figure 10).

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by National Natural Science Foundation of China (grant number 11961012) and Special Research Project for Guangxi Young Innovative Talents (grant number AD20297063).

    The authors declare no competing interests.


