
Multi-fractional-differential operators for a thermo-elastic magnetic response in an unbounded solid with a spherical hole via the DPL model

  • Received: 23 October 2022 Revised: 20 November 2022 Accepted: 20 November 2022 Published: 20 December 2022
  • MSC : 35B40, 35Q79, 35J55, 45F15, 73B30

  • The current research aims to investigate thermodynamic responses of thermal media based on a modified mathematical model in the field of thermoelasticity. In this context, a new model with a fractional time derivative that includes the Caputo-Fabrizio and Atangana-Baleanu fractional differential operators is presented within the framework of the two-phase delay model. The proposed mathematical model is employed to examine the problem of an unbounded material with a spherical hole experiencing a reduced moving heat flow on its inner surface. The problem is solved analytically within the modified space utilizing the Laplace transform as the solution mechanism. A numerical inversion of the Laplace transform was performed, and the results for the studied distributions are presented graphically and in tables. In the tables, specific comparisons are introduced to evaluate the influences of different fractional operators and thermal properties on the response of all the fields examined.

    Citation: Osama Moaaz, Ahmed E. Abouelregal. Multi-fractional-differential operators for a thermo-elastic magnetic response in an unbounded solid with a spherical hole via the DPL model[J]. AIMS Mathematics, 2023, 8(3): 5588-5615. doi: 10.3934/math.2023282




    The Sylvester equation

    $$AX+XB=C \qquad (1.1)$$

    appears frequently in many areas of applied mathematics. We refer readers to the elegant survey by Bhatia and Rosenthal [1] and the references therein for the history of the Sylvester equation and many interesting and important theoretical results. The Sylvester equation is important in a number of applications, such as matrix eigenvalue decompositions [2,3], control theory [3,4,5], model reduction [6,7,8,9], mathematical physics, where it is used to construct exact solutions of nonlinear integrable equations [10], feature problems of slice semi-regular functions [11] and the numerical solution of matrix differential Riccati equations [12,13,14]. There are several numerical algorithms to compute the solution of the Sylvester equation. The standard ones are the Bartels-Stewart algorithm [15] and the Hessenberg-Schur method first described by Enright [14], but more often attributed to Golub, Nash and Van Loan [16]. Other computationally efficient approaches for the case that both A and B are stable, i.e., both A and B have all their eigenvalues in the open left half plane, are the sign function method [17], the Smith method [18] and ADI iteration methods [19,20,21,22]. All these methods are efficient when the dense matrices A and B are of small size.
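For small dense coefficient matrices, the equation can also be solved directly by vectorization, since vec(AX+XB) = (I⊗A + B^T⊗I)vec(X). The following NumPy sketch of this naive approach is ours, for illustration only: it costs O(n^6) work, so it is sensible only for tiny n, whereas Bartels-Stewart type solvers (e.g., SciPy's scipy.linalg.solve_sylvester) cost O(n^3).

```python
import numpy as np

def sylvester_kron(A, B, C):
    """Naive Sylvester solve AX + XB = C via Kronecker vectorization.

    Uses vec(AX + XB) = (I kron A + B^T kron I) vec(X) with column-major vec.
    Only sensible for tiny n; Bartels-Stewart is the practical choice.
    """
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.reshape(-1, order="F"))  # column-stacked vec
    return x.reshape(n, n, order="F")

rng = np.random.default_rng(0)
n = 5
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))
X = sylvester_kron(A, B, C)
residual = np.linalg.norm(A @ X + X @ B - C)
```

For random A and B the spectra of A and −B are almost surely disjoint, so the Kronecker matrix is invertible and the residual is at machine-precision level.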

    The recent interest is directed more towards the large and sparse matrices A and B, and C with low rank. For the dense A and B, the approach based on the sign function method is suggested in [23] that exploits the low rank structure of C. This approach is further used in [24] in order to solve the large scale Sylvester equation with sparse A and B, i.e., the matrices A and B can be represented by O(nlog(n)) data. Problems for the sensitivity of the solution of the Sylvester equation are also widely studied. There are several books that contain the results for these problems [25,26,27].

    In this paper, we focus our attention on the multiple constrained least squares solution of the Sylvester equation, that is, the following multiple constrained least squares problem:

    $$\min_{X^{T}=X,\ L\le X\le U,\ \lambda_{\min}(X)\ge\varepsilon>0} f(X)=\frac12\|AX+XB-C\|^{2} \qquad (1.2)$$

    where A, B, C, L and U are given n×n real matrices, X is an n×n real symmetric matrix which we wish to find, λ_min(X) represents the smallest eigenvalue of the symmetric matrix X, and ε is a given positive constant. The inequality X ≤ Y, for any two real matrices, means that X_ij ≤ Y_ij, where X_ij and Y_ij denote the ijth entries of the matrices X and Y, respectively.
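In NumPy terms, the objective and the three constraint groups of (1.2) read as follows; the test matrices, bounds, tolerance and helper names here are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def objective(A, B, C, X):
    # f(X) = (1/2) ||AX + XB - C||_F^2
    R = A @ X + X @ B - C
    return 0.5 * np.sum(R * R)

def is_feasible(X, L, U, eps, tol=1e-10):
    # the three constraint groups of (1.2): symmetry, bounds, eigenvalue floor
    symmetric = bool(np.allclose(X, X.T, atol=tol))
    in_box = bool(np.all(X >= L - tol) and np.all(X <= U + tol))
    eig_ok = bool(np.linalg.eigvalsh((X + X.T) / 2).min() >= eps - tol)
    return symmetric and in_box and eig_ok

rng = np.random.default_rng(1)
n = 4
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))
L = np.full((n, n), 1.0)
U = np.full((n, n), 3.0)
eps = 0.1

X_ok = np.full((n, n), 1.0) + np.eye(n)  # entries in [1, 2], eigenvalues {1, n+1}
X_bad = np.full((n, n), 2.0)             # within bounds, but lambda_min = 0
```

The matrix X_ok shows that the feasible set is nonempty for these bounds, while X_bad satisfies the box constraint yet violates the eigenvalue floor.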

    Multiple constrained least squares estimations of matrices are widely used in mathematical economics, statistical data analysis, image reconstruction, recommendation problems and so on. They differ from ordinary least squares problems in that the estimated matrices are usually required to be symmetric positive definite and bounded and, sometimes, to have some special construction patterns. For example, in the dynamic equilibrium model of economy [28], one needs to estimate an aggregate demand function derived from a second order analysis of the utility function of individuals. The formulation of this problem is to find the least squares solution of the matrix equation AX=B, where A and B are given, and the fitting matrix X is symmetric and bounded with smallest eigenvalue no less than a specified positive number, since, in the neighborhood of equilibrium, the utility function is approximated by a strictly concave quadratic determined by its Hessian matrix. Other examples, discussed in [29,30], are respectively to find a symmetric positive definite patterned matrix closest to a sample covariance matrix and to find a symmetric, diagonally dominant matrix with positive diagonal entries closest to a given matrix. Based on the above analysis, we have a strong motivation to study the multiple constrained least squares problem (1.2).

    In this paper, we first transform the multiple constrained least squares problem (1.2) into an equivalent constrained optimization problem. Then, we give the necessary and sufficient conditions for the existence of a solution to the equivalent constrained optimization problem. Noting that the alternating direction method of multipliers (ADMM) is a one-step iterative method, we propose a multi-step alternating direction method of multipliers (MSADMM) for the multiple constrained least squares problem (1.2) and analyze the global convergence of the proposed algorithm. We give some numerical examples to illustrate the effectiveness of the proposed algorithm for problem (1.2) and list some problems that should be studied in the near future. We also give some numerical comparisons between MSADMM, ADMM and ADMM with Anderson acceleration (ACADMM).

    Throughout this paper, R^{m×n}, SR^{n×n} and SR^{n×n}_0 denote the set of m×n real matrices, the set of n×n symmetric matrices and the set of n×n symmetric positive semidefinite matrices, respectively. I_n stands for the n×n identity matrix. A_+ denotes the matrix with ijth entry equal to max{0, A_ij}. The inner product in the space R^{m×n} is defined as ⟨A,B⟩ = tr(A^T B) = Σ_ij A_ij B_ij for all A, B ∈ R^{m×n}, and the associated norm is the Frobenius norm, denoted by ‖A‖. P_Ω(X) denotes the projection of the matrix X onto the constrained matrix set Ω, that is, P_Ω(X) = argmin_{Z∈Ω} ‖Z−X‖.
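The identity ⟨A,B⟩ = tr(A^T B) = Σ_ij A_ij B_ij and the induced Frobenius norm are easy to check numerically; the matrix sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))

inner_trace = np.trace(A.T @ B)   # <A, B> = tr(A^T B)
inner_sum = np.sum(A * B)         # = sum_ij A_ij B_ij
fro = np.sqrt(np.sum(A * A))      # ||A|| = sqrt(<A, A>)
```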

    In this section, we give an existence theorem for a solution of the multiple constrained least squares problem (1.2) and some theoretical results for the optimization problems which are useful for discussions in the next sections.

    Theorem 2.1. The matrix ˜X is a solution of the multiple constrained least squares problem (1.2) if and only if there exist matrices ˜Λ1,˜Λ2 and ˜Λ3 such that the following conditions (2.1)–(2.4) are satisfied.

    $$A^{T}(A\tilde{X}+\tilde{X}B-C)+(A\tilde{X}+\tilde{X}B-C)B^{T}-\tilde{\Lambda}_1+\tilde{\Lambda}_2-\tilde{\Lambda}_3=0, \qquad (2.1)$$
    $$\langle\tilde{\Lambda}_1,\tilde{X}-L\rangle=0,\quad \tilde{X}-L\ge 0,\quad \tilde{\Lambda}_1\ge 0, \qquad (2.2)$$
    $$\langle\tilde{\Lambda}_2,U-\tilde{X}\rangle=0,\quad U-\tilde{X}\ge 0,\quad \tilde{\Lambda}_2\ge 0, \qquad (2.3)$$
    $$\langle\tilde{\Lambda}_3+\tilde{\Lambda}_3^{T},\tilde{X}-\varepsilon I_n\rangle=0,\quad \tilde{X}-\varepsilon I_n\in SR^{n\times n}_{0},\quad \tilde{\Lambda}_3+\tilde{\Lambda}_3^{T}\in SR^{n\times n}_{0}. \qquad (2.4)$$

    Proof. Obviously, the multiple constrained least squares problem (1.2) can be rewritten as

    $$\min_{X\in SR^{n\times n}} F(X)=\frac12\|AX+XB-C\|^{2}\quad \text{s.t. } X-L\ge 0,\ U-X\ge 0,\ X-\varepsilon I_n\in SR^{n\times n}_{0}. \qquad (2.5)$$

    Then, if ˜X is a solution to the constrained optimization problem (2.5), ˜X certainly satisfies KKT conditions of the constrained optimization problem (2.5), and hence of the multiple constrained least squares problem (1.2). That is, there exist matrices ˜Λ1,˜Λ2 and ˜Λ3 such that conditions (2.1)–(2.4) are satisfied.

    Conversely, assume that there exist matrices ˜Λ1,˜Λ2 and ˜Λ3 such that conditions (2.1)–(2.4) are satisfied. Let

    $$\bar{F}(X)=F(X)-\langle\tilde{\Lambda}_1,X-L\rangle-\langle\tilde{\Lambda}_2,U-X\rangle-\left\langle\frac{\tilde{\Lambda}_3+\tilde{\Lambda}_3^{T}}{2},X-\varepsilon I_n\right\rangle,$$

    then, for any matrix W ∈ SR^{n×n}, we have

    $$\begin{aligned}
    \bar{F}(\tilde{X}+W)&=\frac12\|A(\tilde{X}+W)+(\tilde{X}+W)B-C\|^{2}-\langle\tilde{\Lambda}_1,\tilde{X}+W-L\rangle-\langle\tilde{\Lambda}_2,U-\tilde{X}-W\rangle-\left\langle\frac{\tilde{\Lambda}_3+\tilde{\Lambda}_3^{T}}{2},\tilde{X}+W-\varepsilon I_n\right\rangle\\
    &=\bar{F}(\tilde{X})+\frac12\|AW+WB\|^{2}+\langle AW+WB,\,A\tilde{X}+\tilde{X}B-C\rangle-\langle\tilde{\Lambda}_1,W\rangle+\langle\tilde{\Lambda}_2,W\rangle-\left\langle\frac{\tilde{\Lambda}_3+\tilde{\Lambda}_3^{T}}{2},W\right\rangle\\
    &=\bar{F}(\tilde{X})+\frac12\|AW+WB\|^{2}+\left\langle W,\,A^{T}(A\tilde{X}+\tilde{X}B-C)+(A\tilde{X}+\tilde{X}B-C)B^{T}-\tilde{\Lambda}_1+\tilde{\Lambda}_2-\tilde{\Lambda}_3\right\rangle\\
    &=\bar{F}(\tilde{X})+\frac12\|AW+WB\|^{2}\ \ge\ \bar{F}(\tilde{X}).
    \end{aligned}$$

    This implies that ˜X is a global minimizer of the function ˉF(X) over X ∈ SR^{n×n}. Since

    $$\langle\tilde{\Lambda}_1,\tilde{X}-L\rangle=0,\quad \langle\tilde{\Lambda}_2,U-\tilde{X}\rangle=0,\quad \langle\tilde{\Lambda}_3+\tilde{\Lambda}_3^{T},\tilde{X}-\varepsilon I_n\rangle=0,$$

    and ˉF(X) ≥ ˉF(˜X) holds for all X ∈ SR^{n×n}, we have

    $$F(X)\ \ge\ F(\tilde{X})+\langle\tilde{\Lambda}_1,X-L\rangle+\langle\tilde{\Lambda}_2,U-X\rangle+\left\langle\frac{\tilde{\Lambda}_3+\tilde{\Lambda}_3^{T}}{2},X-\varepsilon I_n\right\rangle.$$

    Noting that ˜Λ1 ≥ 0, ˜Λ2 ≥ 0 and ˜Λ3+˜Λ3^T ∈ SR^{n×n}_0, it follows that F(X) ≥ F(˜X) holds for all X with X−L ≥ 0, U−X ≥ 0 and X−εI_n ∈ SR^{n×n}_0. Hence, ˜X is a solution of the constrained optimization problem (2.5), that is, ˜X is a solution of the multiple constrained least squares problem (1.2).

    Lemma 2.1. [31] Assume that ˜x is a solution of the optimization problem

    $$\min f(x)\quad \text{s.t.}\quad x\in\Omega,$$

    where f(x) is a continuously differentiable function, Ω is a closed convex set, then

    $$\langle\nabla f(\tilde{x}),\,x-\tilde{x}\rangle\ \ge\ 0,\qquad \forall x\in\Omega.$$

    Lemma 2.2. [31] Assume that (˜x1,˜x2,...,˜xn) is a solution of the optimization problem

    $$\min\ \sum_{i=1}^{n}f_i(x_i)\quad \text{s.t.}\quad \sum_{i=1}^{n}A_ix_i=b,\ \ x_i\in\Omega_i,\ i=1,2,\ldots,n, \qquad (2.6)$$

    where fi(xi)(i=1,2,...,n) are continuously differentiable functions, Ωi(i=1,2,...,n) are closed convex sets, then

    $$\langle\nabla_{x_i}f_i(\tilde{x}_i)-A_i^{T}\tilde{\lambda},\,x_i-\tilde{x}_i\rangle\ \ge\ 0,\qquad \forall x_i\in\Omega_i,\ i=1,2,\ldots,n,$$

    where ˜λ is a solution to the dual problem of (2.6).

    Lemma 2.3. Assume that Ω = {X ∈ R^{n×n} : L ≤ X ≤ U}. Then, for any matrix M ∈ R^{n×n}, if Y = P_Ω(Y−M), we have

    $$\langle (M)_+,\,Y-L\rangle=0,\qquad \langle (-M)_+,\,U-Y\rangle=0.$$

    Proof. Let

    $$\tilde{Z}=\arg\min_{Z\in\Omega}\|Z-(Y-M)\|^{2}$$

    and note that the optimization problem

    $$\min_{Z\in\Omega}\|Z-(Y-M)\|^{2}$$

    is equivalent to the optimization problem

    $$\min_{Z\in R^{n\times n},\ Z-L\ge 0,\ U-Z\ge 0}\|Z-(Y-M)\|^{2}. \qquad (2.7)$$

    Then ˜Z satisfies the KKT conditions for the optimization problem (2.7). That is, there exist matrices ˜Λ1 ≥ 0 and ˜Λ2 ≥ 0 such that

    $$\tilde{Z}-Y+M-\tilde{\Lambda}_1+\tilde{\Lambda}_2=0,\quad \langle\tilde{\Lambda}_1,\tilde{Z}-L\rangle=0,\quad \langle\tilde{\Lambda}_2,U-\tilde{Z}\rangle=0,\quad \tilde{Z}-L\ge 0,\quad U-\tilde{Z}\ge 0.$$

    Since

    $$Y=P_{\Omega}(Y-M)=\arg\min_{Z\in\Omega}\|Z-(Y-M)\|^{2},$$

    we have ˜Z = Y, and therefore

    $$M-\tilde{\Lambda}_1+\tilde{\Lambda}_2=0,\quad \langle\tilde{\Lambda}_1,Y-L\rangle=0,\quad \langle\tilde{\Lambda}_2,U-Y\rangle=0,\quad Y-L\ge 0,\quad U-Y\ge 0.$$

    The above conditions imply that (˜Λ1)_ij(˜Λ2)_ij = 0 when L_ij ≠ U_ij, while (˜Λ1)_ij ≥ 0 and (˜Λ2)_ij ≥ 0 can be selected arbitrarily when L_ij = U_ij. Noting that M = ˜Λ1 − ˜Λ2, the multipliers can be selected as ˜Λ1 = (M)_+ and ˜Λ2 = (−M)_+. Hence, the results hold.

    Lemma 2.4. Assume that Ω = {X ∈ R^{n×n} : X^T = X, λ_min(X) ≥ ε > 0}. Then, for any matrix M ∈ R^{n×n}, if Y = P_Ω(Y−M), we have

    $$\langle M+M^{T},\,Y-\varepsilon I_n\rangle=0,\qquad M+M^{T}\in SR^{n\times n}_{0},\qquad Y-\varepsilon I_n\in SR^{n\times n}_{0}.$$

    Proof. Let

    $$\tilde{Z}=\arg\min_{Z\in\Omega}\|Z-(Y-M)\|^{2}$$

    and note that the optimization problem

    $$\min_{Z\in\Omega}\|Z-(Y-M)\|^{2}$$

    is equivalent to the optimization problem

    $$\min_{Z\in SR^{n\times n},\ Z-\varepsilon I_n\in SR^{n\times n}_{0}}\|Z-(Y-M)\|^{2}. \qquad (2.8)$$

    Then ˜Z satisfies the KKT conditions for the optimization problem (2.8). That is, there exists a matrix ˜Λ ∈ SR^{n×n}_0 such that

    $$\tilde{Z}-Y+M-\tilde{\Lambda}+(\tilde{Z}-Y+M-\tilde{\Lambda})^{T}=0,\quad \langle\tilde{\Lambda},\tilde{Z}-\varepsilon I_n\rangle=0,\quad \tilde{Z}-\varepsilon I_n\in SR^{n\times n}_{0}.$$

    Since

    $$Y=P_{\Omega}(Y-M)=\arg\min_{Z\in\Omega}\|Z-(Y-M)\|^{2},$$

    we have ˜Z = Y, and therefore

    $$M+M^{T}-2\tilde{\Lambda}=0,\quad \langle\tilde{\Lambda},Y-\varepsilon I_n\rangle=0,\quad Y-\varepsilon I_n\in SR^{n\times n}_{0}.$$

    Hence, the results hold.
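Lemmas 2.3 and 2.4 can be checked numerically: for any X0, setting Y = P_Ω(X0) and M = Y − X0 gives Y − M = X0, so the premise Y = P_Ω(Y−M) holds by construction. The sketch below uses the standard clipping and eigenvalue-flooring projections (made explicit in the numerical section later); sizes and bounds are illustrative:

```python
import numpy as np

def proj_box(X, L, U):
    # projection onto {L <= X <= U}: entrywise clipping
    return np.clip(X, L, U)

def proj_spectral(X, eps):
    # projection onto {X = X^T, lambda_min(X) >= eps}: symmetrize,
    # then floor the eigenvalues at eps
    d, W = np.linalg.eigh((X + X.T) / 2)
    return (W * np.maximum(d, eps)) @ W.T

rng = np.random.default_rng(4)
n, eps = 5, 0.1
L = np.full((n, n), -1.0)
U = np.full((n, n), 1.0)
X0 = 3.0 * rng.standard_normal((n, n))

# Lemma 2.3: complementarity of (M)_+ with Y-L and (-M)_+ with U-Y
Y1 = proj_box(X0, L, U)
M1 = Y1 - X0
comp_lower = np.sum(np.maximum(M1, 0) * (Y1 - L))   # <(M)_+, Y - L>
comp_upper = np.sum(np.maximum(-M1, 0) * (U - Y1))  # <(-M)_+, U - Y>

# Lemma 2.4: <M + M^T, Y - eps I> = 0 and M + M^T positive semidefinite
Y2 = proj_spectral(X0, eps)
M2 = Y2 - X0
S = M2 + M2.T
comp_eig = np.sum(S * (Y2 - eps * np.eye(n)))       # <M + M^T, Y - eps I>
```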

    In this section we give a multi-step alternating direction method of multipliers (MSADM) tothe multiple constrained least squares problem (1.2). Obviously, the multiple constrained least squares problem (1.2) is equivalent to the following constrained optimization problem

    $$\begin{aligned}
    \min_{X}\ & F(X)=\frac12\|AX+XB-C\|^{2},\\
    \text{s.t. }& X-Y=0,\quad X-Z=0,\\
    & X\in R^{n\times n},\\
    & Y\in\Omega_1=\{Y\in R^{n\times n}: L\le Y\le U\},\\
    & Z\in\Omega_2=\{Z\in R^{n\times n}: Z^{T}=Z,\ \lambda_{\min}(Z)\ge\varepsilon>0\}.
    \end{aligned} \qquad (3.1)$$

    The Lagrange function, augmented Lagrangian function and dual problem to the constrained optimization problem (3.1) are, respectively,

    $$L(X,Y,Z,M,N)=F(X)-\langle M,X-Y\rangle-\langle N,X-Z\rangle, \qquad (3.2)$$
    $$L_{\alpha}(X,Y,Z,M,N)=F(X)+\frac{\alpha}{2}\|X-Y-M/\alpha\|^{2}+\frac{\alpha}{2}\|X-Z-N/\alpha\|^{2}, \qquad (3.3)$$
    $$\max_{M,N\in R^{n\times n}}\ \inf_{X\in R^{n\times n},\,Y\in\Omega_1,\,Z\in\Omega_2}L(X,Y,Z,M,N), \qquad (3.4)$$

    where M and N are Lagrangian multipliers and α > 0 is a penalty parameter.

    The alternating direction method of multipliers [32,33] to the constrained optimization problem (3.1) can be described as the following Algorithm 3.1.

    Algorithm 3.1. ADMM to solve problem (3.1).
    Step 1. Input matrices A, B, C, L and U. Input the constant ε>0, the error tolerance ε_out>0 and the penalty parameter α>0. Choose initial matrices Y_0, Z_0, M_0, N_0 ∈ R^{n×n}. Let k:=0;
    Step 2. Compute
    $$\begin{aligned}
    \text{(a)}\quad & X_{k+1}=\arg\min_{X\in R^{n\times n}}L_{\alpha}(X,Y_k,Z_k,M_k,N_k),\\
    \text{(b)}\quad & Y_{k+1}=\arg\min_{Y\in\Omega_1}L_{\alpha}(X_{k+1},Y,Z_k,M_k,N_k)=P_{\Omega_1}(X_{k+1}-M_k/\alpha),\\
    \text{(c)}\quad & Z_{k+1}=\arg\min_{Z\in\Omega_2}L_{\alpha}(X_{k+1},Y_{k+1},Z,M_k,N_k)=P_{\Omega_2}(X_{k+1}-N_k/\alpha),\\
    \text{(d)}\quad & M_{k+1}=M_k-\alpha(X_{k+1}-Y_{k+1}),\\
    \text{(e)}\quad & N_{k+1}=N_k-\alpha(X_{k+1}-Z_{k+1});
    \end{aligned} \qquad (3.5)$$
    Step 3. If $(\|Y_{k+1}-Y_k\|^{2}+\|Z_{k+1}-Z_k\|^{2}+\|M_{k+1}-M_k\|^{2}+\|N_{k+1}-N_k\|^{2})^{1/2}<\varepsilon_{out}$, stop. In this case, X_{k+1} is an approximate solution of problem (3.1);
    Step 4. Let k:=k+1 and go to Step 2.
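A compact NumPy sketch of Algorithm 3.1 for small problems follows. One deliberate simplification is ours: the X-subproblem (3.5)(a) is an unconstrained linear least squares problem, and here it is solved exactly through its normal equations in Kronecker form rather than by the LSQR iteration used in the paper; sizes, tolerances and the stopping rule mirror the text but the specific values are illustrative.

```python
import numpy as np

def proj_box(X, L, U):
    # P_{Omega_1}: entrywise clipping to [L, U]
    return np.clip(X, L, U)

def proj_spectral(X, eps):
    # P_{Omega_2}: symmetrize, then lift eigenvalues below eps up to eps
    d, W = np.linalg.eigh((X + X.T) / 2)
    return (W * np.maximum(d, eps)) @ W.T

def admm(A, B, C, L, U, eps, alpha, tol=1e-6, max_iter=5000):
    """Sketch of Algorithm 3.1 (ADMM).  The X-update solves
    (K^T K + 2 alpha I) vec(X) = K^T vec(C) + vec(alpha Y + M + alpha Z + N),
    where K = I kron A + B^T kron I -- exact, but only practical for small n."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n))
    H = K.T @ K + 2.0 * alpha * np.eye(n * n)
    Ktc = K.T @ C.reshape(-1, order="F")
    Y, Z, M, N = (np.zeros((n, n)) for _ in range(4))
    X = np.zeros((n, n))
    for _ in range(max_iter):
        rhs = Ktc + (alpha * Y + M + alpha * Z + N).reshape(-1, order="F")
        X = np.linalg.solve(H, rhs).reshape(n, n, order="F")
        Y1 = proj_box(X - M / alpha, L, U)           # (b)
        Z1 = proj_spectral(X - N / alpha, eps)       # (c)
        M1 = M - alpha * (X - Y1)                    # (d)
        N1 = N - alpha * (X - Z1)                    # (e)
        change = np.sqrt(np.linalg.norm(Y1 - Y) ** 2 + np.linalg.norm(Z1 - Z) ** 2
                         + np.linalg.norm(M1 - M) ** 2 + np.linalg.norm(N1 - N) ** 2)
        Y, Z, M, N = Y1, Z1, M1, N1
        if change < tol:
            break
    return X

rng = np.random.default_rng(0)
n = 4
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))
L_mat = np.ones((n, n))
U_mat = 3.0 * np.ones((n, n))
X_sol = admm(A, B, C, L_mat, U_mat, eps=0.1, alpha=float(n))
```

At termination X is close to both Y and Z, so it approximately satisfies the box, symmetry and eigenvalue constraints.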


    The alternating direction method of multipliers (ADMM) has been well studied in the context of linearly constrained convex optimization. In the last few years, we have witnessed a number of novel applications arising from image processing, compressive sensing, statistics, etc. ADMM is a splitting version of the augmented Lagrange method (ALM) in which the ALM subproblem is decomposed into multiple subproblems at each iteration, so that the variables can be solved for separately in alternating order. ADMM is, in fact, a one-step iterative method; that is, the current iterate is obtained from the information of the previous step only, and the convergence rate of ADMM is only linear, as proved in [33]. In this paper we propose a multi-step alternating direction method of multipliers (MSADMM), which is more effective than ADMM, for the constrained optimization problem (3.1). The iterative pattern of MSADMM can be described as the following Algorithm 3.2.

    Algorithm 3.2. MSADMM to solve problem (3.1).
    Step 1. Input matrices A, B, C, L and U. Input the constant ε>0, the error tolerance ε_out>0, the penalty parameter α>0 and the correction factor γ ∈ (0,2). Choose initial matrices Y_0, Z_0, M_0, N_0 ∈ R^{n×n}. Let k:=0;
    Step 2. ADMM step
    $$\text{(a)}\quad \tilde{X}_k=\arg\min_{X\in R^{n\times n}}L_{\alpha}(X,Y_k,Z_k,M_k,N_k), \qquad (3.6)$$
    $$\text{(b)}\quad \tilde{M}_k=M_k-\alpha(\tilde{X}_k-Y_k), \qquad (3.7)$$
    $$\text{(c)}\quad \tilde{N}_k=N_k-\alpha(\tilde{X}_k-Z_k), \qquad (3.8)$$
    $$\text{(d)}\quad \tilde{Y}_k=\arg\min_{Y\in\Omega_1}L_{\alpha}(\tilde{X}_k,Y,Z_k,\tilde{M}_k,\tilde{N}_k)=P_{\Omega_1}(\tilde{X}_k-\tilde{M}_k/\alpha), \qquad (3.9)$$
    $$\text{(e)}\quad \tilde{Z}_k=\arg\min_{Z\in\Omega_2}L_{\alpha}(\tilde{X}_k,\tilde{Y}_k,Z,\tilde{M}_k,\tilde{N}_k)=P_{\Omega_2}(\tilde{X}_k-\tilde{N}_k/\alpha); \qquad (3.10)$$
    Step 3. Correction step
    $$\text{(a)}\quad Y_{k+1}=Y_k-\gamma(Y_k-\tilde{Y}_k), \qquad (3.11)$$
    $$\text{(b)}\quad Z_{k+1}=Z_k-\gamma(Z_k-\tilde{Z}_k), \qquad (3.12)$$
    $$\text{(c)}\quad M_{k+1}=M_k-\gamma(M_k-\tilde{M}_k), \qquad (3.13)$$
    $$\text{(d)}\quad N_{k+1}=N_k-\gamma(N_k-\tilde{N}_k); \qquad (3.14)$$
    Step 4. If $(\|Y_{k+1}-Y_k\|^{2}+\|Z_{k+1}-Z_k\|^{2}+\|M_{k+1}-M_k\|^{2}+\|N_{k+1}-N_k\|^{2})^{1/2}<\varepsilon_{out}$, stop. In this case, $\tilde{X}_k$ is an approximate solution of problem (3.1);
    Step 5. Let k:=k+1 and go to Step 2.
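The correction step (3.11)-(3.14) is the only structural addition relative to ADMM: it is a plain relaxation of the predictor. A minimal sketch (the function name and the tuple packing of V = (Y, Z, M, N) are our own conventions):

```python
import numpy as np

def msadmm_correction(Vk, Vk_pred, gamma):
    """Correction step (3.11)-(3.14): each block of V = (Y, Z, M, N) moves
    from V_k toward the predictor ~V_k by the factor gamma in (0, 2)."""
    return tuple(V - gamma * (V - Vt) for V, Vt in zip(Vk, Vk_pred))
```

With gamma = 1 the predictor is accepted outright; values around gamma = 1.5 over-relax the step, which Table 1 below reports as the fastest choice.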


    Compared to ADMM, MSADMM yields the new iterate in the order X→M→N→Y→Z, while ADMM uses the order X→Y→Z→M→N. Despite this difference, MSADMM and ADMM are equally able to exploit the separable structure of (3.1) and are equally easy to implement. In fact, the resulting subproblems of the two methods have the same degree of decomposition and the same difficulty. We shall verify by numerical experiments that the two methods are also equally competitive numerically and that, if the correction factor γ is chosen suitably, MSADMM is more efficient than ADMM.

    In this section, we prove the global convergence and the O(1/t) convergence rate for the proposed Algorithm 3.2. We start the proof with some lemmas which are useful for the analysis of coming theorems.

    To simplify our analysis, we use the following notations throughout this section.

    $$\Omega=R^{n\times n}\times\Omega_1\times\Omega_2\times R^{n\times n}\times R^{n\times n};\qquad V=\begin{pmatrix}Y\\Z\\M\\N\end{pmatrix};\qquad W=\begin{pmatrix}X\\V\end{pmatrix};$$
    $$G(W)=\begin{pmatrix}\nabla F(X)-M-N\\ M\\ N\\ X-Y\\ X-Z\end{pmatrix};\qquad Q=\begin{pmatrix}\alpha I_n&0&I_n&0\\ 0&\alpha I_n&0&I_n\\ I_n&0&\frac{1}{\alpha}I_n&0\\ 0&I_n&0&\frac{1}{\alpha}I_n\end{pmatrix}.$$

    Lemma 4.1. Assume that (X*, Y*, Z*) is a solution of problem (3.1), (M*, N*) is a solution of the dual problem (3.4) of the constrained optimization problem (3.1), and that the sequences {V_k} and {˜W_k} are generated by Algorithm 3.2. Then we have

    $$\langle V_k-V^{*},\,Q(V_k-\tilde{V}_k)\rangle\ \ge\ \langle V_k-\tilde{V}_k,\,Q(V_k-\tilde{V}_k)\rangle. \qquad (4.1)$$

    Proof. By (3.6)–(3.10) and Lemma 2.1, we have, for any (X,Y,Z,M,N) ∈ Ω, that

    $$\left\langle\begin{pmatrix}X-\tilde{X}_k\\ Y-\tilde{Y}_k\\ Z-\tilde{Z}_k\\ M-\tilde{M}_k\\ N-\tilde{N}_k\end{pmatrix},\ \begin{pmatrix}\nabla F(\tilde{X}_k)-\tilde{M}_k-\tilde{N}_k\\ \tilde{M}_k\\ \tilde{N}_k\\ \tilde{X}_k-\tilde{Y}_k\\ \tilde{X}_k-\tilde{Z}_k\end{pmatrix}+\begin{pmatrix}0&0&0&0\\ \alpha I_n&0&I_n&0\\ 0&\alpha I_n&0&I_n\\ I_n&0&\frac{1}{\alpha}I_n&0\\ 0&I_n&0&\frac{1}{\alpha}I_n\end{pmatrix}\begin{pmatrix}\tilde{Y}_k-Y_k\\ \tilde{Z}_k-Z_k\\ \tilde{M}_k-M_k\\ \tilde{N}_k-N_k\end{pmatrix}\right\rangle\ \ge\ 0, \qquad (4.2)$$

    or compactly,

    $$\left\langle W-\tilde{W}_k,\ G(\tilde{W}_k)+\begin{pmatrix}0\\ Q\end{pmatrix}(\tilde{V}_k-V_k)\right\rangle\ \ge\ 0. \qquad (4.3)$$

    Choosing W as W* = (X*; V*), (4.3) can be rewritten as

    $$\langle\tilde{V}_k-V^{*},\,Q(V_k-\tilde{V}_k)\rangle\ \ge\ \langle\tilde{W}_k-W^{*},\,G(\tilde{W}_k)\rangle.$$

    Noting the monotonicity of the gradients of convex functions, we have by Lemma 2.2 that

    $$\langle\tilde{W}_k-W^{*},\,G(\tilde{W}_k)\rangle\ \ge\ \langle\tilde{W}_k-W^{*},\,G(W^{*})\rangle\ \ge\ 0.$$

    Therefore, the above two inequalities imply that

    $$\langle\tilde{V}_k-V^{*},\,Q(V_k-\tilde{V}_k)\rangle\ \ge\ 0,$$

    from which the assertion (4.1) is immediately derived.

    Noting that the matrix Q is a symmetric and positive semi-definite matrix, we use, for convenience, the notation

    $$\|V_k-\tilde{V}_k\|_{Q}:=\langle V_k-\tilde{V}_k,\,Q(V_k-\tilde{V}_k)\rangle^{1/2}.$$

    Then, the assertion (4.1) can be rewritten as

    $$\langle V_k-V^{*},\,Q(V_k-\tilde{V}_k)\rangle\ \ge\ \|V_k-\tilde{V}_k\|_{Q}^{2}. \qquad (4.4)$$

    Lemma 4.2. Assume that (X*, Y*, Z*) is a solution of the constrained optimization problem (3.1), (M*, N*) is a solution of the dual problem (3.4) of the constrained optimization problem (3.1), and that the sequences {V_k}, {˜V_k} are generated by Algorithm 3.2. Then, we have

    $$\|V_{k+1}-V^{*}\|_{Q}^{2}\ \le\ \|V_k-V^{*}\|_{Q}^{2}-\gamma(2-\gamma)\|V_k-\tilde{V}_k\|_{Q}^{2}. \qquad (4.5)$$

    Proof. By elementary manipulation, we obtain

    $$\begin{aligned}
    \|V_{k+1}-V^{*}\|_{Q}^{2}&=\|(V_k-V^{*})-\gamma(V_k-\tilde{V}_k)\|_{Q}^{2}\\
    &=\|V_k-V^{*}\|_{Q}^{2}-2\gamma\langle V_k-V^{*},\,Q(V_k-\tilde{V}_k)\rangle+\gamma^{2}\|V_k-\tilde{V}_k\|_{Q}^{2}\\
    &\le\|V_k-V^{*}\|_{Q}^{2}-2\gamma\|V_k-\tilde{V}_k\|_{Q}^{2}+\gamma^{2}\|V_k-\tilde{V}_k\|_{Q}^{2}\\
    &=\|V_k-V^{*}\|_{Q}^{2}-\gamma(2-\gamma)\|V_k-\tilde{V}_k\|_{Q}^{2},
    \end{aligned}$$

    where the inequality follows from (4.1) and (4.4).

    Lemma 4.3. The sequences {Vk} and {˜Wk} generated by Algorithm 3.2 satisfy

    $$\langle W-\tilde{W}_k,\,G(\tilde{W}_k)\rangle+\frac{1}{2\gamma}\left(\|V-V_k\|_{Q}^{2}-\|V-V_{k+1}\|_{Q}^{2}\right)\ \ge\ \left(1-\frac{\gamma}{2}\right)\|V_k-\tilde{V}_k\|_{Q}^{2} \qquad (4.6)$$

    for any (X,Y,Z,M,N) ∈ Ω.

    Proof. By (4.2) or its compact form (4.3), we have, for any (X,Y,Z,M,N) ∈ Ω, that

    $$\langle W-\tilde{W}_k,\,G(\tilde{W}_k)\rangle\ \ge\ -\left\langle W-\tilde{W}_k,\,\begin{pmatrix}0\\Q\end{pmatrix}(\tilde{V}_k-V_k)\right\rangle=\langle V-\tilde{V}_k,\,Q(V_k-\tilde{V}_k)\rangle. \qquad (4.7)$$

    Thus, it suffices to show that

    $$\langle V-\tilde{V}_k,\,Q(V_k-\tilde{V}_k)\rangle+\frac{1}{2\gamma}\left(\|V-V_k\|_{Q}^{2}-\|V-V_{k+1}\|_{Q}^{2}\right)\ \ge\ \left(1-\frac{\gamma}{2}\right)\|V_k-\tilde{V}_k\|_{Q}^{2}. \qquad (4.8)$$

    By using the formula $2\langle a,Qb\rangle=\|a\|_{Q}^{2}+\|b\|_{Q}^{2}-\|a-b\|_{Q}^{2}$, we derive that

    $$\langle V-V_{k+1},\,Q(V_k-V_{k+1})\rangle=\frac12\|V-V_{k+1}\|_{Q}^{2}+\frac12\|V_k-V_{k+1}\|_{Q}^{2}-\frac12\|V-V_k\|_{Q}^{2}. \qquad (4.9)$$

    Moreover, since (3.11)–(3.14) can be rewritten as $V_k-V_{k+1}=\gamma(V_k-\tilde{V}_k)$, we have

    $$\langle V-V_{k+1},\,Q(V_k-\tilde{V}_k)\rangle=\frac{1}{\gamma}\langle V-V_{k+1},\,Q(V_k-V_{k+1})\rangle. \qquad (4.10)$$

    Combining (4.9) and (4.10), we obtain

    $$\langle V-V_{k+1},\,Q(V_k-\tilde{V}_k)\rangle=\frac{1}{2\gamma}\left(\|V-V_{k+1}\|_{Q}^{2}-\|V-V_k\|_{Q}^{2}\right)+\frac{1}{2\gamma}\|V_k-V_{k+1}\|_{Q}^{2}. \qquad (4.11)$$

    On the other hand, we have by using (3.11)–(3.14) that

    $$\langle V_{k+1}-\tilde{V}_k,\,Q(V_k-\tilde{V}_k)\rangle=(1-\gamma)\|V_k-\tilde{V}_k\|_{Q}^{2}. \qquad (4.12)$$

    By adding (4.11) and (4.12), and again using the fact that $V_k-V_{k+1}=\gamma(V_k-\tilde{V}_k)$, we obtain that

    $$\begin{aligned}
    \langle V-\tilde{V}_k,\,Q(V_k-\tilde{V}_k)\rangle&=\frac{1}{2\gamma}\left(\|V-V_{k+1}\|_{Q}^{2}-\|V-V_k\|_{Q}^{2}\right)+\frac{1}{2\gamma}\|V_k-V_{k+1}\|_{Q}^{2}+(1-\gamma)\|V_k-\tilde{V}_k\|_{Q}^{2}\\
    &=\frac{1}{2\gamma}\left(\|V-V_{k+1}\|_{Q}^{2}-\|V-V_k\|_{Q}^{2}\right)+\left(1-\frac{\gamma}{2}\right)\|V_k-\tilde{V}_k\|_{Q}^{2},
    \end{aligned}$$

    which is equivalent to (4.8). Hence, the lemma is proved.

    Theorem 4.1. The sequences {Vk} and {˜Wk} generated by Algorithm 3.2 are bounded, and furthermore, any accumulation point ˜X of the sequence {˜Xk} is a solution of problem (1.2).

    Proof. The inequality (4.5) with the restriction γ ∈ (0,2) implies that

    (ⅰ) $\lim_{k\to\infty}\|V_k-\tilde{V}_k\|_{Q}=0$;

    (ⅱ) the sequence $\{\|V_k-V^{*}\|_{Q}\}$ is bounded.

    Recall that the matrix Q is symmetric and positive semi-definite. Thus, we have by assertion (ⅰ) that Q(V_k−˜V_k)→0 as k→∞ which, together with (3.7) and (3.8), implies that ˜X_k−˜Y_k→0 and ˜X_k−˜Z_k→0 as k→∞. Assertion (ⅱ) implies that the sequences {Y_k}, {Z_k}, {M_k} and {N_k} are bounded. Since Eqs (3.11)–(3.14) hold and these sequences are bounded, the sequences {˜Y_k}, {˜Z_k}, {˜M_k} and {˜N_k} are also bounded. Hence, by the clustering theorem together with (3.11)–(3.14), there exist subsequences {˜X_k}_K, {˜Y_k}_K, {˜Z_k}_K, {˜M_k}_K, {˜N_k}_K, {Y_k}_K, {Z_k}_K, {M_k}_K and {N_k}_K such that

    $$\lim_{k\to\infty,\,k\in K}\tilde{X}_k=\tilde{X},\quad \lim_{k\to\infty,\,k\in K}\tilde{Y}_k=\lim_{k\to\infty,\,k\in K}Y_k=\tilde{Y},\quad \lim_{k\to\infty,\,k\in K}\tilde{Z}_k=\lim_{k\to\infty,\,k\in K}Z_k=\tilde{Z},$$
    $$\lim_{k\to\infty,\,k\in K}\tilde{M}_k=\lim_{k\to\infty,\,k\in K}M_k=\tilde{M},\qquad \lim_{k\to\infty,\,k\in K}\tilde{N}_k=\lim_{k\to\infty,\,k\in K}N_k=\tilde{N}.$$

    Furthermore, we have by (3.7) and (3.8) that

    ˜X=˜Y=˜Z. (4.13)

    By (3.6)–(3.8), we have

    $$A^{T}(A\tilde{X}_k+\tilde{X}_kB-C)+(A\tilde{X}_k+\tilde{X}_kB-C)B^{T}-\tilde{M}_k-\tilde{N}_k=0.$$

    So we have

    $$A^{T}(A\tilde{X}+\tilde{X}B-C)+(A\tilde{X}+\tilde{X}B-C)B^{T}-\tilde{M}-\tilde{N}=0. \qquad (4.14)$$

    Letting k→∞, k ∈ K, we have by (3.9), (3.10) and (4.13) that

    $$\tilde{X}=P_{\Omega_1}(\tilde{X}-\tilde{M}/\alpha),\qquad \tilde{X}=P_{\Omega_2}(\tilde{X}-\tilde{N}/\alpha). \qquad (4.15)$$

    Noting that α>0, we have by (4.15), Lemma 2.3 and Lemma 2.4 that

    $$\langle(\tilde{M})_+,\tilde{X}-L\rangle=0,\quad \tilde{X}-L\ge 0, \qquad (4.16)$$
    $$\langle(-\tilde{M})_+,U-\tilde{X}\rangle=0,\quad U-\tilde{X}\ge 0, \qquad (4.17)$$

    and

    $$\langle\tilde{N}+\tilde{N}^{T},\tilde{X}-\varepsilon I_n\rangle=0,\quad \tilde{N}+\tilde{N}^{T}\in SR^{n\times n}_{0},\quad \tilde{X}-\varepsilon I_n\in SR^{n\times n}_{0}. \qquad (4.18)$$

    Let

    $$\tilde{\Lambda}_1=(\tilde{M})_+,\qquad \tilde{\Lambda}_2=(-\tilde{M})_+,\qquad \tilde{\Lambda}_3=\tilde{N},$$

    we have by (4.14) and (4.16)–(4.18) that

    $$\begin{cases}
    A^{T}(A\tilde{X}+\tilde{X}B-C)+(A\tilde{X}+\tilde{X}B-C)B^{T}-\tilde{\Lambda}_1+\tilde{\Lambda}_2-\tilde{\Lambda}_3=0,\\
    \langle\tilde{\Lambda}_1,\tilde{X}-L\rangle=0,\quad \tilde{X}-L\ge 0,\quad \tilde{\Lambda}_1\ge 0,\\
    \langle\tilde{\Lambda}_2,U-\tilde{X}\rangle=0,\quad U-\tilde{X}\ge 0,\quad \tilde{\Lambda}_2\ge 0,\\
    \langle\tilde{\Lambda}_3+\tilde{\Lambda}_3^{T},\tilde{X}-\varepsilon I_n\rangle=0,\quad \tilde{X}-\varepsilon I_n\in SR^{n\times n}_{0},\quad \tilde{\Lambda}_3+\tilde{\Lambda}_3^{T}\in SR^{n\times n}_{0}.
    \end{cases} \qquad (4.19)$$

    Hence, we have by Theorem 2.1 that ˜X is a solution of problem (1.2).

    Theorem 4.2. Let the sequences {˜Wk} be generated by Algorithm 3.2. For an integer t>0, let

    $$\tilde{W}_t=\frac{1}{t+1}\sum_{k=0}^{t}\tilde{W}_k, \qquad (4.20)$$

    then $\frac{1}{t+1}\sum_{k=0}^{t}(\tilde{X}_k,\tilde{Y}_k,\tilde{Z}_k,\tilde{M}_k,\tilde{N}_k)\in\Omega$ and the inequality

    $$\langle\tilde{W}_t-W,\,G(W)\rangle\ \ge\ -\frac{1}{2\gamma(t+1)}\|V-V_0\|_{Q}^{2} \qquad (4.21)$$

    holds for any (X,Y,Z,M,N) ∈ Ω.

    Proof. First, for any integer t>0, we have (˜X_k,˜Y_k,˜Z_k,˜M_k,˜N_k) ∈ Ω for k=0,1,2,…,t. Since $\frac{1}{t+1}\sum_{k=0}^{t}\tilde{W}_k$ can be viewed as a convex combination of the ˜W_k's, we obtain

    $$\frac{1}{t+1}\sum_{k=0}^{t}(\tilde{X}_k,\tilde{Y}_k,\tilde{Z}_k,\tilde{M}_k,\tilde{N}_k)\in\Omega.$$

    Second, since γ ∈ (0,2), it follows from Lemma 4.3 that

    $$\langle W-\tilde{W}_k,\,G(\tilde{W}_k)\rangle+\frac{1}{2\gamma}\left(\|V-V_k\|_{Q}^{2}-\|V-V_{k+1}\|_{Q}^{2}\right)\ \ge\ 0,\qquad \forall (X,Y,Z,M,N)\in\Omega. \qquad (4.22)$$

    By combining the monotonicity of G(W) with the inequality (4.22), we have

    $$\langle W-\tilde{W}_k,\,G(W)\rangle+\frac{1}{2\gamma}\left(\|V-V_k\|_{Q}^{2}-\|V-V_{k+1}\|_{Q}^{2}\right)\ \ge\ 0,\qquad \forall (X,Y,Z,M,N)\in\Omega.$$

    Summing the above inequality over k=0,1,…,t, we derive that

    $$\left\langle (t+1)W-\sum_{k=0}^{t}\tilde{W}_k,\,G(W)\right\rangle+\frac{1}{2\gamma}\left(\|V-V_0\|_{Q}^{2}-\|V-V_{t+1}\|_{Q}^{2}\right)\ \ge\ 0,\qquad \forall(X,Y,Z,M,N)\in\Omega.$$

    By dropping the minus term, we have

    $$\left\langle (t+1)W-\sum_{k=0}^{t}\tilde{W}_k,\,G(W)\right\rangle+\frac{1}{2\gamma}\|V-V_0\|_{Q}^{2}\ \ge\ 0,\qquad \forall(X,Y,Z,M,N)\in\Omega,$$

    which is equivalent to

    $$\left\langle \frac{1}{t+1}\sum_{k=0}^{t}\tilde{W}_k-W,\,G(W)\right\rangle\ \ge\ -\frac{1}{2\gamma(t+1)}\|V-V_0\|_{Q}^{2},\qquad \forall(X,Y,Z,M,N)\in\Omega.$$

    The proof is completed.

    Note that problem (3.1) is equivalent to finding (X*,Y*,Z*,M*,N*) ∈ Ω such that the inequality

    $$\langle W-W^{*},\,G(W^{*})\rangle\ \ge\ 0 \qquad (4.23)$$

    holds for any (X,Y,Z,M,N) ∈ Ω. Theorem 4.2 means that, for any initial matrices Y_0,Z_0,M_0,N_0 ∈ R^{n×n}, the point ˜W_t defined in (4.20) satisfies

    $$\langle\tilde{W}_t-W,\,G(W)\rangle\ \ge\ -\frac{\|V-V_0\|_{Q}^{2}}{2\gamma(t+1)},$$

    which means the point ˜Wt is an approximate solution of (4.23) with the accuracy O(1/t). That is, the convergence rate O(1/t) of the Algorithm 3.2 is established in an ergodic sense.

    In this section, we present some numerical examples to illustrate the convergence of MSADMM for the constrained least squares problem (1.2). All tests were performed in Matlab 7 on a 64-bit Windows 7 operating system. In all tests, the constant ε=0.1, the matrix L has all elements equal to 1 and U has all elements equal to 3. The matrices A, B and C are randomly generated, i.e., generated in Matlab style as A=randn(n,n), B=randn(n,n), C=randn(n,n). In all algorithms, the initial matrices are chosen as the null matrices. The maximum numbers of inner and outer iterations are restricted to 5000. The error tolerances ε_out=ε_in=10^{−9} in Algorithms 3.1 and 3.2. The computational methods for the projections P_{Ω_i}(X) (i=1,2) are as follows [38].

    $$P_{\Omega_1}(X)_{ij}=\begin{cases}X_{ij}, & \text{if } L_{ij}\le X_{ij}\le U_{ij},\\ U_{ij}, & \text{if } X_{ij}>U_{ij},\\ L_{ij}, & \text{if } X_{ij}<L_{ij},\end{cases}\qquad P_{\Omega_2}(X)=W\,\mathrm{diag}(d_1,d_2,\ldots,d_n)W^{T},$$

    where

    $$d_i=\begin{cases}\lambda_i\!\left(\frac{X+X^{T}}{2}\right), & \text{if } \lambda_i\!\left(\frac{X+X^{T}}{2}\right)\ge\varepsilon,\\ \varepsilon, & \text{if } \lambda_i\!\left(\frac{X+X^{T}}{2}\right)<\varepsilon,\end{cases}$$

    and W is such that $\frac{X+X^{T}}{2}=W\Delta W^{T}$ is the spectral decomposition, i.e., $W^{T}W=I$ and $\Delta=\mathrm{diag}\!\left(\lambda_1\!\left(\frac{X+X^{T}}{2}\right),\lambda_2\!\left(\frac{X+X^{T}}{2}\right),\ldots,\lambda_n\!\left(\frac{X+X^{T}}{2}\right)\right)$. We use the LSQR algorithm described in [34] with necessary modifications to solve the subproblems (3.5) in Algorithm 3.1, (3.6) in Algorithm 3.2 and (6.1) in Algorithm 6.2.
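The two projection formulas translate directly into NumPy (eigh performs the spectral decomposition of the symmetrized matrix); the sizes and bounds below are illustrative:

```python
import numpy as np

def P_omega1(X, L, U):
    # entrywise formula: clip X into [L, U]
    return np.clip(X, L, U)

def P_omega2(X, eps):
    # spectral formula: eigendecompose (X + X^T)/2 and floor eigenvalues at eps
    d, W = np.linalg.eigh((X + X.T) / 2)
    return (W * np.maximum(d, eps)) @ W.T

rng = np.random.default_rng(5)
n = 6
X = rng.standard_normal((n, n))
L = np.full((n, n), 1.0)
U = np.full((n, n), 3.0)

P1 = P_omega1(X, L, U)
P2 = P_omega2(X, 0.1)
```

Both maps are idempotent projections: applying either of them twice gives the same result as applying it once.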

    The LSQR algorithm is an effective method to solve a consistent linear matrix equation or the least squares problem of an inconsistent linear matrix equation. Using this iterative algorithm, for any initial matrix, a solution can be obtained within finitely many iteration steps if exact arithmetic is used. In addition, a solution with minimum Frobenius norm can be obtained by choosing a special kind of initial matrix, and a solution nearest to a given matrix in the Frobenius norm can be obtained by first finding the minimum Frobenius norm solution of a new consistent matrix equation. The LSQR algorithm for the subproblems (3.5), (3.6) and (6.1) can be described as follows:

    Algorithm 5.1. LSQR algorithm to solve subproblems (3.5) and (3.6).
    Step 1. Input matrices A, B, C, Y_k, Z_k, M_k and N_k, the penalty parameter α>0 and the error tolerance ε_in. Set X_1=0 and compute
    $$\begin{aligned}
    &\eta_1=\left(\|C\|^{2}+\alpha\|Y_k+M_k/\alpha\|^{2}+\alpha\|Z_k+N_k/\alpha\|^{2}\right)^{1/2},\\
    &U_1^{(1)}=\frac{C}{\eta_1},\qquad U_1^{(2)}=\frac{\sqrt{\alpha}\,(Y_k+M_k/\alpha)}{\eta_1},\qquad U_1^{(3)}=\frac{\sqrt{\alpha}\,(Z_k+N_k/\alpha)}{\eta_1},\\
    &\xi_1=\left\|A^{T}U_1^{(1)}+U_1^{(1)}B^{T}+\sqrt{\alpha}\,U_1^{(2)}+\sqrt{\alpha}\,U_1^{(3)}\right\|,\\
    &\Gamma_1=\frac{A^{T}U_1^{(1)}+U_1^{(1)}B^{T}+\sqrt{\alpha}\,U_1^{(2)}+\sqrt{\alpha}\,U_1^{(3)}}{\xi_1},\qquad \Phi_1=\Gamma_1,\qquad \bar{\phi}_1=\eta_1,\qquad \bar{\rho}_1=\xi_1.
    \end{aligned}$$
    Let i:=1;
    Step 2. Compute
    $$\begin{aligned}
    &\eta_{i+1}=\left(\|A\Gamma_i+\Gamma_iB-\xi_iU_i^{(1)}\|^{2}+\|\sqrt{\alpha}\,\Gamma_i-\xi_iU_i^{(2)}\|^{2}+\|\sqrt{\alpha}\,\Gamma_i-\xi_iU_i^{(3)}\|^{2}\right)^{1/2},\\
    &U_{i+1}^{(1)}=\frac{A\Gamma_i+\Gamma_iB-\xi_iU_i^{(1)}}{\eta_{i+1}},\qquad U_{i+1}^{(2)}=\frac{\sqrt{\alpha}\,\Gamma_i-\xi_iU_i^{(2)}}{\eta_{i+1}},\qquad U_{i+1}^{(3)}=\frac{\sqrt{\alpha}\,\Gamma_i-\xi_iU_i^{(3)}}{\eta_{i+1}},\\
    &\xi_{i+1}=\left\|A^{T}U_{i+1}^{(1)}+U_{i+1}^{(1)}B^{T}+\sqrt{\alpha}\,U_{i+1}^{(2)}+\sqrt{\alpha}\,U_{i+1}^{(3)}-\eta_{i+1}\Gamma_i\right\|,\\
    &\Gamma_{i+1}=\frac{A^{T}U_{i+1}^{(1)}+U_{i+1}^{(1)}B^{T}+\sqrt{\alpha}\,U_{i+1}^{(2)}+\sqrt{\alpha}\,U_{i+1}^{(3)}-\eta_{i+1}\Gamma_i}{\xi_{i+1}},\\
    &\rho_i=(\bar{\rho}_i^{2}+\eta_{i+1}^{2})^{1/2},\qquad c_i=\frac{\bar{\rho}_i}{\rho_i},\qquad s_i=\frac{\eta_{i+1}}{\rho_i},\qquad \theta_{i+1}=s_i\xi_{i+1},\qquad \bar{\rho}_{i+1}=-c_i\xi_{i+1},\\
    &\phi_i=c_i\bar{\phi}_i,\qquad \bar{\phi}_{i+1}=s_i\bar{\phi}_i,\qquad X_{i+1}=X_i+\frac{\phi_i}{\rho_i}\Phi_i,\qquad \Phi_{i+1}=\Gamma_{i+1}-\frac{\theta_{i+1}}{\rho_i}\Phi_i;
    \end{aligned}$$
    Step 3. If $\|X_{i+1}-X_i\|<\varepsilon_{in}$, terminate the execution of the algorithm. (In this case, X_{i+1} is an approximate solution of problem (3.5) or (3.6));
    Step 4. Let i:=i+1 and go to Step 2.


    Table 1 reports the average computing time (CPU) over 10 tests of Algorithm 3.1 (ADMM) and Algorithm 3.2 (MSADMM) with penalty parameter α=n. Figures 1–4 report the computing time of ADMM for the same problem size and different penalty parameters α. Figures 5–8 report the computing time of MSADMM for the same problem size and different correction factors γ. Figure 9 reports the computing time curves of Algorithm 3.1 (ADMM) and Algorithm 3.2 (MSADMM) for different matrix sizes.

    Table 1.  Numerical comparisons between MSADMM and ADMM.
    n (α = n)   ADMM   MSADMM(γ=0.8)   MSADMM(γ=1.0)   MSADMM(γ=1.5)
    20 0.0846 0.1254 0.0865 0.0538
    40 0.3935 0.4883 0.3836 0.2676
    80 1.3370 1.6930 1.3726 0.8965
    100 2.4766 3.1514 2.5015 1.6488
    150 5.8780 7.3482 5.8742 3.9154
    200 11.6398 14.6023 11.6162 7.6576
    300 35.1929 41.4151 32.9488 21.4898
    400 83.7386 108.9807 86.0472 56.8144
    500 147.5759 183.4758 144.2038 92.3137
    600 242.0408 302.6395 241.6225 157.8042

    Figure 1.  Computing times (seconds) vs. the values of α for n = 40.
    Figure 2.  Computing times (seconds) vs. the values of α for n = 60.
    Figure 3.  Computing times (seconds) vs. the values of α for n = 80.
    Figure 4.  Computing times (seconds) vs. the values of α for n = 100.
    Figure 5.  Computing times (seconds) vs. the values of γ for α = n = 40.
    Figure 6.  Computing times (seconds) vs. the values of γ for α = n = 60.
    Figure 7.  Computing times (seconds) vs. the values of γ for α = n = 80.
    Figure 8.  Computing times (seconds) vs. the values of γ for α = n = 100.
    Figure 9.  Numerical comparisons between ADMM and MSADMM.

    Based on the tests reported in Table 1 and Figures 1–9, and on many other unreported tests showing similar patterns, we have the following results:

    Remark 5.1. The convergence speed of ADMM is directly related to the penalty parameter α. In general, the penalty parameter α in this paper can be chosen as α ≈ n (see Figures 1–4). However, how to select the best penalty parameter α is an important problem that should be studied in the future.

    Remark 5.2. The convergence speed of MSADMM is directly related to the penalty parameter α and the correction factor γ. The selection of the penalty parameter α is similar to that for ADMM since MSADMM is a direct extension of ADMM. For the correction factor γ, as shown in Table 1 and Figures 5–8, aggressive values such as γ ≈ 1.5 are often preferred. However, how to select the best correction factor γ is also an important problem that should be studied in the future.

    Remark 5.3. As shown in Table 1 and Figure 9, MSADMM, with the correction factor γ ≈ 1.5 and the penalty parameter α chosen the same as for ADMM, is more effective than ADMM.

    Anderson acceleration, or Anderson mixing, was initially developed in 1965 by Donald Anderson [35] as an iterative procedure to solve some nonlinear integral equations arising in physics. It turns out that the Anderson acceleration is very efficient to solve other types of nonlinear equations as well, see [36,37,38], and the literature cited therein. When Anderson acceleration is applied to the equation f(x)=g(x)x=0, the iterative pattern can be described as the following Algorithm 6.1.

    Algorithm 6.1. Anderson accelerated method to solve the equation f(x)=0.
    Given x_0 ∈ R^n and an integer m ≥ 1, this algorithm produces a sequence {x_k} of iterates intended to converge to a fixed point of the function g: R^n → R^n.
    Step 1. Compute x_1=g(x_0);
    Step 2. For k=1,2,… until convergence;
    Step 3. Let m_k=min(m,k);
    Step 4. Compute λ^{k}=(λ_1,λ_2,…,λ_{m_k})^{T} ∈ R^{m_k} that solves
    $$\min_{\lambda\in R^{m_k}}\Big\|f(x_k)-\sum_{j=1}^{m_k}\lambda_j\big(f(x_{k-m_k+j})-f(x_{k-m_k+j-1})\big)\Big\|_2^{2};$$
    Step 5. Set
    $$x_{k+1}=g(x_k)-\sum_{j=1}^{m_k}\lambda_j\big[g(x_{k-m_k+j})-g(x_{k-m_k+j-1})\big].$$
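A sketch of Anderson acceleration in NumPy, following the least squares and update steps above (the residual differences and g-value differences use matching index windows); the toy linear contraction in the usage check is our own illustration, and the implementation recomputes g-values each pass for clarity rather than efficiency:

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, max_iter=200):
    """Anderson acceleration for the fixed point x = g(x), with f(x) = g(x) - x.

    At step k, lambda solves the least squares problem over the last m_k
    residual differences, and the new iterate combines the matching
    g-value differences (a sketch: clarity over efficiency).
    """
    xs = [np.asarray(x0, dtype=float)]
    xs.append(g(xs[0]))                      # x_1 = g(x_0)
    for k in range(1, max_iter):
        gs = [g(x) for x in xs]
        fs = [gi - xi for gi, xi in zip(gs, xs)]
        if np.linalg.norm(fs[-1]) < tol:
            break
        mk = min(m, k)
        # columns: f_{k-mk+j} - f_{k-mk+j-1}, j = 1..mk
        dF = np.column_stack([fs[k - mk + j] - fs[k - mk + j - 1]
                              for j in range(1, mk + 1)])
        lam, *_ = np.linalg.lstsq(dF, fs[k], rcond=None)
        # x_{k+1} = g(x_k) - sum_j lambda_j (g-value differences)
        x_new = gs[k] - sum(lam[j - 1] * (gs[k - mk + j] - gs[k - mk + j - 1])
                            for j in range(1, mk + 1))
        xs.append(x_new)
    return xs[-1]
```

On a linear fixed-point map g(x) = Gx + b with spectral radius below one, this accelerates the plain iteration markedly.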


    In this section, we define the following matrix function

    $$f(Y,Z,M,N)=g(Y,Z,M,N)-(Y,Z,M,N),$$

    where g(Y_k,Z_k,M_k,N_k)=(Y_{k+1},Z_{k+1},M_{k+1},N_{k+1}), with Y_{k+1}, Z_{k+1}, M_{k+1} and N_{k+1} computed by (b)–(e) in Algorithm 3.1. Let f_k=f(Y_k,Z_k,M_k,N_k) and g_k=g(Y_k,Z_k,M_k,N_k); then Algorithm 3.1 with Anderson acceleration can be described as the following Algorithm 6.2.

    Algorithm 6.2. Algorithm 3.1 with Anderson acceleration to solve problem (3.1).
    Step 1. Input matrices A, B, C, L and U. Input the constant ε>0, the error tolerance ε_out>0, the penalty parameter α>0 and an integer m ≥ 1. Choose initial matrices Y_0, Z_0, M_0, N_0 ∈ R^{n×n}. Let k:=0;
    Step 2. Compute
    $$X_{k+1}=\arg\min_{X\in R^{n\times n}}L_{\alpha}(X,Y_k,Z_k,M_k,N_k); \qquad (6.1)$$
    Step 3. Let m_k=min(m,k);
    Step 4. Compute λ^{k}=(λ_1,λ_2,…,λ_{m_k})^{T} ∈ R^{m_k} that solves
    $$\min_{\lambda\in R^{m_k}}\Big\|f_k-\sum_{j=1}^{m_k}\lambda_j\big(f_{k-m_k+j}-f_{k-m_k+j-1}\big)\Big\|^{2};$$
    Step 5. Set
    $$(Y_{k+1},Z_{k+1},M_{k+1},N_{k+1})=g_k-\sum_{j=1}^{m_k}\lambda_j\big(g_{k-m_k+j}-g_{k-m_k+j-1}\big);$$
    Step 6. If $(\|Y_{k+1}-Y_k\|^{2}+\|Z_{k+1}-Z_k\|^{2}+\|M_{k+1}-M_k\|^{2}+\|N_{k+1}-N_k\|^{2})^{1/2}<\varepsilon_{out}$, stop. In this case, X_{k+1} is an approximate solution of problem (3.1);
    Step 7. Let k:=k+1 and go to Step 2.


    Algorithm 6.2 (ACADMM) is an m-step iterative method; that is, the current iterate is obtained from a linear combination of the previous m steps, and the combination coefficients are modified at each iteration step. Compared to ACADMM, Algorithm 3.2 (MSADMM) is a two-step iterative method whose combination coefficients are fixed at each iteration step. The convergence speed of ACADMM is directly related to the penalty parameter α and the backtracking step m. The selection of the penalty parameter α is the same as for ADMM since ACADMM's iterates are corrected from ADMM's iterates. For the backtracking step m, as shown in Table 2 (the average computing time over 10 tests) and Figure 10, aggressive values such as m=10 are often preferred (in this case, ACADMM is more efficient than MSADMM). However, how to select the best backtracking step m is an important problem which should be studied in the near future.

    Table 2.  Numerical comparisons between MSADMM and ACADMM.
    n     MSADMM(γ=1.5)   ACADMM(m=2)   ACADMM(m=10)   ACADMM(m=20)
    20    0.0735          0.0991        0.0521         0.0647
    40    0.2595          0.3485        0.2186         0.2660
    80    1.1866          1.1319        1.0232         1.1069
    100   1.7750          2.2081        1.6003         1.7660
    150   3.8474          5.2760        3.5700         3.9587
    200   7.6133          10.8719       6.9807         7.7318
    300   21.2970         28.5379       18.8233        19.9526
    400   56.2133         74.3192       44.8087        46.5275
    500   98.0542         130.2326      75.6044        80.1480
    600   157.7573        208.1124      125.2842       133.4549

    Figure 10.  Numerical comparisons between MSADMM and ACADMM.

    In this paper, the multiple constraint least squares solution of the Sylvester equation AX + XB = C is discussed. Necessary and sufficient conditions for the existence of solutions to the considered problem are given (Theorem 2.1). MSADMM is proposed to solve the considered problem, and convergence results for the proposed algorithm are proved (Theorems 4.1 and 4.2). Problems that should be studied in the near future are listed. Numerical experiments show that MSADMM with a suitable correction factor γ is more effective than ADMM (see Table 1 and Figure 10), and that ACADMM with a suitable backtracking step m is the most effective of ADMM, MSADMM and ACADMM (see Table 2 and Figure 10).

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by the National Natural Science Foundation of China (grant number 11961012) and the Special Research Project for Guangxi Young Innovative Talents (grant number AD20297063).

    The authors declare no competing interests.



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)