Research article

A generalized iterative scheme with computational results concerning the systems of linear equations

  • Received: 29 August 2022 Revised: 14 December 2022 Accepted: 20 December 2022 Published: 04 January 2023
  • MSC : 65F10, 90C30

  • In this article, a new generalized iterative technique is presented for finding the approximate solution of a system of linear equations Ax = b. The efficiency of the technique is analyzed by implementing it on some examples and comparing it with existing methods. A parameter introduced in the method plays a vital role in obtaining a better and more rapid solution. Convergence analysis is also examined. The findings of this paper may stimulate further research in this area.

    Citation: Kamsing Nonlaopon, Farooq Ahmed Shah, Khaleel Ahmed, Ghulam Farid. A generalized iterative scheme with computational results concerning the systems of linear equations[J]. AIMS Mathematics, 2023, 8(3): 6504-6519. doi: 10.3934/math.2023328




    Consider the general framework of a system of linear equations

    Ax = b, (1.1)

    where A ∈ R^{n×n} is the coefficient matrix, b ∈ R^n is a constant vector, and x ∈ R^n is the unknown vector. Various problems arising in different fields, such as computer science, electrical engineering, mechanical engineering, and economics, are modeled in this general framework (1.1).

    The importance of methods for solving systems of linear equations cannot be denied, since such systems occur in almost all fields. The Babylonians first considered systems of linear equations in two unknowns about 4000 years ago. Later, Cramer [1] gave the idea of solving systems of linear equations by using determinants. In the nineteenth century, Gauss introduced a method to solve the linear system (1.1) by eliminating the variables one by one and then applying backward substitution. Many other methods for solving (1.1) exist in the literature. Usually, these methods are classified into two categories: direct and iterative methods.

    The objective of a direct method is to obtain the exact solution in a finite number of operations, whereas an iterative method starts with an initial guess and produces an infinite sequence of approximations converging toward the exact solution; this sequence is truncated by a suitable stopping criterion. Direct methods include the Gauss elimination method, the Gauss-Jordan elimination method, the Cholesky method, and the LU decomposition method [2]. Large and sparsely populated systems often arise when solving partial differential equations numerically or when dealing with optimization problems; for such cases the conjugate gradient method is implemented and is also suggested for sparse systems [3]. Direct methods are ineffective for systems consisting of a large number of equations, particularly when the coefficient matrix is sparse.

    Iterative methods produce successive approximations to the solution of system (1.1), starting from a given initial approximation. Iterative methods can be further categorized into stationary and non-stationary methods. Stationary methods are older and more straightforward, involving an iteration matrix that remains constant throughout the computation; examples are the Jacobi method, the Gauss-Seidel method, and the successive over-relaxation method [2]. The computations in non-stationary methods involve information that changes at each iteration; these methods typically involve inner products of residuals [2].

    We observe that for any iterative method the system may be represented in the form x = Px + c, and the iterative scheme x^{(k+1)} = Px^{(k)} + c is applied from an initial approximation x^{(0)} to obtain the best approximate solution. The iterative method converges if and only if ρ(P) < 1, where ρ(P) is the spectral radius of P. To obtain the iterative scheme, we split A = (a_{ij}) as A = D - L - U, where D = diag(a_{ii}), and L and U are strictly lower and strictly upper triangular matrices, respectively.
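    As an illustration (our sketch, not part of the original computations, which were done in MATLAB), the splitting A = D - L - U and the convergence test ρ(P) < 1 are easy to express in NumPy; the small matrix A below is arbitrary:

    ```python
    import numpy as np

    A = np.array([[ 4.0, -1.0],
                  [-2.0,  5.0]])

    D = np.diag(np.diag(A))   # diagonal part of A
    L = -np.tril(A, k=-1)     # strictly lower part, signed so that A = D - L - U
    U = -np.triu(A, k=1)      # strictly upper part, signed so that A = D - L - U

    # Jacobi iteration matrix P = D^{-1}(L + U); the scheme converges iff rho(P) < 1.
    P = np.linalg.solve(D, L + U)
    rho = max(abs(np.linalg.eigvals(P)))
    print(f"spectral radius = {rho:.4f}, convergent: {rho < 1}")
    ```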

    The Jacobi and Gauss-Seidel methods are the classical methods used for diagonally dominant systems; both are obtained by splitting the coefficient matrix into three matrices. The Jacobi iterative scheme [2] can be expressed as:

    x^{(k)} = D^{-1}(L + U)x^{(k-1)} + D^{-1}b, (1.2)

    and similarly, for the Gauss-Seidel method [2], the iterative scheme is suggested as:

    x^{(k)} = (D - L)^{-1}Ux^{(k-1)} + (D - L)^{-1}b. (1.3)

    If the coefficient matrix A is strictly diagonally dominant, the Jacobi and Gauss-Seidel methods converge for any x^{(0)}. However, the Gauss-Seidel method converges more rapidly than the Jacobi method [2,4].

    The successive over-relaxation (SOR) technique

    x^{(k)} = (D - wL)^{-1}[(1 - w)D + wU]x^{(k-1)} + w(D - wL)^{-1}b, (1.4)

    is nicely addressed in the literature [2,5,6]. The parameter w of SOR is required to lie between zero and two, and for each particular matrix the optimal value of w is discussed very comprehensively in [7].

    In 1978, the accelerated over-relaxation (AOR) method was presented by Hadjidimos as a two-parameter modification of the successive over-relaxation (SOR) method [8]. In most cases, the AOR technique improves on the Jacobi, Gauss-Seidel, and SOR methods [8,9,10,11]. The significance of the AOR method can be seen in [9,12,13,14], sufficient conditions for its convergence are discussed in [15,16,17,18,19], and various aspects of its applications can be studied in [21,22,23]. The literature also contains preconditioned AOR techniques for improving the convergence rate of the AOR method [24,25,26,27,28,29]. Krylov subspace techniques [3,30,31,32] are recognized as among the most significant and effective iterative approaches for solving sparse linear systems, because they are inexpensive to implement and fully exploit the sparsity of the coefficient matrix; their drawback is that they become extremely slow, or fail to converge, when the coefficient matrix of the system is ill-conditioned or excessively indefinite.

    The purpose of this paper is to present a new iterative method for solving systems of linear equations (1.1) which generalizes existing methods and converges faster than the Jacobi, Gauss-Seidel, SOR, and AOR methods. In Section 2, the generalized iterative scheme is developed for obtaining the best approximate solution. In Section 3, the convergence of the proposed iterative scheme is discussed. Numerical and graphical results are discussed in Section 4.

    In this section, we construct a generalized iterative scheme for solving the system of linear equations (1.1). The Jacobi, Gauss-Seidel, SOR, and AOR methods are special cases of the presented scheme.

    System (1.1) can be written as:

    wAx = wb, (2.1)

    where 0 < w < 2, and

    w(D - L - U)x = wb. (2.2)

    Here we have split the matrix A as A = D - L - U, where D is a diagonal matrix, L is a strictly lower triangular matrix, and U is a strictly upper triangular matrix.

    The above Eq (2.2) can be rewritten as:

    (D - rL - tU)x = [(1 - w)D + (w - r)L + (w - t)U]x + wb. (2.3)

    Now (2.3) can be expressed as:

    x = (D - rL - tU)^{-1}[(1 - w)D + (w - r)L + (w - t)U]x + (D - rL - tU)^{-1}wb, (2.4)

    where 0 < t < w < r < 2.

    Relation (2.4) is a fixed-point formulation, which allows us to suggest the following iterative scheme.

    Algorithm 2.1. For a given initial vector x^{(0)}, find the approximate solution x^{(k)} from the following iterative scheme:

    x^{(k)} = (D - rL - tU)^{-1}[(1 - w)D + (w - r)L + (w - t)U]x^{(k-1)} + (D - rL - tU)^{-1}wb, k = 1, 2, 3, ...
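    A minimal NumPy sketch of Algorithm 2.1 follows (our illustration, not the authors' MATLAB code); the function name generalized_iteration and the default iteration cap are our choices, and the stopping rule anticipates the relative-error criterion stated in Section 4:

    ```python
    import numpy as np

    def generalized_iteration(A, b, w, r, t, x0=None, eps=1e-15, max_iter=1000):
        """Algorithm 2.1: iterate x(k) = M^{-1}[N x(k-1) + w b], where
        M = D - rL - tU and N = (1-w)D + (w-r)L + (w-t)U."""
        D = np.diag(np.diag(A))
        L = -np.tril(A, k=-1)   # splitting A = D - L - U
        U = -np.triu(A, k=1)
        M = D - r * L - t * U
        N = (1 - w) * D + (w - r) * L + (w - t) * U
        x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, dtype=float)
        for k in range(1, max_iter + 1):
            x_new = np.linalg.solve(M, N @ x + w * b)
            # relative-error stopping criterion of Section 4
            if np.linalg.norm(x_new - x) <= eps * np.linalg.norm(x_new):
                return x_new, k
            x = x_new
        return x, max_iter
    ```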

    Algorithm 2.1 is the main iterative scheme, and it converges to the solution rapidly in comparison with the other methods. It is a generalized scheme for obtaining the solution of a system of linear equations. We now present some special cases.

    If t = 0, Algorithm 2.1 reduces to the following iterative scheme.

    Algorithm 2.2. For a given initial vector x^{(0)}, find the approximate solution x^{(k)} from the following technique:

    x^{(k)} = (D - rL)^{-1}[(1 - w)D + (w - r)L + wU]x^{(k-1)} + (D - rL)^{-1}wb, k = 1, 2, 3, ...

    which is the well-known AOR method [2,3].

    If t = 0 and w = r, Algorithm 2.1 reduces to the following SOR method [2,3].

    Algorithm 2.3. For a given initial vector x^{(0)}, find the approximate solution x^{(k)} from the following technique:

    x^{(k)} = (D - wL)^{-1}[(1 - w)D + wU]x^{(k-1)} + (D - wL)^{-1}wb, k = 1, 2, 3, ...

    If t = 0 and w = r = 1, Algorithm 2.1 reduces to the following scheme.

    Algorithm 2.4. For a given initial vector x^{(0)}, find the approximate solution x^{(k)} from the following technique:

    x^{(k)} = (D - L)^{-1}Ux^{(k-1)} + (D - L)^{-1}b, k = 1, 2, 3, ...

    Algorithm 2.4 is the Gauss-Seidel method [2,3].

    If t = r = 0 and w = 1, Algorithm 2.1 reduces to the following scheme.

    Algorithm 2.5. For a given initial vector x^{(0)}, find the approximate solution x^{(k)} from the following technique:

    x^{(k)} = D^{-1}(L + U)x^{(k-1)} + D^{-1}b, k = 1, 2, 3, ...

    Algorithm 2.5 is the well-known Jacobi method [2,3].
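    Assuming the generalized_iteration sketch given after Algorithm 2.1, the special cases above correspond to the following parameter choices (the small system A, b is an arbitrary illustration):

    ```python
    import numpy as np

    A = np.array([[ 4.0, -1.0],
                  [-2.0,  5.0]])
    b = np.array([3.0, 3.0])

    x_aor, _ = generalized_iteration(A, b, w=1.02, r=1.05, t=0.0)  # Algorithm 2.2 (AOR)
    x_sor, _ = generalized_iteration(A, b, w=1.02, r=1.02, t=0.0)  # Algorithm 2.3 (SOR, w = r)
    x_gs,  _ = generalized_iteration(A, b, w=1.0,  r=1.0,  t=0.0)  # Algorithm 2.4 (Gauss-Seidel)
    x_jac, _ = generalized_iteration(A, b, w=1.0,  r=0.0,  t=0.0)  # Algorithm 2.5 (Jacobi)
    ```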

    In this section, we consider the convergence analysis of the newly developed iterative scheme, Algorithm 2.1:

    x^{(k)} = (D - rL - tU)^{-1}[(1 - w)D + (w - r)L + (w - t)U]x^{(k-1)} + (D - rL - tU)^{-1}wb.

    For brevity, we write T = (D - rL - tU)^{-1}[(1 - w)D + (w - r)L + (w - t)U] for the iteration matrix, so that the scheme reads x^{(k)} = Tx^{(k-1)} + (D - rL - tU)^{-1}wb.

    Lemma 3.1. [2] If the spectral radius satisfies ρ(T) < 1, then (I - T)^{-1} exists, and

    (I - T)^{-1} = I + T + T^2 + ⋯ = Σ_{j=0}^{∞} T^j. (3.1)

    Theorem 3.2. For any given x^{(0)} ∈ R^n, the sequence {x^{(k)}}_{k=0}^{∞} defined by

    x^{(k)} = Tx^{(k-1)} + (D - rL - tU)^{-1}wb,

    for each k ≥ 1, converges to the unique solution of

    x = Tx + (D - rL - tU)^{-1}wb,

    if and only if ρ(T) < 1.

    Proof. Suppose ρ(T) < 1. Consider the iterative scheme suggested in Algorithm 2.1,

    x^{(k)} = Tx^{(k-1)} + (D - rL - tU)^{-1}wb,

    which can be rewritten as:

    x^{(k)} = T[Tx^{(k-2)} + (D - rL - tU)^{-1}wb] + (D - rL - tU)^{-1}wb = ⋯ = T^k x^{(0)} + [T^{k-1} + ⋯ + T + I](D - rL - tU)^{-1}wb. (3.2)

    Since ρ(T) < 1, the powers of T converge to the zero matrix, so

    lim_{k→∞} T^k x^{(0)} = 0,

    and Lemma 3.1 implies that

    lim_{k→∞} x^{(k)} = 0 + lim_{k→∞} [Σ_{j=0}^{k-1} T^j](D - rL - tU)^{-1}wb = [I - T]^{-1}(D - rL - tU)^{-1}wb.

    As a result, the sequence {x^{(k)}} converges to the vector

    x = [I - T]^{-1}(D - rL - tU)^{-1}wb,

    which is the unique solution of x = Tx + (D - rL - tU)^{-1}wb.

    We can also view the convergence criterion of the proposed method as an application of the Banach fixed point theorem [33]. The system of linear equations, with the parameters included, can be written in equation form as:

    x_1 = (1 - wa_{11})x_1 - (w - 2t)a_{12}x_2 - ⋯ - (w - 2t)a_{1n}x_n + wb_1,
    x_2 = -(w - 2r)a_{21}x_1 + (1 - wa_{22})x_2 - ⋯ - (w - 2t)a_{2n}x_n + wb_2,
      ⋮
    x_n = -(w - 2r)a_{n1}x_1 - (w - 2r)a_{n2}x_2 - ⋯ + (1 - wa_{nn})x_n + wb_n. (3.3)

    This system is equivalent to

    x = cx + d (3.4)

    with d = wb and

    c_{ij} = { 1 - wa_{ij}, if i = j; -(w - 2t)a_{ij}, if i < j; -(w - 2r)a_{ij}, if i > j. }

    The solution can be obtained by

    x^{(k+1)} = cx^{(k)} + d. (3.5)

    The iteration method is defined componentwise by

    x_j^{(k+1)} = (1/c_{jj})(γ_j - Σ_{k=1, k≠j}^{n} c_{jk}x_k^{(k)}), (3.6)

    where we assume that c_{jj} ≠ 0 for j = 1, ..., n. This iteration is suggested for the jth equation of the system. It is not difficult to verify that (3.6) can be written in the form of (3.5) with

    c = (D - rL - tU)^{-1}[(1 - w)D + (w - r)L + (w - t)U], (3.7)

    and

    d = (D - rL - tU)^{-1}wb. (3.8)

    Here D = diag(c_{jj}) is the diagonal matrix whose non-zero elements are those of the principal diagonal of A. The condition of diagonal dominance applied to c is sufficient for the convergence of Algorithm 2.1. We can express it directly in terms of the elements of A; the resulting row-sum criterion for convergence is

    Σ_{k=1, k≠j}^{n} |a_{jk}/a_{jj}| < 1, (3.9)

    or

    Σ_{k=1, k≠j}^{n} |a_{jk}| < |a_{jj}|. (3.10)

    This shows that convergence is guaranteed if the elements on the principal diagonal of A are sufficiently large.
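    Criterion (3.10) is straightforward to check programmatically; a small sketch (ours), in the same NumPy setting as the earlier examples:

    ```python
    import numpy as np

    def is_strictly_diagonally_dominant(A):
        """Row-sum criterion (3.10): the sum of |a_jk| over k != j
        stays below |a_jj| in every row."""
        diag = np.abs(np.diag(A))
        off = np.abs(A).sum(axis=1) - diag
        return bool(np.all(off < diag))
    ```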

    Note that all the components of a new approximation are introduced simultaneously at the end of an iteration cycle.

    In this section, we provide a few numerical applications to illustrate the efficiency of the newly developed three-parameter iterative scheme, Algorithm 2.1, on some systems of linear equations with 0 < t < w < r < 2, whose coefficient matrices satisfy

    max_{1≤i≤n-1} u_i = α and max_{2≤i≤n} l_i = β, with α + β < 1,

    where

    l_i = max_{1≤j≤i-1} β_{ij}, for i = 2, 3, ..., n,

    and

    u_i = max_{i+1≤j≤n} α_{ij}, for i = 1, 2, ..., n-1.

    In this part, we compare our developed scheme with the previous techniques, namely the AOR, SOR, Jacobi, and Gauss-Seidel methods. All computations are carried out in MATLAB. We use ε = 10^{-15}, and the following stopping criterion is used in the computer programs:

    ||x^{(k)} - x^{(k-1)}|| / ||x^{(k)}|| ≤ ε.

    This stopping criterion is derived from the relative error, and the infinite sequence generated by the computer code is truncated at the stage when the criterion is satisfied. We consider the following examples to compare the newly developed method, Algorithm 2.1 (Alg 2.1), with the iterative methods AOR (Alg 2.2), SOR (Alg 2.3), Gauss-Seidel (Alg 2.4), and Jacobi (Alg 2.5), and to analyze the new scheme's feasibility and effectiveness.
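    A comparison driver of the kind used to produce the tables below might look as follows (our reconstruction, assuming the generalized_iteration sketch from Section 2; the settings mirror the special cases of Algorithms 2.2-2.5, and iteration counts may differ slightly from the reported values depending on floating-point details):

    ```python
    def compare_methods(A, b, w, r, t):
        """Run Alg 2.1 and its special cases on one system; report iteration counts."""
        settings = {
            "Alg 2.1":                (w, r, t),
            "Alg 2.2 (AOR)":          (w, r, 0.0),
            "Alg 2.3 (SOR)":          (w, w, 0.0),
            "Alg 2.4 (Gauss-Seidel)": (1.0, 1.0, 0.0),
            "Alg 2.5 (Jacobi)":       (1.0, 0.0, 0.0),
        }
        for name, (wi, ri, ti) in settings.items():
            _, iters = generalized_iteration(A, b, wi, ri, ti, eps=1e-15)
            print(f"{name:24s} iterations: {iters}")
    ```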

    For the numerical and graphical comparison of methods, we select some examples from the literature.

    Example 4.1. [3] We consider a problem in which the loop-current approach is combined with Ohm's law and Kirchhoff's voltage law. Each loop in the network is assumed to carry a circulating loop current. Thus, the loop current I_1 circulates around the closed loop a, b, c, d in the network shown in Figure 1; as a result, the current I_1 - I_2 passes through the link joining b and c.

    Figure 1.  Loop-current network.

    From the network shown in Figure 1, letting R_1 = R_4 = 1Ω, R_2 = 2Ω, R_3 = 4Ω, and V = 5 volts, we obtain the following four-variable linear system:

    4I_1 - 2I_2 = 5,
    -2I_1 + 6I_2 - 2I_3 = 0,
    -2I_2 + 6I_3 - 2I_4 = 0,
    -2I_3 + 8I_4 = 0.
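    In matrix form, this system can be set up and solved with the sketches above (our illustration; the parameters are those reported for Alg 2.1 in Table 1):

    ```python
    import numpy as np

    A = np.array([[ 4.0, -2.0,  0.0,  0.0],
                  [-2.0,  6.0, -2.0,  0.0],
                  [ 0.0, -2.0,  6.0, -2.0],
                  [ 0.0,  0.0, -2.0,  8.0]])
    b = np.array([5.0, 0.0, 0.0, 0.0])

    x, iters = generalized_iteration(A, b, w=1.02, r=1.05, t=0.88)
    print(iters, x)   # approximate loop currents I1..I4
    ```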

    Table 1 displays the numerical results for Example 4.1, which indicate that Alg 2.1 is more efficient than the other methods.

    Table 1.  Tabular comparison.
    Methods Parameters Iterations Relative error
    Alg 2.1 w=1.02, r=1.05, t=0.88 14 8.9966e-18
    Alg 2.2 w=1.02, r=1.05 28 2.8789e-16
    Alg 2.3 w=1.02 30 2.8789e-16
    Alg 2.4 ... 32 4.3184e-16
    Alg 2.5 ... 61 7.1973e-16


    In Figure 2, the decay of the residual for the different methods shows that the new method converges faster than the other methods. Figure 3 compares the iteration counts of the different algorithms, showing that the new iterative method described in Alg 2.1 is more efficient than the methods described in Algs 2.2-2.5.

    Figure 2.  Log of residual.
    Figure 3.  Comparison of iterations.

    Example 4.2. [34] Consider the following system of the form

    x_1 + 0.250x_2 = 0.75,
    0.250x_1 + x_2 + 0.250x_3 = 1.50,
    0.250x_2 + x_3 + 0.250x_4 = 1.50,
    0.250x_3 + x_4 + 0.250x_5 = 1.50,
    0.250x_4 + x_5 = 1.25.

    Table 2 displays the numerical results which indicate that Alg 2.1 is more efficient than the other techniques.

    Table 2.  Tabular comparison.
    Methods Parameters Iterations Relative error
    Alg 2.1 w=1.01, r=1.06, t=0.86 13 5.8249e-16
    Alg 2.2 w=1.01, r=1.06 19 4.8541e-16
    Alg 2.3 w=1.01 23 2.4271e-16
    Alg 2.4 ... 24 2.4271e-16
    Alg 2.5 ... 44 7.7666e-16


    The decay of the residual for the different techniques can be seen in Figure 4, which illustrates that the new method converges more rapidly than the other methods. Figure 5 compares the iteration counts of the different algorithms, showing that the new iterative method described in Alg 2.1 is more efficient than the methods described in Algs 2.2-2.5.

    Figure 4.  Log of residual.
    Figure 5.  Comparison of iterations.

    Example 4.3. [2] Consider the following system of linear equations of the form

    4x_1 - x_2 - x_3 = 1,
    -x_1 + 4x_2 - x_4 = 1,
    -x_1 + 4x_3 - x_4 = 1,
    -x_2 - x_3 + 4x_4 = 1.

    Table 3 displays the numerical results for Example 4.3, which indicate that Alg 2.1 is more efficient than the other methods.

    Table 3.  Tabular comparison.
    Methods Parameters Iterations Relative error
    Alg 2.1 w=1.05, r=1.07, t=0.9 15 5.5511e-16
    Alg 2.2 w=1.05, r=1.07 19 9.9920e-16
    Alg 2.3 w=1.05 22 5.5511e-16
    Alg 2.4 ... 28 4.4409e-16
    Alg 2.5 ... 51 8.8818e-16


    In Figure 6, the decay of the residual for the different methods shows that the new method converges faster than the other methods. Figure 7 compares the iteration counts of the different algorithms, showing that the new iterative method described in Alg 2.1 is more efficient than the methods described in Algs 2.2-2.5.

    Figure 6.  Log of residual.
    Figure 7.  Comparison of iterations.

    Example 4.4. [35] Let the matrix A be given by

    a_{i,j} = 8, if j = i;
    a_{i,j} = -1, if j = i + 1 (for i = 1, 2, ..., n - 1) or j = i - 1 (for i = 2, 3, ..., n);
    a_{i,j} = 0, otherwise.

    Let b = (6, 5, 5, ..., 5, 6)^T, and take n = 100.
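    This tridiagonal system is easy to generate; a sketch (ours), reusing generalized_iteration with the Alg 2.1 parameters of Table 4:

    ```python
    import numpy as np

    n = 100
    A = 8.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiagonal (-1, 8, -1)
    b = np.full(n, 5.0)
    b[0] = b[-1] = 6.0

    x, iters = generalized_iteration(A, b, w=1.02, r=0.97, t=0.50)
    ```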

    Table 4 displays the numerical results for Example 4.4, which indicate that Alg 2.1 is more efficient than the other techniques.

    Table 4.  Tabular comparison.
    Methods Parameters Iterations Relative error
    Alg 2.1 w=1.02, r=0.97, t=0.50 15 3.8978e-16
    Alg 2.2 w=1.02, r=0.97 19 7.7956e-16
    Alg 2.3 w=1.02 19 3.8978e-16
    Alg 2.4 ... 20 5.1970e-16
    Alg 2.5 ... 27 6.4963e-16


    The decay of the residual for the different methods can be seen in Figure 8, which illustrates that the new method converges more rapidly than the other methods. Figure 9 compares the iteration counts of the different algorithms, showing that the new iterative method described in Alg 2.1 is more efficient than the methods described in Algs 2.2-2.5.

    Figure 8.  Log of residual.
    Figure 9.  Comparison of iterations.

    Example 4.5. [2,36] Consider the system (1.1), whose coefficient matrix A is given by

    a_{ij} = 2i, if i = j, for i = 1, 2, ..., 1000;
    a_{ij} = -1, if j = i + 1 (for i = 1, 2, ..., 999) or j = i - 1 (for i = 2, 3, ..., 1000);
    a_{ij} = 0, otherwise;

    and b_i = 1.5i - 6 for each i = 1, 2, ..., 1000.
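    A sketch (ours) of the construction for n = 1000, again reusing the solver with the Alg 2.1 parameters of Table 5:

    ```python
    import numpy as np

    n = 1000
    i = np.arange(1, n + 1)
    A = np.diag(2.0 * i) - np.eye(n, k=1) - np.eye(n, k=-1)   # a_ii = 2i, off-diagonals -1
    b = 1.5 * i - 6.0

    x, iters = generalized_iteration(A, b, w=1.021, r=1.079, t=0.98)
    ```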

    Table 5 shows the numerical results for Example 4.5, which indicate that Alg 2.1 is much more efficient than the other techniques.

    Table 5.  Tabular comparison.
    Methods Parameters Iterations Relative error
    Alg 2.1 w=1.021, r=1.079, t=0.98 13 3.6092e-17
    Alg 2.2 w=1.021, r=1.079 18 4.3310e-16
    Alg 2.3 w=1.021 20 2.8873e-16
    Alg 2.4 ... 22 5.7747e-16
    Alg 2.5 ... 41 5.7747e-16


    The decay of the residual for the different techniques can be seen in Figure 10, which illustrates that the new method converges more rapidly than the other methods. Figure 11 compares the iteration counts of the different algorithms, showing that the new iterative method described in Alg 2.1 is more efficient than the methods described in Algs 2.2-2.5.

    Figure 10.  Log of residual.
    Figure 11.  Comparison of iterations.

    In Table 6, IT stands for the number of iterations; the comparison shows how the choice of parameters affects the performance of the new iterative method.

    Table 6.  Comparison table for Algorithm 2.1 with various combinations of parameters.
    Parameters Example 4.1 Example 4.2 Example 4.3 Example 4.4 Example 4.5
    w r t IT IT IT IT IT
    0.2 0.7 0.9 186 160 181 160 169
    0.4 0.5 0.8 102 82 96 77 86
    0.6 0.5 0.8 63 50 58 46 52
    0.3 0.8 0.5 138 111 132 108 120
    0.2 0.8 0.3 229 186 217 174 195
    0.3 0.8 1.2 96 97 97 97 95
    0.3 0.8 0.2 155 124 145 114 129
    0.5 0.8 0.3 85 67 79 61 70
    0.5 0.8 0.5 77 61 73 59 66
    0.8 0.4 0.4 57 40 50 33 42
    0.8 0.5 0.7 46 34 41 30 36
    0.9 0.5 0.8 36 27 32 23 28
    0.9 1.04 0.5 24 21 22 21 21
    1.02 1.08 0.8 15 14 14 12 14
    1.03 1.09 0.9 14 13 14 12 13


    In this article, a new generalized iterative scheme is suggested for solving systems of linear equations. We have studied the convergence criteria of this iterative scheme. The scheme is not only a generalization of existing schemes but also gives good results in comparison with them, and it is suitable for sparse matrices. Numerical results show that this scheme is more effective than the conventional schemes. We also propose that the given scheme can be extended to absolute value problems of the type Ax + B|x| = b.

    All authors declare no conflicts of interest in this paper.

    The authors would like to thank the editor and the anonymous reviewers for their constructive comments and suggestions, which improved the quality of this paper.



    [1] G. Cramer, Introduction à l'analyse des lignes courbes algébriques, Geneva: Frères Cramer & Cl. Philibert, 1750.
    [2] R. L. Burden, J. D. Faires, Numerical analysis, Boston: PWS, 1980.
    [3] Y. Saad, Iterative methods for sparse linear systems, SIAM, 2003. https://doi.org/10.1137/1.9780898718003
    [4] D. K. Salkuyeh, Generalized Jacobi and Gauss-Seidel methods for solving linear system of equations, Numer. Math. J. Chin. Univ., 16 (2007), 164–170.
    [5] R. S. Varga, Iterative analysis, Berlin: Springer, 1962.
    [6] D. M. Young, Iterative Solution of Large Linear Systems, Elsevier, 2014.
    [7] C. E. Froberg, Numerical Mathematics: Theory and computer applications, Basic Books, 1985.
    [8] A. Hadjidimos, Accelerated overrelaxation method, Math. Comput., 32 (1978), 149–157. http://doi.org/10.2307/2006264 doi: 10.2307/2006264
    [9] G. Avdelas, A. Hadjidimos, A. Yeyios, Some theoretical and computational results concerning the accelerated overrelaxation (AOR) method, Math. Rev. Anal. Numér. Théor. Approximation, 9 (1980), 5–10.
    [10] A. I. Faruk, A. Ndanusa, Improvements of successive overrelaxation iterative (SOR) method for L-matrices, SJBAS, 1 (2020), 218–223.
    [11] Z. Mayaki, A. Ndanusa, Modified successive overrelaxation (SOR) type methods for M-matrices, Sci. World J., 14 (2019), 1–5.
    [12] K. Audu, Y. Yahaya, K. Adeboye, U. Abubakar, A. Ndanusa, Triple accelerated over-relaxation method for system of linear equations, Int. J. Math. Educ. Sci. Technol., 16 (2020), 137–146.
    [13] Z. Z. Bai, The monotone convergence rate of the parallel nonlinear AOR method, Comput. Math. Appl., 31 (1996), 1–8. https://doi.org/10.1016/0898-1221(96)00013-2 doi: 10.1016/0898-1221(96)00013-2
    [14] Z. Z. Bai, Asynchronous multisplitting AOR methods for a class of systems of weakly nonlinear equations, Appl. Math. Comput., 98 (1999), 49–59. https://doi.org/10.1016/S0096-3003(97)10154-0 doi: 10.1016/S0096-3003(97)10154-0
    [15] R. Ali, I. Khan, A. Ali, A. Mohamed, Two new generalized iteration methods for solving absolute value equations using m-matrix, AIMS Mathematics, 7 (2022), 8176–8187. https://doi.org/10.3934/math.2022455 doi: 10.3934/math.2022455
    [16] L. Cvetkovic, V. Kostic, A note on the convergence of the AOR method, Appl. Math. Comput., 194 (2007), 394–399. https://doi.org/10.1016/j.amc.2007.04.030 doi: 10.1016/j.amc.2007.04.030
    [17] M. Fallah, S. Edalatpanah, On the some new preconditioned generalized AOR methods for solving weighted linear least squares problems, IEEE, 8 (2020), 33196–33201. https://doi.org/10.1007/s40314-016-0350-8 doi: 10.1007/s40314-016-0350-8
    [18] Z. X. Gao, T. Z. Huang, Convergence of AOR method, Appl. Math. Comput., 176 (2006), 134–140. https://doi.org/10.1016/j.amc.2005.09.020
    [19] F. Hailu, G. G. Gonfa, H. M. Chemeda, Second degree generalized successive over relaxation method for solving system of linear equations, MEJS, 2 (2020), 60–71. https://doi.org/10.4314/mejs.v12i1.4 doi: 10.4314/mejs.v12i1.4
    [20] V. Kumar Vatti, G. Chinna Rao, S. S. Pai, Parametric Accelerated Over Relaxation (PAOR) method, Adv. Intell. Syst. Comput., 979 (2020), 283–288. https://doi.org/10.1007/978-981-15-3215-3-27 doi: 10.1007/978-981-15-3215-3-27
    [21] W. Li, W. Sun, Comparison results for parallel multisplitting methods with applications to AOR methods, Linear Algebra Appl., 331 (2001), 131–144. https://doi.org/10.1016/S0024-3795(01)00276-2 doi: 10.1016/S0024-3795(01)00276-2
    [22] A. Yeyios, A necessary condition for the convergence of the accelerated overrelaxation (AOR) method, J. Comput. Appl. Math., 26 (1989), 371–373. https://doi.org/10.1016/0377-0427(89)90309-9 doi: 10.1016/0377-0427(89)90309-9
    [23] J. Y. Yuan, X. Q. Jin, Convergence of the generalized AOR method, Appl. Math. Comput., 99 (1999), 35–46. https://doi.org/10.1016/S0096-3003(97)10175-8 doi: 10.1016/S0096-3003(97)10175-8
    [24] Y. T. Li, C. X. Li, S. L. Wu, Improvements of preconditioned AOR iterative method for L-matrices, J. Comput. Appl. Math., 206 (2007), 656–665. https://doi.org/10.1016/j.cam.2006.08.019 doi: 10.1016/j.cam.2006.08.019
    [25] Y. T. Li, C. X. Li, S. L. Wu, Improving AOR method for consistent linear systems, Appl. Math. Comput., 186 (2007), 379–388. https://doi.org/10.1016/j.amc.2006.07.097 doi: 10.1016/j.amc.2006.07.097
    [26] Z. Q. Wang, Optimization of the parameterized Uzawa preconditioners for saddle point matrices, J. Comput. Appl. Math., 226 (2009), 136–154. https://doi.org/10.1016/j.cam.2008.05.019 doi: 10.1016/j.cam.2008.05.019
    [27] M. Wu, L. Wang, Y. Song, Preconditioned AOR iterative method for linear systems, Appl. Numer. Math., 57 (2007), 672–685. https://doi.org/10.1016/j.apnum.2006.07.029 doi: 10.1016/j.apnum.2006.07.029
    [28] S. Wu, T. Huang, A modified AOR-type iterative method for L-matrix linear systems, ANZIAM, 49 (2007), 281–292. https://doi.org/10.1017/S1446181100012840 doi: 10.1017/S1446181100012840
    [29] J. H. Yun, Comparison results of the preconditioned AOR methods for L-matrices, Appl. Math. Comput., 218 (2011), 3399–3413. https://doi.org/10.1016/j.amc.2011.08.085 doi: 10.1016/j.amc.2011.08.085
    [30] J. W. Pearson, J. Pestana, Preconditioners for Krylov subspace methods: An overview, GAMM-Mitt., 43 (2020), e202000015. https://doi.org/10.1002/gamm.202000015 doi: 10.1002/gamm.202000015
    [31] Z. Z. Bai, Sharp error bounds of some Krylov subspace methods for non-Hermitian linear systems, Appl. Math. Comput., 109 (2000), 273–285.
    [32] R. Kehl, R. Nabben, D. B. Szyld, Adaptive multilevel Krylov methods, Electron. Trans. Numer. Anal., 51 (2019).
    [33] E. Kreyszig, Introductory Functional analysis with applications, Wiley, 1991.
    [34] M. Darivishi, The best values of parameters in accelerated successive overrelaxation methods, WSEAS Trans. Math., 3 (2004), 505–510.
    [35] M. A. Noor, J. Iqbal, K. I. Noor, E. Al-Said, On an iterative method for solving absolute value equations, Optim. Lett., 6 (2012), 1027–1033. https://doi.org/10.1007/s11590-011-0332-0 doi: 10.1007/s11590-011-0332-0
    [36] M. A. Noor, K. I. Noor, M. Waseem, A new decomposition technique for solving a system of linear equations, J. Assoc. Arab Univ. Basic Appl. Sci., 16 (2014), 27–33. http://doi.org/10.1016/j.jaubas.2013.07.001 doi: 10.1016/j.jaubas.2013.07.001
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)