
In this article, the finite-time anti-synchronization (FTAS) of master-slave 6D Lorenz systems (MS6DLSS) is discussed. Instead of relying on previously used methods, two new tools are introduced, namely the properties of quadratic inequalities of one variable and the negative definiteness of quadratic forms of a matrix, and with them two criteria for the FTAS of the discussed MS6DLSS are obtained. Until now, existing results on the FTAS of chaotic systems have usually been obtained via the linear matrix inequality (LMI) method and finite-time stability theorems (FTST); studying the FTAS of the MS6DLSS with the new methods yields novel FTAS results and constitutes innovative work.
Citation: Hu Tang, Kaiyu Liu, Zhengqiu Zhang. Finite-time anti-synchronization of a 6D Lorenz systems[J]. AIMS Mathematics, 2024, 9(12): 35931-35948. doi: 10.3934/math.20241703
Consider the general framework of a system of linear equations
Ax=b, | (1.1) |
where A∈Rn×n is the coefficient matrix, b∈Rn is a constant vector, and x∈Rn is an unknown vector. Various problems arising in fields such as computer science, electrical engineering, mechanical engineering, and economics are modeled within this general framework of a system of linear equations (1.1).
The importance of methods for systems of linear equations cannot be denied, since such systems arise in almost every field. The Babylonians first considered systems of linear equations in two unknowns about 4000 years ago. Later, Cramer [1] gave the idea of solving systems of linear equations using determinants. In the nineteenth century, Gauss introduced a method to solve the linear system (1.1) by eliminating the variables one by one and then using backward substitution. Many other methods also exist in the literature for solving (1.1). Usually, these methods are classified into two categories, called direct and iterative methods.
The objective of a direct method is to obtain the exact solution in a minimal number of operations, while an iterative method starts with an initial guess and produces an infinite sequence of approximations converging toward the exact solution. This sequence is truncated using a suitable stopping criterion. Direct methods include the Gauss elimination method, the Gauss-Jordan elimination method, the Cholesky method, and the LU decomposition method [2]. Large and sparsely populated systems often arise when solving partial differential equations numerically or dealing with optimization problems; for such cases the conjugate gradient method is implemented, and it is also suggested for sparse systems [3]. Direct methods are ineffective for systems consisting of a large number of equations, especially when the coefficient matrix is sparse.
Iterative methods build successive approximations to the solution of system (1.1) at each step, starting from a given initial approximation. Iterative methods can be further categorized into stationary and non-stationary methods. Stationary methods are the older and more straightforward ones, involving an iteration matrix that remains constant throughout all iterations; examples of stationary iterative methods are the Jacobi method, the Gauss-Seidel method, and the Successive Over-Relaxation method [2]. The computations in non-stationary methods involve information that changes at each iteration, typically derived from inner products of residuals [2].
We observe that in any such iterative method the system may be represented in the form x=Px+c, and the iterative scheme x(k+1)=Px(k)+c is suggested, starting from an initial approximation x(0), to obtain the best approximate solution. The iterative method is convergent if and only if ρ(P)<1, where ρ(P) is the spectral radius of P. To obtain the iterative scheme we partition A=(aij) as A=D−L−U, where D=diag(aii), and L and U are strictly lower and strictly upper triangular matrices, respectively.
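The fixed-point iteration above can be sketched in a few lines. The following is an illustrative pure-Python example (not from the paper, which works in MATLAB): P is a hand-picked 2×2 matrix with ρ(P)=0.5<1, so x(k+1)=Px(k)+c converges to the fixed point of x=Px+c.

```python
# Illustrative sketch: iterating x = P x + c converges exactly when rho(P) < 1.

def mat_vec(P, x):
    return [sum(P[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

P = [[0.5, 0.0],
     [0.0, 0.25]]          # eigenvalues 0.5 and 0.25, so rho(P) = 0.5 < 1
c = [1.0, 3.0]

x = [0.0, 0.0]             # initial approximation x(0)
for _ in range(60):        # x(k+1) = P x(k) + c
    x = [pi + ci for pi, ci in zip(mat_vec(P, x), c)]

# The fixed point solves (I - P) x = c; here it is x = (2, 4).
print(x)
```

Since the error contracts by a factor ρ(P)=0.5 per step, 60 iterations bring it far below machine precision.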
The Jacobi method and the Gauss-Seidel method are the classical methods used for diagonally dominant systems, obtained by splitting the coefficient matrix into three matrices. For the Jacobi method, the iterative scheme [2] can be expressed as:
x(k)=D−1(L+U)x(k−1)+D−1b, | (1.2) |
and similarly, for Gauss-Seidel method [2], the iterative scheme is suggested as:
x(k)=(D−L)−1Ux(k−1)+(D−L)−1b. | (1.3) |
If the coefficient matrix A is strictly diagonally dominant, the Jacobi and Gauss-Seidel methods converge for any x(0). However, the Gauss-Seidel method converges more rapidly than the Jacobi method [2,4].
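The faster convergence of Gauss-Seidel can be seen on a small example. The following pure-Python comparison (illustrative, not the paper's MATLAB code) runs both sweeps on a strictly diagonally dominant 3×3 system and counts iterations to the same tolerance.

```python
# Jacobi vs Gauss-Seidel on a strictly diagonally dominant system.

A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]       # exact solution is x = (1, 2, 3)

def jacobi_sweep(x):
    # every component uses only values from the previous iterate
    return [(b[i] - sum(A[i][j] * x[j] for j in range(3) if j != i)) / A[i][i]
            for i in range(3)]

def gauss_seidel_sweep(x):
    x = x[:]               # new values are used as soon as they are available
    for i in range(3):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(3) if j != i)) / A[i][i]
    return x

def count_iters(sweep, tol=1e-12):
    x = [0.0, 0.0, 0.0]
    for k in range(1, 1000):
        x_new = sweep(x)
        if max(abs(u - v) for u, v in zip(x_new, x)) <= tol:
            return k, x_new
        x = x_new

kj, xj = count_iters(jacobi_sweep)
kg, xg = count_iters(gauss_seidel_sweep)
print(kj, kg)              # Gauss-Seidel needs fewer sweeps
```

Both iterations converge to (1, 2, 3), with the Gauss-Seidel count roughly half the Jacobi count for this matrix.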
The Successive Over-Relaxation (SOR) techniques
x(k)=(D−wL)−1((1−w)D+wU)x(k−1)+w(D−wL)−1b, | (1.4) |
are nicely addressed in the literature [2,5,6]. The parameter w of SOR is required to lie between zero and two, and the optimal value of w for each particular matrix has been discussed comprehensively [7].
In 1978, the accelerated over-relaxation (AOR) method was first presented by Hadjidimos as a two-parameter modification of the successive over-relaxation (SOR) method [8]. In most cases, the AOR technique improves on the Jacobi, Gauss-Seidel, and SOR methods [8,9,10,11]. The significance of the AOR method can be seen in [9,12,13,14], and sufficient conditions for its convergence are discussed in [15,16,17,18,19]. Various applications of the AOR method can also be studied in [21,22,23]. The literature also contains preconditioned AOR techniques that improve the convergence rate of the AOR method [24,25,26,27,28,29]. Krylov subspace techniques [3,30,31,32] are recognized as among the most significant and effective iterative approaches for solving sparse linear systems, because they are inexpensive to implement and fully exploit the sparsity of the coefficient matrix; their drawback is that they become extremely slow, or fail to converge, when the coefficient matrix is ill-conditioned or highly indefinite.
The purpose of this paper is to present a new iterative method for solving systems of linear equations (1.1), which generalizes the existing methods and converges faster than the Jacobi, Gauss-Seidel, SOR, and AOR methods. In Section 2, the generalized iterative scheme is developed. In Section 3, the convergence of the proposed iterative scheme is discussed. Numerical and graphical results are presented in Section 4.
In this section, we construct a generalized iterative scheme for solving the system of linear equations (1.1). The Jacobi, Gauss-Seidel, SOR, and AOR methods are special cases of the presented scheme.
System (1.1) can be written as:
wAx=wb, | (2.1) |
where 0<w<2 and
w(D−L−U)x=bw. | (2.2) |
We split the matrix A as A=D−L−U, where D is a diagonal matrix, L is a strictly lower triangular matrix, and U is a strictly upper triangular matrix.
Equation (2.2) above can be rewritten as:
(D−rL−tU)x=[(1−w)D+(w−r)L+(w−t)U]x+bw. | (2.3) |
Now (2.3) can be expressed as:
x=(D−rL−tU)−1[(1−w)D+(w−r)L+(w−t)U]x+(D−rL−tU)−1bw, | (2.4) |
where 0<t<w<r<2.
Relation (2.4) is a fixed-point formulation, which allows us to suggest the following iterative scheme.
Algorithm 2.1. For a given initial vector x(0), find the approximate solution x(k) from the following iterative scheme:
x(k)=(D−rL−tU)−1[(1−w)D+(w−r)L+(w−t)U]x(k−1)+(D−rL−tU)−1bw,k=1,2,3,... |
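A minimal sketch of Algorithm 2.1 in Python follows (the paper's experiments use MATLAB; this is an illustrative re-implementation). Since D−rL−tU is not triangular when t≠0, each iteration solves the system (D−rL−tU)x(k) = [(1−w)D+(w−r)L+(w−t)U]x(k−1) + wb with a small dense Gaussian elimination. The test matrix is the circuit system of Example 4.1 with the parameters reported in Table 1.

```python
# Illustrative implementation of Algorithm 2.1 with parameters (w, r, t).

def split(A):
    """A = D - L - U with D diagonal, L strictly lower, U strictly upper."""
    n = len(A)
    D = [[A[i][j] if i == j else 0.0 for j in range(n)] for i in range(n)]
    L = [[-A[i][j] if i > j else 0.0 for j in range(n)] for i in range(n)]
    U = [[-A[i][j] if i < j else 0.0 for j in range(n)] for i in range(n)]
    return D, L, U

def gauss_solve(M, rhs):
    """Solve M y = rhs by Gaussian elimination with partial pivoting."""
    n = len(M)
    M = [row[:] + [v] for row, v in zip(M, rhs)]      # augmented matrix
    for col in range(n):
        p = max(range(col, n), key=lambda q: abs(M[q][col]))
        M[col], M[p] = M[p], M[col]
        for q in range(col + 1, n):
            f = M[q][col] / M[col][col]
            for c in range(col, n + 1):
                M[q][c] -= f * M[col][c]
    y = [0.0] * n
    for i in range(n - 1, -1, -1):
        y[i] = (M[i][n] - sum(M[i][j] * y[j] for j in range(i + 1, n))) / M[i][i]
    return y

def generalized_iteration(A, b, w, r, t, tol=1e-12, max_iter=500):
    n = len(A)
    D, L, U = split(A)
    lhs = [[D[i][j] - r * L[i][j] - t * U[i][j] for j in range(n)] for i in range(n)]
    x = [0.0] * n
    for k in range(1, max_iter + 1):
        rhs = [sum(((1 - w) * D[i][j] + (w - r) * L[i][j] + (w - t) * U[i][j]) * x[j]
                   for j in range(n)) + w * b[i] for i in range(n)]
        x_new = gauss_solve(lhs, rhs)
        if max(abs(u - v) for u, v in zip(x_new, x)) <= tol * max(map(abs, x_new)):
            return x_new, k
        x = x_new
    return x, max_iter

A = [[4.0, -2.0, 0.0, 0.0],
     [-2.0, 6.0, -2.0, 0.0],
     [0.0, -2.0, 6.0, -2.0],
     [0.0, 0.0, -2.0, 8.0]]
b = [5.0, 0.0, 0.0, 0.0]
x, iters = generalized_iteration(A, b, w=1.02, r=1.05, t=0.88)
print(iters, x)
```

Setting t=0 recovers AOR, t=0 with w=r recovers SOR, and t=r=0 with w=1 recovers Jacobi, as described below.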
Algorithm 2.1 is the main iterative scheme; it converges to the solution more rapidly than the other methods. It is a generalized scheme for obtaining the solution of a system of linear equations. We now present some special cases.
If t=0, Algorithm 2.1 reduces to the following iterative scheme.
Algorithm 2.2. For a given initial vector x(0), find the approximate solution x(k) from the following technique:
x(k)=(D−rL)−1[(1−w)D+(w−r)L+wU]x(k−1)+(D−rL)−1bw,k=1,2,3,... |
which is the well-known AOR method [2,3].
If t=0 and w=r, Algorithm 2.1 reduces to the following SOR method [2,3].
Algorithm 2.3. For a given initial vector x(0), find the approximate solution x(k) from the following technique:
x(k)=(D−wL)−1[(1−w)D+wU]x(k−1)+(D−wL)−1bw,k=1,2,3,... |
If t=0 and w=r=1, Algorithm 2.1 reduces to the following scheme.
Algorithm 2.4. For a given initial vector x(0), find the approximate solution x(k) from the following technique:
x(k)=(D−L)−1Ux(k−1)+(D−L)−1b,k=1,2,3,... |
Algorithm 2.4 is the Gauss-Seidel method [2,3].
If t=r=0 and w=1, Algorithm 2.1 reduces to the following scheme.
Algorithm 2.5. For a given initial vector x(0), find the approximate solution x(k) from the following technique:
x(k)=D−1(L+U)x(k−1)+D−1b,k=1,2,3,... |
Algorithm 2.5 is the well-known Jacobi method [2,3].
In this section, we consider the convergence analysis of the newly developed iterative scheme, Algorithm 2.1:
x(k)=(D−rL−tU)−1[(1−w)D+(w−r)L+(w−t)U]x(k−1)+(D−rL−tU)−1bw. |
Lemma 3.1. [2] If the spectral radius satisfies
ρ[(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)]<1, |
then
[I−(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)]−1 |
exists and
[I−(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)]−1=I+[(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)]+[(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)]2+⋯=∞∑j=0[(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)]j. | (3.1) |
Theorem 3.2. For any given x(0)∈Rn, the sequence {x(k)}∞k=0 defined by
x(k)=[(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)]x(k−1)+(D−rL−tU)−1bw, |
for each k≥1, converges to the unique solution
x=[(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)]x+(D−rL−tU)−1bw, |
if and only if
ρ[(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)]<1. |
Proof. Suppose that the spectral radius of the iteration matrix is less than 1, and consider the iterative scheme suggested in Algorithm 2.1:
x(k)=[(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)]x(k−1)+(D−rL−tU)−1bw, |
which can be rewritten as:
x(k)=[(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)][((D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U))x(k−2)+(D−rL−tU)−1bw]+(D−rL−tU)−1bw⋮=[(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)]kx(0)+[[(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)]k−1+⋯+[(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)]+I](D−rL−tU)−1bw. | (3.2) |
Since
ρ([(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)])<1, |
the powers of the iteration matrix converge to zero, so
limk→∞[(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)]kx(0)=0, |
and Lemma 3.1 implies that
limk→∞x(k)=0+limk→∞[k−1∑j=0[(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)]j](D−rL−tU)−1bw=[I−[(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)]]−1(D−rL−tU)−1bw. |
As a result, the sequence x(k) converges to the vector
x=[I−(D−rL−tU)−1((1−w)D+(w−r)L+(w−t)U)]−1(D−rL−tU)−1bw. |
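The convergence condition of Theorem 3.2 can be checked numerically. The following illustrative sketch (not from the paper) estimates the spectral radius of the iteration matrix T=(D−rL−tU)⁻¹[(1−w)D+(w−r)L+(w−t)U] by the geometric-mean growth rate of repeated applications of T, applying T through a linear solve rather than forming it explicitly; for a strictly diagonally dominant A and parameters near 1, the estimate comes out well below 1.

```python
# Estimate rho(T) for the iteration matrix of Algorithm 2.1 (illustrative).
import math

def gauss_solve(M, rhs):
    n = len(M)
    M = [row[:] + [v] for row, v in zip(M, rhs)]
    for col in range(n):
        p = max(range(col, n), key=lambda q: abs(M[q][col]))
        M[col], M[p] = M[p], M[col]
        for q in range(col + 1, n):
            f = M[q][col] / M[col][col]
            for c in range(col, n + 1):
                M[q][c] -= f * M[col][c]
    y = [0.0] * n
    for i in range(n - 1, -1, -1):
        y[i] = (M[i][n] - sum(M[i][j] * y[j] for j in range(i + 1, n))) / M[i][i]
    return y

A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
w, r, t = 1.02, 1.05, 0.88
n = len(A)
D = [[A[i][j] if i == j else 0.0 for j in range(n)] for i in range(n)]
L = [[-A[i][j] if i > j else 0.0 for j in range(n)] for i in range(n)]
U = [[-A[i][j] if i < j else 0.0 for j in range(n)] for i in range(n)]
lhs = [[D[i][j] - r * L[i][j] - t * U[i][j] for j in range(n)] for i in range(n)]

def apply_T(v):
    # T v, computed by solving (D - rL - tU) y = [(1-w)D + (w-r)L + (w-t)U] v
    rhs = [sum(((1 - w) * D[i][j] + (w - r) * L[i][j] + (w - t) * U[i][j]) * v[j]
               for j in range(n)) for i in range(n)]
    return gauss_solve(lhs, rhs)

v = [1.0] * n
log_growth, steps = 0.0, 200
for _ in range(steps):
    v = apply_T(v)
    nv = max(map(abs, v))
    log_growth += math.log(nv)
    v = [vi / nv for vi in v]    # renormalize to avoid underflow

rho_est = math.exp(log_growth / steps)
print(rho_est)                   # below 1, so the iteration converges
```

This is a heuristic estimate (it assumes the starting vector is not deficient in the dominant eigendirection), but it agrees with the observed fast convergence for these parameters.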
We can also view the convergence criterion of the proposed method as an application of the Banach fixed point theorem [33]. The system of linear equations can be written componentwise, with the parameters displayed explicitly, as:
{x1=(1−wa11)x1−(w−2t)a12x2−…−(w−2t)a1nxn+wb1x2=−(w−2r)a21x1+(1−wa22)x2−…−(w−2t)a2nxn+wb2⋮xn=−(w−2r)an1x1−(w−2r)an2x2−…+(1−wann)xn+wbn. | (3.3) |
This system is equivalent to
x=cx+d | (3.4) |
with d=wb and cij = { (1−waij) if i=j; −(w−2t)aij if i<j; −(w−2r)aij if i>j.
The solution can be obtained by
x(k+1)=cx(k)+d. | (3.5) |
The iteration method is defined by
xj(k+1)=1cjj(dj−n∑k=1,k≠jcjkxk(k)). | (3.6) |
Here we assume that cjj≠0 for j=1,…,n. This iteration is applied to the jth equation of the system. It is not difficult to verify that (3.6) can be written in the form of
c=(D−rL−tU)−1[(1−w)D+(w−r)L+(w−t)U], | (3.7) |
and
d=(D−rL−tU)−1wb. | (3.8) |
Here D = diag(ajj) is the diagonal matrix whose non-zero elements are those of the principal diagonal of A. The diagonal dominance condition applied to c is sufficient for the convergence of Algorithm 2.1, and it can be expressed directly in terms of the elements of A. The resulting row-sum criterion for convergence is
n∑k=1,k≠j|ajkajj|<1, | (3.9) |
or
n∑k=1,k≠j|ajk|<|ajj|. | (3.10) |
This shows that convergence is guaranteed if the elements on the principal diagonal of A are sufficiently large.
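The row-sum criterion (3.10) is straightforward to check in code; the following is an illustrative helper with one matrix that satisfies it and one that does not.

```python
# Row-sum criterion (3.10): off-diagonal row sums strictly below the diagonal.

def strictly_diagonally_dominant(A):
    n = len(A)
    return all(sum(abs(A[j][k]) for k in range(n) if k != j) < abs(A[j][j])
               for j in range(n))

A_good = [[4, -1, 0], [-1, 4, -1], [0, -1, 4]]   # satisfies (3.10)
A_bad  = [[1,  2, 0], [ 2, 1,  2], [0,  2, 1]]   # violates (3.10)
print(strictly_diagonally_dominant(A_good), strictly_diagonally_dominant(A_bad))
```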
Note that all components of a new approximation are introduced simultaneously at the end of an iteration cycle.
In this section, we provide a few numerical applications to illustrate the efficiency of the newly developed three-parameter iterative scheme, Algorithm 2.1, on some systems of linear equations with 0<t<w<r<2, whose coefficient matrices satisfy
max1≤i≤n−1ui=α and max1≤i≤n−1li=β,α+β≤1, |
where
li=i−1∑j=1∣βij∣, for i=2,3,…,n,
and
ui=n∑j=i+1∣αij∣, for i=1,2,…,n−1.
In this part, we compare our developed scheme with the previous techniques, namely the AOR, SOR, Jacobi, and Gauss-Seidel methods. All computations are carried out in MATLAB. We use ε=10−15, and the following stopping criterion is used in the computer programs:
||x(k)−x(k−1)||||x(k)||≤ε. |
This stopping criterion is based on the relative error; the infinite sequence generated by the computer code is truncated once the criterion is satisfied. We use the following examples to compare the newly developed method, Algorithm 2.1 (Alg 2.1), with the iterative methods AOR (Alg 2.2), SOR (Alg 2.3), Gauss-Seidel (Alg 2.4), and Jacobi (Alg 2.5), in order to analyze the new scheme's feasibility and effectiveness.
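Written as a helper, the stopping rule looks as follows (illustrative; the paper's runs use MATLAB, and the norm is not specified there, so the infinity norm is assumed here):

```python
# Relative-error stopping rule ||x(k) - x(k-1)|| / ||x(k)|| <= eps (infinity norm).

def relative_step(x_new, x_old):
    num = max(abs(u - v) for u, v in zip(x_new, x_old))
    den = max(abs(u) for u in x_new)
    return num / den

eps = 1e-15
print(relative_step([1.0, 2.0], [1.0, 2.0]) <= eps)   # identical iterates: stop
```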
For the numerical and graphical comparison of the methods, we select some examples from the literature.
Example 4.1. [3] We consider a problem in which the loop-current approach is combined with Ohm's law and Kirchhoff's voltage law. Each loop in the network is assumed to carry a circulating loop current; thus, the loop current I1 circulates around the closed loop a,b,c,d in the network shown in Figure 1. As a result, the current I1−I2 passes through the link joining b and c.
From the network shown in Figure 1, letting R1=R4=1Ω, R2=2Ω, R3=4Ω, and V=5 volts, we obtain the following four-variable system of linear equations:
4I1−2I2=5,−2I1+6I2−2I3=0,−2I2+6I3−2I4=0,−2I3+8I4=0. |
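As a quick sanity check (illustrative; not part of the paper's MATLAB experiments), the circuit system above is strictly diagonally dominant, so even plain Gauss-Seidel sweeps solve it to machine precision:

```python
# Gauss-Seidel on the loop-current system of Example 4.1.

A = [[4.0, -2.0, 0.0, 0.0],
     [-2.0, 6.0, -2.0, 0.0],
     [0.0, -2.0, 6.0, -2.0],
     [0.0, 0.0, -2.0, 8.0]]
b = [5.0, 0.0, 0.0, 0.0]

I = [0.0] * 4
for _ in range(100):                   # Gauss-Seidel sweeps
    for i in range(4):
        I[i] = (b[i] - sum(A[i][j] * I[j] for j in range(4) if j != i)) / A[i][i]

residual = max(abs(sum(A[i][j] * I[j] for j in range(4)) - b[i]) for i in range(4))
print(I, residual)
```

The loop currents come out positive and decreasing (I1 > I2 > I3 > I4), as expected for a single source driving the network.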
Table 1 displays the numerical results for Example 4.1, which indicate that Alg 2.1 is more efficient than the other methods.
Methods | Parameters | Iterations | Relative error |
Alg 2.1 | w=1.02,r=1.05,t=0.88 | 14 | 8.9966e−18 |
Alg 2.2 | w=1.02,r=1.05 | 28 | 2.8789e−16 |
Alg 2.3 | w=1.02 | 30 | 2.8789e−16 |
Alg 2.4 | ... | 32 | 4.3184e−16 |
Alg 2.5 | ... | 61 | 7.1973e−16 |
In Figure 2, the residual decay of the different methods shows that the new method converges faster than the other methods. Figure 3 compares the iteration counts of the different algorithms, showing that the new iterative method described in Alg 2.1 is more efficient than the methods described in Alg 2.2–2.5.
Example 4.2. [34] Consider the following system of the form
x1+0.250x2=0.75,0.250x1+x2+0.250x3=1.50,0.250x2+x3+0.250x4=1.50,0.250x3+x4+0.250x5=1.50,0.250x4+x5=1.25. |
Table 2 displays the numerical results which indicate that Alg 2.1 is more efficient than the other techniques.
Methods | Parameters | Iterations | Relative error |
Alg 2.1 | w=1.01,r=1.06,t=0.86 | 13 | 5.8249e−16 |
Alg 2.2 | w=1.01,r=1.06 | 19 | 4.8541e−16 |
Alg 2.3 | w=1.01 | 23 | 2.4271e−16 |
Alg 2.4 | ... | 24 | 2.4271e−16 |
Alg 2.5 | ... | 44 | 7.7666e−16 |
The residual decay of the different techniques can be seen in Figure 4, which illustrates that the new method converges more rapidly than the other methods. Figure 5 compares the iteration counts of the different algorithms, showing that the new iterative method described in Alg 2.1 is more efficient than the methods described in Alg 2.2–2.5.
Example 4.3. [2] Consider the following system of linear equations of the form
4x1−x2−x3=1,−x1+4x2−x4=1,−x1+4x3−x4=1,−x2−x3+4x4=1. |
Table 3 displays the numerical results for Example 4.3, which indicate that Alg 2.1 is more efficient than the other methods.
Methods | Parameters | Iterations | Relative error |
Alg 2.1 | w=1.05;r=1.07;t=0.9 | 15 | 5.5511e−16 |
Alg 2.2 | w=1.05;r=1.07 | 19 | 9.9920e−16 |
Alg 2.3 | w=1.05 | 22 | 5.5511e−16 |
Alg 2.4 | .... | 28 | 4.4409e−16 |
Alg 2.5 | .... | 51 | 8.8818e−16 |
In Figure 6, the residual decay of the different methods shows that the new method converges faster than the other methods. Figure 7 compares the iteration counts of the different algorithms, showing that the new iterative method described in Alg 2.1 is more efficient than the methods described in Alg 2.2–2.5.
Example 4.4. [35] Let the matrix A be given by
ai,j = { 8, if j=i; −1, if j=i+1 (i=1,2,…,n−1) or j=i−1 (i=2,3,…,n); 0, otherwise.
Let b=(6,5,5,…,5,6)T; we take n=100.
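The test matrix of Example 4.4 is easy to assemble; the following illustrative snippet builds A and b for n=100 and confirms that every row satisfies the row-sum criterion:

```python
# Construct the tridiagonal test matrix of Example 4.4: 8 on the diagonal,
# -1 on the two adjacent diagonals, with b = (6, 5, ..., 5, 6)^T and n = 100.

n = 100
A = [[8.0 if j == i else -1.0 if abs(i - j) == 1 else 0.0
      for j in range(n)] for i in range(n)]
b = [6.0 if i in (0, n - 1) else 5.0 for i in range(n)]

# Every row is strictly diagonally dominant (off-diagonal row sum at most 2 < 8).
dominant = all(sum(abs(A[i][j]) for j in range(n) if j != i) < abs(A[i][i])
               for i in range(n))
print(dominant)
```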
Table 4 displays the numerical results for Example 4.4, which indicate that Alg 2.1 is more efficient than the other techniques.
Methods | Parameters | Iterations | Relative error |
Alg 2.1 | w=1.02;r=0.97;t=0.50 | 15 | 3.8978e−16 |
Alg 2.2 | w=1.02;r=0.97 | 19 | 7.7956e−16 |
Alg 2.3 | w=1.02 | 19 | 3.8978e−16 |
Alg 2.4 | .... | 20 | 5.1970e−16 |
Alg 2.5 | .... | 27 | 6.4963e−16 |
The residual decay of the different methods can be seen in Figure 8, which illustrates that the new method converges more rapidly than the other methods. Figure 9 compares the iteration counts of the different algorithms, showing that the new iterative method described in Alg 2.1 is more efficient than the methods described in Alg 2.2–2.5.
Example 4.5. [2,36] Consider the system (1.1), whose coefficient matrix A is given by
aij = { 2i, if i=j, i=1,2,…,1000; −1, if j=i+1 (i=1,2,…,999) or j=i−1 (i=2,3,…,1000); 0, otherwise,
and bi=1.5i−6 for each i=1,2,…,1000.
Table 5 shows the numerical results for Example 4.5, which indicate that Alg 2.1 is much more efficient than the other techniques.
Methods | Parameters | Iterations | Relative error |
Alg 2.1 | w=1.021;r=1.079;t=0.98 | 13 | 3.6092e−17 |
Alg 2.2 | w=1.021;r=1.079 | 18 | 4.3310e−16 |
Alg 2.3 | w=1.021 | 20 | 2.8873e−16
Alg 2.4 | .... | 22 | 5.7747e−16 |
Alg 2.5 | .... | 41 | 5.7747e−16 |
The residual decay of the different techniques can be seen in Figure 10, which illustrates that the new method converges more rapidly than the other methods. Figure 11 compares the iteration counts of the different algorithms, showing that the new iterative method described in Alg 2.1 is more efficient than the methods described in Alg 2.2–2.5.
In Table 6, IT stands for the number of iterations. The tabular comparison shows that our new iterative method works much more effectively.
Parameters | Example 4.1 | Example 4.2 | Example 4.3 | Example 4.4 | Example 4.5 | ||
w | r | t | IT | IT | IT | IT | IT |
0.2 | 0.7 | 0.9 | 186 | 160 | 181 | 160 | 169 |
0.4 | 0.5 | 0.8 | 102 | 82 | 96 | 77 | 86 |
0.6 | 0.5 | 0.8 | 63 | 50 | 58 | 46 | 52 |
0.3 | 0.8 | 0.5 | 138 | 111 | 132 | 108 | 120 |
0.2 | 0.8 | 0.3 | 229 | 186 | 217 | 174 | 195 |
0.3 | 0.8 | 1.2 | 96 | 97 | 97 | 97 | 95 |
0.3 | 0.8 | 0.2 | 155 | 124 | 145 | 114 | 129 |
0.5 | 0.8 | 0.3 | 85 | 67 | 79 | 61 | 70 |
0.5 | 0.8 | 0.5 | 77 | 61 | 73 | 59 | 66 |
0.8 | 0.4 | 0.4 | 57 | 40 | 50 | 33 | 42 |
0.8 | 0.5 | 0.7 | 46 | 34 | 41 | 30 | 36 |
0.9 | 0.5 | 0.8 | 36 | 27 | 32 | 23 | 28 |
0.9 | 1.04 | 0.5 | 24 | 21 | 22 | 21 | 21 |
1.02 | 1.08 | 0.8 | 15 | 14 | 14 | 12 | 14 |
1.03 | 1.09 | 0.9 | 14 | 13 | 14 | 12 | 13 |
In this article, a new generalized iterative scheme has been suggested for solving systems of linear equations, and its convergence criteria have been studied. The scheme is not only a generalization of existing methods but also gives better results than the existing schemes, and it is suitable for sparse matrices. Numerical results show that the scheme is more effective than the conventional schemes. We also propose that the given scheme can be extended to absolute value problems of the type Ax+B|x|=b.
All authors declare no conflicts of interest in this paper.
The authors would like to thank the editor and the anonymous reviewers for their constructive comments and suggestions, which improved the quality of this paper.
| Methods | Parameters | Iterations | Relative error |
| --- | --- | --- | --- |
| Alg 2.1 | w=1.02, r=1.05, t=0.88 | 14 | 8.9966e−18 |
| Alg 2.2 | w=1.02, r=1.05 | 28 | 2.8789e−16 |
| Alg 2.3 | w=1.02 | 30 | 2.8789e−16 |
| Alg 2.4 | ... | 32 | 4.3184e−16 |
| Alg 2.5 | ... | 61 | 7.1973e−16 |

| Methods | Parameters | Iterations | Relative error |
| --- | --- | --- | --- |
| Alg 2.1 | w=1.01, r=1.06, t=0.86 | 13 | 5.8249e−16 |
| Alg 2.2 | w=1.01, r=1.06 | 19 | 4.8541e−16 |
| Alg 2.3 | w=1.01 | 23 | 2.4271e−16 |
| Alg 2.4 | ... | 24 | 2.4271e−16 |
| Alg 2.5 | ... | 44 | 7.7666e−16 |

| Methods | Parameters | Iterations | Relative error |
| --- | --- | --- | --- |
| Alg 2.1 | w=1.05, r=1.07, t=0.9 | 15 | 5.5511e−16 |
| Alg 2.2 | w=1.05, r=1.07 | 19 | 9.9920e−16 |
| Alg 2.3 | w=1.05 | 22 | 5.5511e−16 |
| Alg 2.4 | ... | 28 | 4.4409e−16 |
| Alg 2.5 | ... | 51 | 8.8818e−16 |

| Methods | Parameters | Iterations | Relative error |
| --- | --- | --- | --- |
| Alg 2.1 | w=1.02, r=0.97, t=0.50 | 15 | 3.8978e−16 |
| Alg 2.2 | w=1.02, r=0.97 | 19 | 7.7956e−16 |
| Alg 2.3 | w=1.02 | 19 | 3.8978e−16 |
| Alg 2.4 | ... | 20 | 5.1970e−16 |
| Alg 2.5 | ... | 27 | 6.4963e−16 |

| Methods | Parameters | Iterations | Relative error |
| --- | --- | --- | --- |
| Alg 2.1 | w=1.021, r=1.079, t=0.98 | 13 | 3.6092e−17 |
| Alg 2.2 | w=1.021, r=1.079 | 18 | 4.3310e−16 |
| Alg 2.3 | w=1.0219 | 20 | 2.8873e−16 |
| Alg 2.4 | ... | 22 | 5.7747e−16 |
| Alg 2.5 | ... | 41 | 5.7747e−16 |

| w | r | t | IT (Example 4.1) | IT (Example 4.2) | IT (Example 4.3) | IT (Example 4.4) | IT (Example 4.5) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.2 | 0.7 | 0.9 | 186 | 160 | 181 | 160 | 169 |
| 0.4 | 0.5 | 0.8 | 102 | 82 | 96 | 77 | 86 |
| 0.6 | 0.5 | 0.8 | 63 | 50 | 58 | 46 | 52 |
| 0.3 | 0.8 | 0.5 | 138 | 111 | 132 | 108 | 120 |
| 0.2 | 0.8 | 0.3 | 229 | 186 | 217 | 174 | 195 |
| 0.3 | 0.8 | 1.2 | 96 | 97 | 97 | 97 | 95 |
| 0.3 | 0.8 | 0.2 | 155 | 124 | 145 | 114 | 129 |
| 0.5 | 0.8 | 0.3 | 85 | 67 | 79 | 61 | 70 |
| 0.5 | 0.8 | 0.5 | 77 | 61 | 73 | 59 | 66 |
| 0.8 | 0.4 | 0.4 | 57 | 40 | 50 | 33 | 42 |
| 0.8 | 0.5 | 0.7 | 46 | 34 | 41 | 30 | 36 |
| 0.9 | 0.5 | 0.8 | 36 | 27 | 32 | 23 | 28 |
| 0.9 | 1.04 | 0.5 | 24 | 21 | 22 | 21 | 21 |
| 1.02 | 1.08 | 0.8 | 15 | 14 | 14 | 12 | 14 |
| 1.03 | 1.09 | 0.9 | 14 | 13 | 14 | 12 | 13 |