We introduce a sequence of third and fourth-order iterative schemes for finding the roots of nonlinear equations by using the decomposition technique and Simpson's one-third rule. We also discuss the convergence analysis of our suggested iterative schemes. With the help of different numerical examples, we demonstrate the validity, efficiency and implementation of our proposed schemes.
Citation: Awais Gul Khan, Farah Ameen, Muhammad Uzair Awan, Kamsing Nonlaopon. Some new numerical schemes for finding the solutions to nonlinear equations[J]. AIMS Mathematics, 2022, 7(10): 18616-18631. doi: 10.3934/math.20221024
One of the oldest and most basic problems in mathematics is that of solving nonlinear equations. To solve these equations, we can use iterative methods such as Newton's method and its variants. Newton's method is one of the most powerful and well-known iterative methods and is known to converge quadratically.
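For concreteness, the following minimal Python sketch (our own illustration, not code from the paper, whose computations were done in Maple 17) shows the classical Newton iteration that the later tables use as the baseline method NM; the names `Lam` and `dLam` stand for Λ and Λ′.

```python
# Hypothetical sketch of Newton's method for Lambda(p) = 0; quadratically convergent
# for a simple root when p0 is close enough and Lambda'(p) does not vanish.
def newton(Lam, dLam, p0, eps=1e-5, max_iter=100):
    p = p0
    for n in range(max_iter):
        p_new = p - Lam(p) / dLam(p)
        if abs(p_new - p) < eps:        # stopping criterion |p_{n+1} - p_n| < eps
            return p_new, n + 1         # approximate root and iteration count
        p = p_new
    return p, max_iter
```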
In the Adomian decomposition method, the solution is expressed as an infinite series that converges to the exact solution. Chun [5] and Abbasbandy [4] constructed and investigated different higher-order iterative methods by applying the decomposition technique of Adomian [3]. Darvishi and Barati [6] also applied the Adomian decomposition technique to develop cubically convergent Newton-type methods for solving systems of nonlinear equations. However, implementing the Adomian decomposition technique requires the evaluation of higher-order derivatives, which is its major drawback.
To overcome this drawback, several new techniques have been suggested and analyzed by many researchers. Daftardar-Gejji and Jafari [7] used a modification of the Adomian decomposition method [3] and proposed a simple technique that does not require evaluating derivatives of the Adomian polynomials, which is its main advantage over the Adomian decomposition method. Saqib and Iqbal [13] and Ali et al. [1,2] used this decomposition technique to develop families of iterative methods with better efficiency and convergence order for solving nonlinear equations. Heydari et al. [15,16] proposed several iterative methods, including derivative-free methods, and discussed their convergence. For multiple roots of nonlinear equations and iterative schemes based on homotopy perturbation techniques, see [17,18].
Weerakoon and Fernando [14] improved the convergence of Newton's method by using a quadrature rule. Later on, Ozban [11] investigated some new variants of Newton's method based on the harmonic mean and the mid-point rule. Noor [10] developed a fifth-order convergent iterative method using a Gaussian quadrature formula and compared its efficiency with existing methods in the literature.
In this paper, we consider the well-known fixed-point formulation in which the nonlinear equation $\Lambda(\wp)=0$ is rewritten as $\wp=\Upsilon(\wp)$. We introduce some new iterative methods and carry out their convergence analysis. Some numerical examples are presented to compare the newly constructed methods with known third- and fourth-order convergent iterative algorithms.
This section presents some new multi-step third- and fourth-order convergent iterative methods based on Simpson's one-third rule and the decomposition technique [7].
Consider the nonlinear equation
$$
\Lambda(\wp)=0, \tag{2.1}
$$
which is equivalent to
$$
\wp=\Upsilon(\wp). \tag{2.2}
$$
Assume that $\alpha$ is a simple root of the nonlinear Eq (2.1) and $\gamma$ is an initial guess sufficiently close to it. Using the fundamental theorem of calculus and Simpson's one-third quadrature formula, we have
$$
\begin{aligned}
\int_{\gamma}^{\wp}\Upsilon'(\wp)\,d\wp &= \frac{\wp-\gamma}{6}\left\{\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp+\gamma}{2}\right)+\Upsilon'(\wp)\right\},\\
\Upsilon(\wp) &= \Upsilon(\gamma)+\frac{\wp-\gamma}{6}\left\{\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp+\gamma}{2}\right)+\Upsilon'(\wp)\right\}.
\end{aligned} \tag{2.3}
$$
Now, from (2.2) and (2.3), we have
$$
\wp=\Upsilon(\gamma)+\frac{1}{6}(\wp-\gamma)\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp+\gamma}{2}\right)+\Upsilon'(\wp)\right). \tag{2.4}
$$
Now, using the technique of He [8], the nonlinear Eq (2.1) can be written as an equivalent coupled system of equations
$$
\wp=\Upsilon(\gamma)+\frac{1}{6}(\wp-\gamma)\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp+\gamma}{2}\right)+\Upsilon'(\wp)\right)+H(\wp),
$$
and
$$
\begin{aligned}
H(\wp)&=\Upsilon(\wp)-\Upsilon(\gamma)-\frac{1}{6}(\wp-\gamma)\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp+\gamma}{2}\right)+\Upsilon'(\wp)\right)\\
&=\wp\left[1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp+\gamma}{2}\right)+\Upsilon'(\wp)\right)\right]+\frac{\gamma}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp+\gamma}{2}\right)+\Upsilon'(\wp)\right)-\Upsilon(\gamma),
\end{aligned} \tag{2.5}
$$
from which, it follows that
$$
\wp=\frac{H(\wp)}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp+\gamma}{2}\right)+\Upsilon'(\wp)\right)}+\frac{\Upsilon(\gamma)-\frac{\gamma}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp+\gamma}{2}\right)+\Upsilon'(\wp)\right)}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp+\gamma}{2}\right)+\Upsilon'(\wp)\right)}=c+M(\wp), \tag{2.6}
$$
where
$$
c=\gamma, \tag{2.7}
$$
and
$$
M(\wp)=\frac{H(\wp)}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp+\gamma}{2}\right)+\Upsilon'(\wp)\right)}+\frac{\Upsilon(\gamma)-\gamma}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp+\gamma}{2}\right)+\Upsilon'(\wp)\right)}. \tag{2.8}
$$
It is clear that $M(\wp)$ is a nonlinear operator. We now establish a sequence of higher-order iterative methods by applying the decomposition technique of Daftardar-Gejji and Jafari [7]. In this technique, the solution of (2.1) is represented as an infinite series
$$
\wp=\sum_{i=0}^{\infty}\wp_i. \tag{2.9}
$$
Here, the operator $M(\wp)$ can be decomposed as:
$$
M(\wp)=M(\wp_0)+\sum_{i=1}^{\infty}\left\{M\!\left(\sum_{j=0}^{i}\wp_j\right)-M\!\left(\sum_{j=0}^{i-1}\wp_j\right)\right\}. \tag{2.10}
$$
Thus, from (2.6), (2.9) and (2.10), we have
$$
\sum_{i=0}^{\infty}\wp_i=c+M(\wp_0)+\sum_{i=1}^{\infty}\left\{M\!\left(\sum_{j=0}^{i}\wp_j\right)-M\!\left(\sum_{j=0}^{i-1}\wp_j\right)\right\}, \tag{2.11}
$$
which generates the following iterative scheme:
$$
\begin{aligned}
\wp_0&=c,\\
\wp_1&=M(\wp_0),\\
\wp_2&=M(\wp_0+\wp_1)-M(\wp_0),\\
&\;\;\vdots\\
\wp_{n+1}&=M\!\left(\sum_{j=0}^{n}\wp_j\right)-M\!\left(\sum_{j=0}^{n-1}\wp_j\right),\qquad n=1,2,\ldots.
\end{aligned} \tag{2.12}
$$
Consequently, it follows that
$$
\wp_1+\wp_2+\cdots+\wp_{n+1}=M(\wp_0+\wp_1+\wp_2+\cdots+\wp_n),
$$
and
$$
\wp=c+\sum_{i=1}^{\infty}\wp_i. \tag{2.13}
$$
It is noted that ℘ is approximated by
$$
U_n=\wp_0+\wp_1+\wp_2+\cdots+\wp_n,
$$
and
$$
\lim_{n\to\infty}U_n=\wp.
$$
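The telescoping structure of (2.12) is easy to see computationally: the partial sums satisfy $U_{k+1}=c+M(U_k)$. The short Python sketch below (our own illustration; `M` and `c` are generic placeholders) makes this explicit.

```python
# Partial sums U_k of the series (2.9) generated by scheme (2.12).
# Because the sum telescopes, U_{k+1} = U_k + [M(U_k) - M(U_{k-1})] = c + M(U_k).
def decomposition_partial_sums(M, c, n_terms):
    U, sums = c, [c]            # U_0 = p_0 = c
    for _ in range(n_terms):
        U = c + M(U)            # telescoped form of (2.12)
        sums.append(U)
    return sums                 # [U_0, U_1, ..., U_{n_terms}]
```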
For n=0,
$$
\wp\approx U_0=\wp_0=c=\gamma. \tag{2.14}
$$
From (2.5), it is easily computed that
$$
H(\wp_0)=0.
$$
Using (2.8), we get
$$
\begin{aligned}
\wp_1=M(\wp_0)&=\frac{H(\wp_0)}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\gamma}{2}\right)+\Upsilon'(\wp_0)\right)}+\frac{\Upsilon(\gamma)-\gamma}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\gamma}{2}\right)+\Upsilon'(\wp_0)\right)}\\
&=\frac{\Upsilon(\gamma)-\gamma}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\gamma}{2}\right)+\Upsilon'(\wp_0)\right)}.
\end{aligned}
$$
For $n=1$,
$$
\wp\approx U_1=\wp_0+\wp_1=\wp_0+M(\wp_0)=\gamma+\frac{\Upsilon(\gamma)-\gamma}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\gamma}{2}\right)+\Upsilon'(\wp_0)\right)}.
$$
Using (2.14), that is $\wp_0=\gamma$, the Simpson bracket reduces to $\Upsilon'(\gamma)$ and we have
$$
\wp=\frac{\Upsilon(\gamma)-\gamma\Upsilon'(\gamma)}{1-\Upsilon'(\gamma)}. \tag{2.15}
$$
This fixed-point formulation is used to suggest the following algorithm for solving the nonlinear equation $\Lambda(\wp)=0$.
Algorithm 2.1. For a given initial guess $\wp_0$, the approximate solution $\wp_{n+1}$ is computed by the following iterative scheme:
$$
\wp_{n+1}=\frac{\Upsilon(\wp_n)-\wp_n\Upsilon'(\wp_n)}{1-\Upsilon'(\wp_n)}. \tag{2.16}
$$
Kang et al. [9] developed this algorithm and proved that Algorithm 2.1 has quadratic convergence.
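A minimal Python sketch of Algorithm 2.1 follows (our own illustration, not the authors' Maple code), assuming the user supplies $\Upsilon$ and $\Upsilon'$ as `Y` and `dY`.

```python
# Algorithm 2.1: p_{n+1} = (Y(p_n) - p_n*Y'(p_n)) / (1 - Y'(p_n)), quadratically convergent.
def algorithm_2_1(Y, dY, p0, eps=1e-5, max_iter=100):
    p = p0
    for n in range(max_iter):
        p_new = (Y(p) - p * dY(p)) / (1.0 - dY(p))
        if abs(p_new - p) < eps:        # stopping criterion |p_{n+1} - p_n| < eps
            return p_new, n + 1
        p = p_new
    return p, max_iter
```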
From (2.5) and (2.8), we have
$$
H(\wp_0+\wp_1)=\Upsilon(\wp_0+\wp_1)-\Upsilon(\gamma)-\frac{1}{6}(\wp_0+\wp_1-\gamma)\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1)\right).
$$
Thus
$$
\begin{aligned}
\wp_1+\wp_2&=M(\wp_0+\wp_1)\\
&=\frac{H(\wp_0+\wp_1)}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1)\right)}+\frac{\Upsilon(\gamma)-\gamma}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1)\right)}\\
&=\frac{1}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1)\right)}\\
&\quad\times\left(\Upsilon(\wp_0+\wp_1)-\Upsilon(\gamma)-\frac{1}{6}(\wp_0+\wp_1-\gamma)\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1)\right)\right)\\
&\quad+\frac{\Upsilon(\gamma)-\gamma}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1)\right)}\\
&=\frac{1}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1)\right)}\\
&\quad\times\left(\Upsilon(\wp_0+\wp_1)-\frac{1}{6}(\wp_0+\wp_1-\gamma)\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1)\right)-\gamma\right).
\end{aligned}
$$
For $n=2$,
$$
\begin{aligned}
\wp\approx U_2&=\wp_0+\wp_1+\wp_2=c+M(\wp_0+\wp_1)\\
&=\gamma+\frac{1}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1)\right)}\\
&\quad\times\left(\Upsilon(\wp_0+\wp_1)-\frac{1}{6}(\wp_0+\wp_1-\gamma)\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1)\right)-\gamma\right)\\
&=\frac{\Upsilon(\wp_0+\wp_1)-\frac{1}{6}(\wp_0+\wp_1)\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1)\right)}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1)\right)}.
\end{aligned}
$$
Take
$$
\wp_0+\wp_1=v=\frac{\Upsilon(\gamma)-\gamma\Upsilon'(\gamma)}{1-\Upsilon'(\gamma)},
$$
so that
$$
\wp\approx\frac{\Upsilon(v)-\frac{1}{6}v\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{v+\gamma}{2}\right)+\Upsilon'(v)\right)}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{v+\gamma}{2}\right)+\Upsilon'(v)\right)}.
$$
This relation yields the following two-step method for solving the nonlinear equation $\Lambda(\wp)=0$.
Algorithm 2.2. For a given initial guess $\wp_0$, the approximate solution $\wp_{n+1}$ can be computed by the following iterative scheme:
$$
v_n=\frac{\Upsilon(\wp_n)-\wp_n\Upsilon'(\wp_n)}{1-\Upsilon'(\wp_n)}, \tag{2.17}
$$
$$
\wp_{n+1}=\frac{\Upsilon(v_n)-\frac{1}{6}v_n\left(\Upsilon'(\wp_n)+4\Upsilon'\!\left(\frac{v_n+\wp_n}{2}\right)+\Upsilon'(v_n)\right)}{1-\frac{1}{6}\left(\Upsilon'(\wp_n)+4\Upsilon'\!\left(\frac{v_n+\wp_n}{2}\right)+\Upsilon'(v_n)\right)}, \tag{2.18}
$$
where n=0,1,2,….
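The two-step scheme can be coded directly from (2.17) and (2.18); the Python sketch below is our own illustration (the paper's experiments were run in Maple), with `Y` and `dY` again standing for $\Upsilon$ and $\Upsilon'$.

```python
# Algorithm 2.2: Newton-type predictor (2.17) followed by a Simpson-1/3 corrector (2.18).
def algorithm_2_2(Y, dY, p0, eps=1e-5, max_iter=100):
    p = p0
    for n in range(max_iter):
        v = (Y(p) - p * dY(p)) / (1.0 - dY(p))                 # predictor (2.17)
        S = (dY(p) + 4.0 * dY((v + p) / 2.0) + dY(v)) / 6.0    # Simpson one-third average of Y'
        p_new = (Y(v) - v * S) / (1.0 - S)                     # corrector (2.18)
        if abs(p_new - p) < eps:
            return p_new, n + 1
        p = p_new
    return p, max_iter
```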
It is noted that
$$
\wp_0+\wp_1+\wp_2=w=\frac{\Upsilon(\wp_0+\wp_1)-\frac{1}{6}(\wp_0+\wp_1)\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1)\right)}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1)\right)}. \tag{2.19}
$$
Using (2.5) and (2.8), we can write
$$
H(\wp_0+\wp_1+\wp_2)=\Upsilon(\wp_0+\wp_1+\wp_2)-\Upsilon(\gamma)-\frac{1}{6}(\wp_0+\wp_1+\wp_2-\gamma)\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\wp_2+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1+\wp_2)\right),
$$
and
$$
\begin{aligned}
\wp_1+\wp_2+\wp_3&=M(\wp_0+\wp_1+\wp_2)\\
&=\frac{H(\wp_0+\wp_1+\wp_2)}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\wp_2+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1+\wp_2)\right)}+\frac{\Upsilon(\gamma)-\gamma}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\wp_2+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1+\wp_2)\right)}\\
&=\frac{1}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\wp_2+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1+\wp_2)\right)}\\
&\quad\times\left[\Upsilon(\wp_0+\wp_1+\wp_2)-\frac{1}{6}(\wp_0+\wp_1+\wp_2-\gamma)\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\wp_2+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1+\wp_2)\right)-\gamma\right].
\end{aligned}
$$
For $n=3$,
$$
\begin{aligned}
\wp\approx U_3&=\wp_0+\wp_1+\wp_2+\wp_3=\wp_0+M(\wp_0+\wp_1+\wp_2)\\
&=\gamma+\frac{1}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\wp_2+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1+\wp_2)\right)}\\
&\quad\times\left[\Upsilon(\wp_0+\wp_1+\wp_2)-\frac{1}{6}(\wp_0+\wp_1+\wp_2-\gamma)\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\wp_2+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1+\wp_2)\right)-\gamma\right]\\
&=\frac{\Upsilon(\wp_0+\wp_1+\wp_2)-\frac{1}{6}(\wp_0+\wp_1+\wp_2)\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\wp_2+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1+\wp_2)\right)}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{\wp_0+\wp_1+\wp_2+\gamma}{2}\right)+\Upsilon'(\wp_0+\wp_1+\wp_2)\right)}.
\end{aligned}
$$
Using (2.19), we have
$$
\wp=\frac{\Upsilon(w)-\frac{1}{6}w\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{w+\gamma}{2}\right)+\Upsilon'(w)\right)}{1-\frac{1}{6}\left(\Upsilon'(\gamma)+4\Upsilon'\!\left(\frac{w+\gamma}{2}\right)+\Upsilon'(w)\right)}.
$$
Using this relation, we suggest the following three-step method for solving nonlinear Eq (2.1).
Algorithm 2.3. For a given initial guess $\wp_0$, compute the approximate solution $\wp_{n+1}$ by the following iterative scheme:
$$
v_n=\frac{\Upsilon(\wp_n)-\wp_n\Upsilon'(\wp_n)}{1-\Upsilon'(\wp_n)},
$$
$$
w_n=\frac{\Upsilon(v_n)-\frac{1}{6}v_n\left(\Upsilon'(\wp_n)+4\Upsilon'\!\left(\frac{v_n+\wp_n}{2}\right)+\Upsilon'(v_n)\right)}{1-\frac{1}{6}\left(\Upsilon'(\wp_n)+4\Upsilon'\!\left(\frac{v_n+\wp_n}{2}\right)+\Upsilon'(v_n)\right)}, \tag{2.20}
$$
and
$$
\wp_{n+1}=\frac{\Upsilon(w_n)-\frac{1}{6}w_n\left(\Upsilon'(\wp_n)+4\Upsilon'\!\left(\frac{w_n+\wp_n}{2}\right)+\Upsilon'(w_n)\right)}{1-\frac{1}{6}\left(\Upsilon'(\wp_n)+4\Upsilon'\!\left(\frac{w_n+\wp_n}{2}\right)+\Upsilon'(w_n)\right)}, \tag{2.21}
$$
where n=0,1,2,….
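Analogously, Algorithm 2.3 adds one more Simpson-type correction; the sketch below (again our own Python illustration, not the paper's code) mirrors (2.20) and (2.21).

```python
# Algorithm 2.3: predictor v_n, first corrector w_n (2.20), second corrector p_{n+1} (2.21).
def algorithm_2_3(Y, dY, p0, eps=1e-5, max_iter=100):
    p = p0
    for n in range(max_iter):
        v = (Y(p) - p * dY(p)) / (1.0 - dY(p))
        Sv = (dY(p) + 4.0 * dY((v + p) / 2.0) + dY(v)) / 6.0
        w = (Y(v) - v * Sv) / (1.0 - Sv)                       # step (2.20)
        Sw = (dY(p) + 4.0 * dY((w + p) / 2.0) + dY(w)) / 6.0
        p_new = (Y(w) - w * Sw) / (1.0 - Sw)                   # step (2.21)
        if abs(p_new - p) < eps:
            return p_new, n + 1
        p = p_new
    return p, max_iter
```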
This section presents the convergence analysis of the proposed Algorithms 2.2 and 2.3. It is shown that these methods are third- and fourth-order convergent, respectively.
Theorem 3.1. Let $I\subset\mathbb{R}$ be an open interval and let $\Lambda:I\to\mathbb{R}$ be a differentiable function. If $\beta\in I$ is a simple root of $\Lambda(\wp)=0$ and $\wp_0$ is sufficiently close to $\beta$, then the multi-step method defined by Algorithm 2.2 has third-order convergence.
Proof. Let $\beta$ be the root of the nonlinear equation $\Lambda(\wp)=0$, or equivalently of $\wp=\Upsilon(\wp)$, and let $e_n=\wp_n-\beta$ and $e_{n+1}=\wp_{n+1}-\beta$ be the errors at the $n$th and $(n+1)$th iterations, respectively.
Now, expanding $\Upsilon(\wp_n)$ and $\Upsilon'(\wp_n)$ in Taylor series about $\beta$ and using $\Upsilon(\beta)=\beta$, we have
$$
\Upsilon(\wp_n)=\beta+e_n\Upsilon'(\beta)+\frac{e_n^2}{2}\Upsilon''(\beta)+\frac{e_n^3}{6}\Upsilon'''(\beta)+O(e_n^4), \tag{3.1}
$$
and
$$
\Upsilon'(\wp_n)=\Upsilon'(\beta)+e_n\Upsilon''(\beta)+\frac{e_n^2}{2}\Upsilon'''(\beta)+\frac{e_n^3}{6}\Upsilon^{(iv)}(\beta)+O(e_n^4). \tag{3.2}
$$
Thus, we have
$$
\Upsilon(\wp_n)-\wp_n\Upsilon'(\wp_n)=\beta-\beta\Upsilon'(\beta)-\beta\Upsilon''(\beta)e_n-\frac{1}{2}\left(\Upsilon''(\beta)+\beta\Upsilon'''(\beta)\right)e_n^2-\frac{1}{6}\left(2\Upsilon'''(\beta)+\beta\Upsilon^{(iv)}(\beta)\right)e_n^3+O(e_n^4),
$$
$$
1-\Upsilon'(\wp_n)=1-\Upsilon'(\beta)-e_n\Upsilon''(\beta)-\frac{e_n^2}{2}\Upsilon'''(\beta)-\frac{e_n^3}{6}\Upsilon^{(iv)}(\beta)+O(e_n^4),
$$
and
$$
\frac{\Upsilon(\wp_n)-\wp_n\Upsilon'(\wp_n)}{1-\Upsilon'(\wp_n)}=\beta+\frac{\Upsilon''(\beta)}{2(-1+\Upsilon'(\beta))}e_n^2-\frac{2\Upsilon'''(\beta)-2\Upsilon'''(\beta)\Upsilon'(\beta)+3\Upsilon''^2(\beta)}{6(-1+\Upsilon'(\beta))^2}e_n^3+O(e_n^4).
$$
From (2.16), we have
$$
\begin{aligned}
\wp_{n+1}&=\beta+\frac{\Upsilon''(\beta)}{2(-1+\Upsilon'(\beta))}e_n^2-\frac{2\Upsilon'''(\beta)-2\Upsilon'''(\beta)\Upsilon'(\beta)+3\Upsilon''^2(\beta)}{6(-1+\Upsilon'(\beta))^2}e_n^3\\
&\quad+\frac{1}{12(-1+\Upsilon'(\beta))^3}\Big(2\Upsilon^{(iv)}(\beta)-4\Upsilon^{(iv)}(\beta)\Upsilon'(\beta)+2\Upsilon^{(iv)}(\beta)\Upsilon'^2(\beta)+7\Upsilon''(\beta)\Upsilon'''(\beta)\\
&\qquad-7\Upsilon''(\beta)\Upsilon'''(\beta)\Upsilon'(\beta)+6\Upsilon''^3(\beta)\Big)e_n^4+O(e_n^5).
\end{aligned}
$$
Using (2.17), we obtain
$$
\begin{aligned}
v_n&=\beta+\frac{\Upsilon''(\beta)}{2(-1+\Upsilon'(\beta))}e_n^2-\frac{2\Upsilon'''(\beta)-2\Upsilon'''(\beta)\Upsilon'(\beta)+3\Upsilon''^2(\beta)}{6(-1+\Upsilon'(\beta))^2}e_n^3\\
&\quad+\frac{1}{12(-1+\Upsilon'(\beta))^3}\Big(2\Upsilon^{(iv)}(\beta)-4\Upsilon^{(iv)}(\beta)\Upsilon'(\beta)+2\Upsilon^{(iv)}(\beta)\Upsilon'^2(\beta)+7\Upsilon''(\beta)\Upsilon'''(\beta)\\
&\qquad-7\Upsilon''(\beta)\Upsilon'''(\beta)\Upsilon'(\beta)+6\Upsilon''^3(\beta)\Big)e_n^4+O(e_n^5).
\end{aligned}
$$
Expanding $\Upsilon(v_n)$ in a Taylor series about $\beta$ (with $\Upsilon(\beta)=\beta$) and inserting the above expansion of $v_n$, we get
$$
\Upsilon(v_n)=\beta+\Upsilon'(\beta)(v_n-\beta)+\frac{\Upsilon''(\beta)}{2}(v_n-\beta)^2+\frac{\Upsilon'''(\beta)}{6}(v_n-\beta)^3+O\!\left((v_n-\beta)^4\right). \tag{3.3}
$$
Expanding Υ′(vn) in terms of Taylor series about β, we get
$$
\begin{aligned}
\Upsilon'(v_n)&=\Upsilon'(\beta)+\frac{\Upsilon''(\beta)}{2(-1+\Upsilon'(\beta))}e_n^2+\frac{1}{6(-1+\Upsilon'(\beta))^2}\left(-2\Upsilon'''(\beta)+2\Upsilon'''(\beta)\Upsilon'(\beta)-3\Upsilon''^2(\beta)\right)e_n^3\\
&\quad+\frac{\Upsilon''(\beta)}{24(-1+\Upsilon'(\beta))^3}\Big(4\Upsilon^{(iv)}(\beta)-8\Upsilon^{(iv)}(\beta)\Upsilon'(\beta)+4\Upsilon^{(iv)}(\beta)\Upsilon'^2(\beta)+11\Upsilon''(\beta)\Upsilon'''(\beta)\\
&\qquad-11\Upsilon''(\beta)\Upsilon'''(\beta)\Upsilon'(\beta)+12\Upsilon''^3(\beta)\Big)e_n^4+O(e_n^5).
\end{aligned}\tag{3.4}
$$
Expanding $\Upsilon'\!\left(\frac{v_n+\wp_n}{2}\right)$ in a Taylor series about $\beta$, we get
$$
\begin{aligned}
\Upsilon'\!\left(\frac{v_n+\wp_n}{2}\right)&=\Upsilon'(\beta)+\frac{\Upsilon''(\beta)}{2}e_n+\frac{1}{8(-1+\Upsilon'(\beta))}\left(2\Upsilon''^2(\beta)+\Upsilon'''(\beta)\Upsilon'(\beta)-\Upsilon'''(\beta)\right)e_n^2\\
&\quad+\frac{1}{48(-1+\Upsilon'(\beta))^2}\Big(-14\Upsilon''(\beta)\Upsilon'''(\beta)+14\Upsilon''(\beta)\Upsilon'''(\beta)\Upsilon'(\beta)+\Upsilon^{(iv)}(\beta)-12\Upsilon''^3(\beta)\\
&\qquad-2\Upsilon^{(iv)}(\beta)\Upsilon'(\beta)+\Upsilon^{(iv)}(\beta)\Upsilon'^2(\beta)\Big)e_n^3\\
&\quad+\frac{1}{96(-1+\Upsilon'(\beta))^3}\Big(11\Upsilon^{(iv)}(\beta)\Upsilon''(\beta)-22\Upsilon^{(iv)}(\beta)\Upsilon'(\beta)\Upsilon''(\beta)+11\Upsilon^{(iv)}(\beta)\Upsilon'^2(\beta)\Upsilon''(\beta)\\
&\qquad+37\Upsilon''^2(\beta)\Upsilon'''(\beta)-37\Upsilon''^2(\beta)\Upsilon'''(\beta)\Upsilon'(\beta)+24\Upsilon''^4(\beta)+8\Upsilon'''^2(\beta)-16\Upsilon'''^2(\beta)\Upsilon'(\beta)\\
&\qquad+8\Upsilon'^2(\beta)\Upsilon'''^2(\beta)\Big)e_n^4+O(e_n^5).
\end{aligned}\tag{3.5}
$$
From (3.2), (3.4) and (3.5), we have
$$
\begin{aligned}
\frac{1}{6}\left[\Upsilon'(v_n)+4\Upsilon'\!\left(\frac{v_n+\wp_n}{2}\right)+\Upsilon'(\wp_n)\right]&=\Upsilon'(\beta)+\frac{1}{2}\Upsilon''(\beta)e_n+\frac{1}{12}\Upsilon'''(\beta)e_n^2+\frac{5\Upsilon''^2(\beta)}{12(-1+\Upsilon'(\beta))}e_n^2+\frac{1}{36}\Upsilon^{(iv)}(\beta)e_n^3\\
&\quad+\frac{5\Upsilon''(\beta)}{36(-1+\Upsilon'(\beta))^2}\left(2\Upsilon'(\beta)\Upsilon'''(\beta)-2\Upsilon'''(\beta)-3\Upsilon''^2(\beta)\right)e_n^3+\frac{1}{144}\Upsilon^{(v)}(\beta)e_n^4\\
&\quad+\frac{5\Upsilon''(\beta)}{144(-1+\Upsilon'(\beta))^3}\Big(3\Upsilon^{(iv)}(\beta)+12\Upsilon''^3(\beta)-6\Upsilon'(\beta)\Upsilon^{(iv)}(\beta)+3\Upsilon'^2(\beta)\Upsilon^{(iv)}(\beta)\\
&\qquad+14\Upsilon''(\beta)\Upsilon'''(\beta)-14\Upsilon'(\beta)\Upsilon''(\beta)\Upsilon'''(\beta)\Big)e_n^4+O(e_n^5).
\end{aligned}\tag{3.6}
$$
Now, multiplying (3.6) by the expansion of $v_n$ gives the series for
$$
\frac{v_n}{6}\left[\Upsilon'(v_n)+4\Upsilon'\!\left(\frac{v_n+\wp_n}{2}\right)+\Upsilon'(\wp_n)\right]. \tag{3.7}
$$
Using (3.3) and (3.7), we obtain the series for
$$
\Upsilon(v_n)-\frac{v_n}{6}\left[\Upsilon'(v_n)+4\Upsilon'\!\left(\frac{v_n+\wp_n}{2}\right)+\Upsilon'(\wp_n)\right]. \tag{3.8}
$$
Using (3.6), we have
$$
\begin{aligned}
1-\frac{1}{6}\left[\Upsilon'(v_n)+4\Upsilon'\!\left(\frac{v_n+\wp_n}{2}\right)+\Upsilon'(\wp_n)\right]&=1-\Upsilon'(\beta)-\frac{1}{2}\Upsilon''(\beta)e_n-\frac{1}{12}\Upsilon'''(\beta)e_n^2-\frac{5\Upsilon''^2(\beta)}{12(-1+\Upsilon'(\beta))}e_n^2-\frac{1}{36}\Upsilon^{(iv)}(\beta)e_n^3\\
&\quad-\frac{5\Upsilon''(\beta)}{36(-1+\Upsilon'(\beta))^2}\left(2\Upsilon'(\beta)\Upsilon'''(\beta)-2\Upsilon'''(\beta)-3\Upsilon''^2(\beta)\right)e_n^3-\frac{1}{144}\Upsilon^{(v)}(\beta)e_n^4\\
&\quad-\frac{5\Upsilon''(\beta)}{144(-1+\Upsilon'(\beta))^3}\Big(3\Upsilon^{(iv)}(\beta)+12\Upsilon''^3(\beta)-6\Upsilon'(\beta)\Upsilon^{(iv)}(\beta)+3\Upsilon'^2(\beta)\Upsilon^{(iv)}(\beta)\\
&\qquad+14\Upsilon''(\beta)\Upsilon'''(\beta)-14\Upsilon'(\beta)\Upsilon''(\beta)\Upsilon'''(\beta)\Big)e_n^4+O(e_n^5).
\end{aligned}\tag{3.9}
$$
Dividing (3.8) by (3.9), we have
$$
\frac{\Upsilon(v_n)-\frac{v_n}{6}\left[\Upsilon'(v_n)+4\Upsilon'\!\left(\frac{v_n+\wp_n}{2}\right)+\Upsilon'(\wp_n)\right]}{1-\frac{1}{6}\left[\Upsilon'(v_n)+4\Upsilon'\!\left(\frac{v_n+\wp_n}{2}\right)+\Upsilon'(\wp_n)\right]}=\beta+\frac{\Upsilon''^2(\beta)}{4(-1+\Upsilon'(\beta))^2}e_n^3+\frac{30\Upsilon'(\beta)\Upsilon''(\beta)\Upsilon'''(\beta)-30\Upsilon''(\beta)\Upsilon'''(\beta)-24\Upsilon''^3(\beta)}{144(-1+\Upsilon'(\beta))^3}e_n^4+O(e_n^5).
$$
Using (2.18), we have
$$
\wp_{n+1}=\beta+\frac{\Upsilon''^2(\beta)}{4(-1+\Upsilon'(\beta))^2}e_n^3+\frac{30\Upsilon'(\beta)\Upsilon''(\beta)\Upsilon'''(\beta)-30\Upsilon''(\beta)\Upsilon'''(\beta)-24\Upsilon''^3(\beta)}{144(-1+\Upsilon'(\beta))^3}e_n^4+O(e_n^5). \tag{3.10}
$$
Therefore,
$$
e_{n+1}=\frac{\Upsilon''^2(\beta)}{4(-1+\Upsilon'(\beta))^2}e_n^3+\frac{30\Upsilon'(\beta)\Upsilon''(\beta)\Upsilon'''(\beta)-30\Upsilon''(\beta)\Upsilon'''(\beta)-24\Upsilon''^3(\beta)}{144(-1+\Upsilon'(\beta))^3}e_n^4+O(e_n^5).
$$
This shows that Algorithm 2.2 has third-order convergence.
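The theoretical order can be checked numerically through the usual ratio $\log|e_{n+1}/e_n|/\log|e_n/e_{n-1}|$. The sketch below is our own check (assuming the `mpmath` library and the test function of Example 4.1); it should produce values approaching 3 for Algorithm 2.2, and replacing the loop body by the three steps of Algorithm 2.3 should give values approaching 4.

```python
# Estimate the computational order of convergence of Algorithm 2.2 on Example 4.1.
from mpmath import mp, mpf, log, fabs, sqrt

mp.dps = 200                                   # enough digits so rounding does not mask the order
Y  = lambda p: sqrt(10 / p)                    # Upsilon for Lambda(p) = p^3 - 10
dY = lambda p: -sqrt(10) / (2 * p**mpf(1.5))
root = mpf(10) ** (mpf(1) / 3)                 # exact root 10^(1/3)

p, errors = mpf("1.5"), []
for _ in range(5):
    v = (Y(p) - p * dY(p)) / (1 - dY(p))
    S = (dY(p) + 4 * dY((v + p) / 2) + dY(v)) / 6
    p = (Y(v) - v * S) / (1 - S)
    errors.append(fabs(p - root))

for n in range(1, len(errors) - 1):            # ratios tend to the order of convergence (3)
    print(log(errors[n + 1] / errors[n]) / log(errors[n] / errors[n - 1]))
```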
Theorem 3.2. Let $I\subset\mathbb{R}$ be an open interval and let $\Lambda:I\to\mathbb{R}$ be a differentiable function. If $\beta\in I$ is a simple root of $\Lambda(\wp)=0$ and $\wp_0$ is sufficiently close to $\beta$, then the multi-step method defined by Algorithm 2.3 has fourth-order convergence.
Proof. From (3.10), we have
$$
w_n=\beta+\frac{\Upsilon''^2(\beta)}{4(-1+\Upsilon'(\beta))^2}e_n^3+\frac{30\Upsilon'(\beta)\Upsilon''(\beta)\Upsilon'''(\beta)-30\Upsilon''(\beta)\Upsilon'''(\beta)-24\Upsilon''^3(\beta)}{144(-1+\Upsilon'(\beta))^3}e_n^4+O(e_n^5).
$$
Expanding $\Upsilon(w_n)$ in a Taylor series about $\beta$ (with $\Upsilon(\beta)=\beta$), we get
$$
\Upsilon(w_n)=\beta+\Upsilon'(\beta)(w_n-\beta)+\frac{\Upsilon''(\beta)}{2}(w_n-\beta)^2+\frac{\Upsilon'''(\beta)}{6}(w_n-\beta)^3+O\!\left((w_n-\beta)^4\right). \tag{3.11}
$$
Expanding $\Upsilon'(w_n)$ in a Taylor series about $\beta$, we get
$$
\Upsilon'(w_n)=\Upsilon'(\beta)+\Upsilon''(\beta)(w_n-\beta)+\frac{\Upsilon'''(\beta)}{2}(w_n-\beta)^2+O\!\left((w_n-\beta)^3\right). \tag{3.12}
$$
Expanding $\Upsilon'\!\left(\frac{w_n+\wp_n}{2}\right)$ in a Taylor series about $\beta$, we get
$$
\Upsilon'\!\left(\frac{w_n+\wp_n}{2}\right)=\Upsilon'(\beta)+\frac{1}{2}\Upsilon''(\beta)e_n+\frac{\Upsilon''^3(\beta)}{4(-1+\Upsilon'(\beta))^2}e_n^3+\frac{\Upsilon''(\beta)}{48(-1+\Upsilon'(\beta))^3}\left(15\Upsilon'(\beta)\Upsilon''(\beta)\Upsilon'''(\beta)-15\Upsilon''(\beta)\Upsilon'''(\beta)-12\Upsilon''^3(\beta)\right)e_n^4+O(e_n^5). \tag{3.13}
$$
Using (3.2), (3.12) and (3.13), we have
$$
\begin{aligned}
\frac{1}{6}\left[\Upsilon'(w_n)+4\Upsilon'\!\left(\frac{w_n+\wp_n}{2}\right)+\Upsilon'(\wp_n)\right]&=\Upsilon'(\beta)+\frac{1}{2}\Upsilon''(\beta)e_n+\frac{1}{12}\Upsilon'''(\beta)e_n^2+\frac{1}{24}\Upsilon^{(iv)}(\beta)e_n^3+\frac{5\Upsilon''^3(\beta)}{24(-1+\Upsilon'(\beta))^2}e_n^3\\
&\quad+\frac{1}{144}\Upsilon^{(v)}(\beta)e_n^4+\frac{5\Upsilon''(\beta)}{36(-1+\Upsilon'(\beta))^3}\left(15\Upsilon'(\beta)\Upsilon''(\beta)\Upsilon'''(\beta)-15\Upsilon''(\beta)\Upsilon'''(\beta)-12\Upsilon''^3(\beta)\right)e_n^4\\
&\quad+O(e_n^5).
\end{aligned}\tag{3.14}
$$
Now,
$$
\begin{aligned}
\frac{w_n}{6}\left[\Upsilon'(w_n)+4\Upsilon'\!\left(\frac{w_n+\wp_n}{2}\right)+\Upsilon'(\wp_n)\right]&=\beta\Upsilon'(\beta)+\frac{1}{2}\beta\Upsilon''(\beta)e_n+\frac{1}{12}\beta\Upsilon'''(\beta)e_n^2+\frac{1}{24}\beta\Upsilon^{(iv)}(\beta)e_n^3+\frac{5\beta\Upsilon''^3(\beta)}{24(-1+\Upsilon'(\beta))^2}e_n^3\\
&\quad+\frac{1}{144}\beta\Upsilon^{(v)}(\beta)e_n^4+\frac{5\Upsilon''(\beta)}{36(-1+\Upsilon'(\beta))^3}\left(15\Upsilon'(\beta)\Upsilon''(\beta)\Upsilon'''(\beta)-15\Upsilon''(\beta)\Upsilon'''(\beta)-12\Upsilon''^3(\beta)\right)e_n^4\\
&\quad+\frac{\Upsilon''^3(\beta)}{8(-1+\Upsilon'(\beta))^2}e_n^4+O(e_n^5).
\end{aligned}\tag{3.15}
$$
Using (3.11) and (3.15), we have
$$
\Upsilon(w_n)-\frac{w_n}{6}\left[\Upsilon'(w_n)+4\Upsilon'\!\left(\frac{w_n+\wp_n}{2}\right)+\Upsilon'(\wp_n)\right] \tag{3.16}
$$
$$
\begin{aligned}
&=\beta-\beta\Upsilon'(\beta)-\frac{1}{2}\beta\Upsilon''(\beta)e_n-\frac{1}{12}\beta\Upsilon'''(\beta)e_n^2-\frac{1}{24}\beta\Upsilon^{(iv)}(\beta)e_n^3-\frac{5\beta\Upsilon''^3(\beta)}{24(-1+\Upsilon'(\beta))^2}e_n^3-\frac{1}{144}\beta\Upsilon^{(v)}(\beta)e_n^4\\
&\quad-\frac{5\Upsilon''(\beta)}{36(-1+\Upsilon'(\beta))^3}\left(15\Upsilon'(\beta)\Upsilon''(\beta)\Upsilon'''(\beta)-15\Upsilon''(\beta)\Upsilon'''(\beta)-12\Upsilon''^3(\beta)\right)e_n^4-\frac{\Upsilon''^3(\beta)}{8(-1+\Upsilon'(\beta))^2}e_n^4+O(e_n^5).
\end{aligned}\tag{3.17}
$$
Using (3.14), we have
$$
1-\frac{1}{6}\left[\Upsilon'(w_n)+4\Upsilon'\!\left(\frac{w_n+\wp_n}{2}\right)+\Upsilon'(\wp_n)\right]=1-\Upsilon'(\beta)-\frac{1}{2}\Upsilon''(\beta)e_n-\frac{1}{12}\Upsilon'''(\beta)e_n^2-\frac{1}{24}\Upsilon^{(iv)}(\beta)e_n^3+O(e_n^4). \tag{3.18}
$$
Dividing (3.17) by (3.18), we have
$$
\frac{\Upsilon(w_n)-\frac{w_n}{6}\left[\Upsilon'(w_n)+4\Upsilon'\!\left(\frac{w_n+\wp_n}{2}\right)+\Upsilon'(\wp_n)\right]}{1-\frac{1}{6}\left[\Upsilon'(w_n)+4\Upsilon'\!\left(\frac{w_n+\wp_n}{2}\right)+\Upsilon'(\wp_n)\right]}=\beta+\frac{\Upsilon''^3(\beta)}{8(-1+\Upsilon'(\beta))^2}e_n^4+O(e_n^5).
$$
Using (2.21), we have
$$
\wp_{n+1}=\beta+\frac{\Upsilon''^3(\beta)}{8(-1+\Upsilon'(\beta))^2}e_n^4+O(e_n^5).
$$
Therefore,
$$
e_{n+1}=\frac{\Upsilon''^3(\beta)}{8(-1+\Upsilon'(\beta))^2}e_n^4+O(e_n^5).
$$
This shows that Algorithm 2.3 has fourth order of convergence.
This section illustrates the efficiency of the algorithms introduced in this paper on a set of test examples. We compute an approximate simple root rather than the exact one, to the precision allowed by the computer. All computations use the stopping criterion $|\wp_{n+1}-\wp_n|<\varepsilon$ with $\varepsilon=10^{-5}$.
We compare the Newton method (NM), the Halley method (HM) and Algorithms 2A, 2B and 2C of [12] with Algorithms 2.2 and 2.3. In the tables, we display the number of iterations (IT), the approximate root $\wp_n$, the value of $\Lambda(\wp_n)$, the difference $\delta=|\wp_n-\wp_{n-1}|$ and the CPU time.
All test problems were solved, and the CPU times (in seconds) were measured, with the computer algebra system Maple 17.
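As an illustration of how such tables can be assembled, a small Python harness is sketched below (our own construction, not the authors' Maple worksheet); it records the iteration count, the approximate root, $\Lambda(\wp_n)$ and a CPU time for any of the method sketches given earlier.

```python
# Hypothetical test harness mirroring the table columns (IT, p_n, Lambda(p_n), CPU time).
# 'method' is any of the sketches above, e.g. algorithm_2_2 or algorithm_2_3.
import time

def run(method, Lam, Y, dY, p0, eps=1e-5):
    t0 = time.process_time()
    p, it = method(Y, dY, p0, eps=eps)
    return {"IT": it, "p_n": p, "Lambda(p_n)": Lam(p), "CPU": time.process_time() - t0}
```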
Example 4.1. For $\Lambda(\wp)=\wp^3-10$, $\Upsilon(\wp)=\sqrt{10/\wp}$ and $\wp_0=1.5$.
| Methods | IT | $\wp_n$ | $\Lambda(\wp_n)$ | $\delta=\vert\wp_n-\wp_{n-1}\vert$ | CPU Time |
|---|---|---|---|---|---|
| NM [12] | 5 | 2.1544346900319185890 | 4.85518e−13 | 2.740789522304e−7 | 0.218 |
| HM [12] | 5 | 2.1544346900319185890 | 4.85518e−13 | 2.740789522304e−7 | 0.453 |
| Algorithm 2A [12] | 4 | 2.1544346900318834557 | −3.7048e−15 | 4.78781787078e−8 | 0.390 |
| Algorithm 2B [12] | 3 | 2.1544346900318837218 | 1.0e−18 | 3.6257326037e−9 | 0.312 |
| Algorithm 2C [12] | 3 | 2.1544346900318837217 | −8.0e−19 | 1.457e−16 | 0.140 |
| Algorithm 2.2 | 3 | 2.1544346900318837216 | −2.2e−18 | 4.4395190876e−9 | 0.500 |
| Algorithm 2.3 | 3 | 2.1544346900318837217 | −8.0e−19 | 2.645e−16 | 0.281 |
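For instance, Example 4.1 can be rerun with the sketches given earlier (in double precision, so only about 16 digits of the tabulated roots can be matched, and the timings will differ from the Maple 17 values above).

```python
# Hypothetical reproduction of Example 4.1: Lambda(p) = p^3 - 10, Upsilon(p) = sqrt(10/p), p0 = 1.5.
from math import sqrt

Lam = lambda p: p**3 - 10
Y   = lambda p: sqrt(10 / p)
dY  = lambda p: -sqrt(10) / (2 * p**1.5)

print(run(algorithm_2_2, Lam, Y, dY, p0=1.5))   # root near 2.1544346900318837
print(run(algorithm_2_3, Lam, Y, dY, p0=1.5))
```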
Example 4.2. For $\Lambda(\wp)=\cos(\wp)-\wp$, $\Upsilon(\wp)=\cos(\wp)$ and $\wp_0=1.7$.
| Methods | IT | $\wp_n$ | $\Lambda(\wp_n)$ | $\delta=\vert\wp_n-\wp_{n-1}\vert$ | CPU Time |
|---|---|---|---|---|---|
| NM [12] | 4 | 0.73908513321516087614 | −3.9244e−16 | 3.258805388731e−8 | 0.359 |
| HM [12] | 4 | 0.73908513321516087614 | 3.9244e−16 | 3.258805388731e−18 | 0.250 |
| Algorithm 2A [12] | 4 | 0.73908513321516087612 | −3.9240e−16 | 3.258805388731e−8 | 0.578 |
| Algorithm 2B [12] | 3 | 0.73908513321516064166 | −1.0e−20 | 8.63747112426e−9 | 0.203 |
| Algorithm 2C [12] | 3 | 0.73908513321516064166 | −1.0e−20 | 3.72234e−15 | 0.187 |
| Algorithm 2.2 | 3 | 0.73908513321516064166 | −1.0e−20 | 6.04698535295e−9 | 0.218 |
| Algorithm 2.3 | 3 | 0.73908513321516064168 | −4.0e−20 | 1.41006e−15 | 0.062 |
Example 4.3. For $\Lambda(\wp)=(\wp-1)^3-1$, $\Upsilon(\wp)=1+\sqrt{1/(\wp-1)}$ and $\wp_0=3.5$.
| Methods | IT | $\wp_n$ | $\Lambda(\wp_n)$ | $\delta=\vert\wp_n-\wp_{n-1}\vert$ | CPU Time |
|---|---|---|---|---|---|
| NM [12] | 6 | 2.0000000000 | 8.63270019e−11 | −0.53642860080240e−5 | 0.859 |
| HM [12] | 6 | 2.0000000000 | 8.63270019e−11 | 0.53642860080240e−5 | 0.531 |
| Algorithm 2A [12] | 5 | 1.9999999999 | −3.0e−19 | 5.15552777e−11 | 0.500 |
| Algorithm 2B [12] | 3 | 2.0000000000 | 1.95e−17 | −0.47082591839347e−5 | 0.734 |
| Algorithm 2C [12] | 3 | 1.9999999999 | −3.0e−19 | 1.408550753e−11 | 0.468 |
| Algorithm 2.2 | 3 | 2.0000000000 | 3.3e−18 | 0.25348368459034e−5 | 0.546 |
| Algorithm 2.3 | 3 | 2.0000000000 | 6.0e−19 | 3.64102778e−11 | 0.828 |
Example 4.4. For $\Lambda(\wp)=e^{\wp^2+7\wp-30}-1$, $\Upsilon(\wp)=\frac{1}{7}(30-\wp^2)$ and $\wp_0=3.5$.
| Methods | IT | $\wp_n$ | $\Lambda(\wp_n)$ | $\delta=\vert\wp_n-\wp_{n-1}\vert$ | CPU Time |
|---|---|---|---|---|---|
| NM [12] | 10 | 3.0000001961 | 0.2550069197e−18 | 0.1725678800759161e−3 | 2.031 |
| HM [12] | 10 | 3.0000001961 | 0.2550069197e−5 | 0.1725678800759161e−3 | 1.640 |
| Algorithm 2A [12] | 4 | 3.0000000000 | 0 | 4.60291802e−11 | 1.015 |
| Algorithm 2B [12] | 3 | 3.0000000000 | 0 | 1.7047845e−12 | 0.984 |
| Algorithm 2C [12] | 3 | 2.9999999999 | 0.9999999999 | 1.0e−19 | 0.781 |
| Algorithm 2.2 | 3 | 2.9999999999 | −2.00e−18 | 1.7047846e−12 | 0.203 |
| Algorithm 2.3 | 3 | 2.9999999999 | −2.00e−18 | 1.0e−19 | 1.656 |
Example 4.5. For $\Lambda(\wp)=\sin^2(\wp)-\wp^2+1$, $\Upsilon(\wp)=\sin(\wp)+\frac{1}{\sin(\wp)+\wp}$ and $\wp_0=1$.
| Methods | IT | $\wp_n$ | $\Lambda(\wp_n)$ | $\delta=\vert\wp_n-\wp_{n-1}\vert$ | CPU Time |
|---|---|---|---|---|---|
| NM [12] | 5 | 1.4044916482156470349 | 7.591622e−13 | 6.247205954873e−7 | 4.500 |
| HM [12] | 5 | 1.4044916482156470349 | 7.591622e−13 | 6.247205954873e−7 | 2.703 |
| Algorithm 2A [12] | 4 | 1.4044916482153412269 | −2.1e−18 | 1.8258042740e−9 | 1.734 |
| Algorithm 2B [12] | 3 | 1.4044916482153412261 | −2.0e−19 | 4.830530998e−10 | 1.656 |
| Algorithm 2C [12] | 3 | 1.4044916482153412261 | −2.0e−19 | 3.27e−17 | 1.546 |
| Algorithm 2.2 | 3 | 1.4044916482153412259 | 3.8e−19 | 3.039794094e−10 | 1.203 |
| Algorithm 2.3 | 3 | 1.4044916482153412260 | 1.0e−19 | 9.9e−18 | 1.015 |
Example 4.6. For $\Lambda(\wp)=e^{\wp}-3\wp^2$, $\Upsilon(\wp)=\sqrt{e^{\wp}/3}$ and $\wp_0=0.8$.
| Methods | IT | $\wp_n$ | $\Lambda(\wp_n)$ | $\delta=\vert\wp_n-\wp_{n-1}\vert$ | CPU Time |
|---|---|---|---|---|---|
| NM [12] | 4 | 0.91000757248870906 | −2.3e−18 | 1.13400774851e−9 | 4.796 |
| HM [12] | 4 | 0.91000757248870906 | −2.3e−18 | 1.13400774848e−9 | 3.015 |
| Algorithm 2A [12] | 3 | 0.91000757248844157 | 7.959612e−13 | 0.113206254671831e−5 | 2.281 |
| Algorithm 2B [12] | 3 | 0.91000757248870906 | −1.0e−19 | 5.97372e−15 | 1.953 |
| Algorithm 2C [12] | 2 | 0.91000757248870906 | 2.0e−19 | 0.114201199652787e−5 | 1.828 |
| Algorithm 2.2 | 3 | 0.91000757248870906 | 0 | 5.89807e−15 | 1.484 |
| Algorithm 2.3 | 2 | 0.91000757248870906 | 2.0e−19 | 0.113215100426461e−5 | 0.234 |
We have established two new algorithms of third- and fourth-order convergence by using a modified decomposition technique for the coupled system of equations. We have discussed the convergence analysis of these newly established algorithms. With the help of test examples, a computational comparison has been made with well-known third- and fourth-order convergent iterative methods. It is observed from Examples 4.1 to 4.6 that, in several cases, the CPU time and accuracy of the proposed algorithms are better than those of the existing algorithms.
This research was supported by the Fundamental Fund of Khon Kaen University, Thailand.
The authors declare that they have no competing interests.
[1] F. Ali, W. Aslam, K. Ali, M. A. Anwar, A. Nadeem, New family of iterative methods for solving nonlinear models, Discrete Dyn. Nat. Soc., 2018 (2018), 9619680. https://doi.org/10.1155/2018/9619680
[2] F. Ali, W. Aslam, I. Khalid, A. Nadeem, Iteration methods with an auxiliary function for nonlinear equations, J. Math., 2020 (2020), 7356408. https://doi.org/10.1155/2020/7356408
[3] G. Adomian, Nonlinear stochastic systems theory and applications to physics, Kluwer Academic Publishers, 1989.
[4] S. Abbasbandy, Improving Newton-Raphson method for nonlinear equations by modified Adomian decomposition method, Appl. Math. Comput., 145 (2003), 887–893. https://doi.org/10.1016/S0096-3003(03)00282-0
[5] C. Chun, Iterative methods improving Newton's method by the decomposition method, Comput. Math. Appl., 50 (2005), 1559–1568. https://doi.org/10.1016/j.camwa.2005.08.022
[6] M. T. Darvishi, A. Barati, A third-order Newton-type method to solve systems of nonlinear equations, Appl. Math. Comput., 187 (2007), 630–635. https://doi.org/10.1016/j.amc.2006.08.080
[7] V. Daftardar-Gejji, H. Jafari, An iterative method for solving nonlinear functional equations, J. Math. Anal. Appl., 316 (2006), 753–763. https://doi.org/10.1016/j.jmaa.2005.05.009
[8] J. H. He, A new iteration method for solving algebraic equations, Appl. Math. Comput., 135 (2003), 81–84. https://doi.org/10.1016/S0096-3003(01)00313-7
[9] S. M. Kang, A. Rafiq, Y. C. Kwun, A new second-order iteration method for solving nonlinear equations, Abstr. Appl. Anal., 2013 (2013), 487062. https://doi.org/10.1155/2013/487062
[10] M. A. Noor, Fifth-order convergent iterative method for solving nonlinear equations using quadrature formula, JMCSA, 4 (2018), 0974–0570.
[11] A. Y. Ozban, Some new variants of Newton's method, Appl. Math. Lett., 17 (2004), 677–682. https://doi.org/10.1016/S0893-9659(04)90104-8
[12] G. Sana, M. A. Noor, K. I. Noor, Some multistep iterative methods for nonlinear equation using quadrature rule, Int. J. Anal. Appl., 18 (2020), 920–938.
[13] M. Saqib, M. Iqbal, Some multi-step iterative methods for solving nonlinear equations, Open J. Math. Sci., 1 (2017), 25–33. https://doi.org/10.30538/oms2017.0003
[14] S. Weerakoon, T. Fernando, A variant of Newton's method with accelerated third-order convergence, Appl. Math. Lett., 13 (2000), 87–93. https://doi.org/10.1016/S0893-9659(00)00100-2
[15] M. Heydari, S. M. Hosseini, G. B. Loghmani, Convergence of a family of third-order methods free from second derivatives for finding multiple roots of nonlinear equations, World Appl. Sci. J., 11 (2010), 507–512.
[16] M. Heydari, S. M. Hosseini, G. B. Loghmani, On two new families of iterative methods for solving nonlinear equations with optimal order, Appl. Anal. Discrete Math., 5 (2011), 93–109. https://doi.org/10.2298/AADM110228012H
[17] M. Heydari, G. B. Loghmani, Third-order and fourth-order iterative methods free from second derivative for finding multiple roots of nonlinear equations, CJMS, 3 (2014), 67–85.
[18] M. M. Sehati, S. M. Karbassi, M. Heydari, G. B. Loghmani, Several new iterative methods for solving nonlinear algebraic equations incorporating homotopy perturbation method (HPM), Int. J. Phys. Sci., 7 (2012), 5017–5025. https://doi.org/10.5897/IJPS12.279