Research article

Generalized high-order iterative methods for solutions of nonlinear systems and their applications

  • In this paper, we have constructed a family of three-step methods with sixth-order convergence and a novel approach to enhance the convergence order $p$ of iterative methods for systems of nonlinear equations. Additionally, we propose a three-step scheme with convergence order $p+3$ (for $p \ge 3$) and have extended it to a generalized $(m+2)$-step scheme by merely incorporating one additional function evaluation, thus achieving convergence orders up to $p+3m$, $m \in \mathbb{N}$. We also provide a thorough local convergence analysis in Banach spaces, including the convergence radius and uniqueness results, under the assumption of a Lipschitz-continuous Fréchet derivative. Theoretical findings have been validated through numerical experiments. Lastly, the performance of these methods is showcased through the analysis of their basins of attraction and their application to systems of nonlinear equations.

    Citation: G Thangkhenpau, Sunil Panday, Bhavna Panday, Carmen E. Stoenoiu, Lorentz Jäntschi. Generalized high-order iterative methods for solutions of nonlinear systems and their applications[J]. AIMS Mathematics, 2024, 9(3): 6161-6182. doi: 10.3934/math.2024301




    The pursuit of constructing fixed-point iterative techniques for the solution of nonlinear equations or systems of nonlinear equations stands as a compelling and formidable challenge within the domains of numerical analysis and various applied sciences. The significance of this topic has led to the development of numerous numerical techniques, often employing iterative approaches, to yield highly precise approximate solutions for systems of nonlinear equations, represented as follows:

    $\Omega(s) = 0, \qquad (1.1)$

    where $\Omega : D \subseteq X \to Y$, with $X$ and $Y$ Banach spaces, is continuously Fréchet-differentiable, and $D$ is a non-empty open convex subset of $X$.

    One of the most prevalent iterative approaches for determining the solution $\alpha$ of (1.1) is the well-established, quadratically convergent one-point Newton method [1], which is outlined below:

    $s_{n+1} = s_n - \Omega'(s_n)^{-1}\Omega(s_n), \quad n = 0, 1, 2, \ldots, \qquad (1.2)$

    where $\Omega'(s_n)^{-1}$ denotes the inverse of the first Fréchet derivative $\Omega'(s_n)$ of $\Omega$ at $s_n$. The method's convergence relies on two crucial prerequisites: first, the initial approximation $s_0$ must be sufficiently close to the desired solution $\alpha$; second, the inverse $\Omega'(s_n)^{-1}$ of the derivative must exist within the neighborhood $D$ centered around $\alpha$.
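    In code, one Newton step amounts to solving a single linear system with the Jacobian rather than forming the inverse explicitly. The sketch below (Python with NumPy; the helper names and the illustrative test system are our own choices, not from the paper) shows one possible implementation of (1.2):

```python
import numpy as np

def newton(F, J, s, tol=1e-12, max_iter=50):
    """Newton's method (1.2): s_{n+1} = s_n - J(s_n)^{-1} F(s_n).

    Solves J(s_n) d = F(s_n) for the update d instead of inverting J.
    """
    for _ in range(max_iter):
        d = np.linalg.solve(J(s), F(s))
        s = s - d
        if np.linalg.norm(F(s)) < tol:
            break
    return s

# Illustrative system (not from the paper): x0^2 + x1^2 = 4, x0 = x1.
F = lambda s: np.array([s[0]**2 + s[1]**2 - 4.0, s[0] - s[1]])
J = lambda s: np.array([[2.0 * s[0], 2.0 * s[1]], [1.0, -1.0]])
root = newton(F, J, np.array([1.0, 1.0]))
```

    The `np.linalg.solve` call corresponds exactly to applying $\Omega'(s_n)^{-1}$ to $\Omega(s_n)$, which is the standard way to realize such schemes numerically.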

    In pursuit of a higher order of convergence, the literature has introduced a range of modifications to Newton's method, referred to as Newton-like techniques. These strategies have been explored extensively in both univariate and multivariate settings, with comprehensive discussions available in References [2,3,4,5,6,7,8,9,10,11,12] and the references therein. One of the earliest and most notable simple modifications of Newton's method is the cubically convergent Potra-Pták method (PPM3) [13], which is given below:

    $y_n = s_n - \Omega'(s_n)^{-1}\Omega(s_n), \qquad s_{n+1} = y_n - \Omega'(s_n)^{-1}\Omega(y_n). \qquad (1.3)$

    However, the enhanced convergence order of (1.3) comes at an additional computational cost in the form of one extra function evaluation, $\Omega(y_n)$.
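    Note that both sub-steps of (1.3) use the same Jacobian $\Omega'(s_n)$, so in practice one factorization can serve both linear solves. A minimal sketch (the test system is our own illustrative choice, not from the paper):

```python
import numpy as np

def potra_ptak(F, J, s, tol=1e-12, max_iter=50):
    """Potra-Ptak iteration (1.3): one Jacobian, two function evaluations.

    y_n     = s_n - J(s_n)^{-1} F(s_n)
    s_{n+1} = y_n - J(s_n)^{-1} F(y_n)
    """
    for _ in range(max_iter):
        Jn = J(s)                        # single Jacobian per iteration
        y = s - np.linalg.solve(Jn, F(s))
        s = y - np.linalg.solve(Jn, F(y))
        if np.linalg.norm(F(s)) < tol:
            break
    return s

# Illustrative system (not from the paper): x0^2 + x1^2 = 4, x0 = x1.
F = lambda s: np.array([s[0]**2 + s[1]**2 - 4.0, s[0] - s[1]])
J = lambda s: np.array([[2.0 * s[0], 2.0 * s[1]], [1.0, -1.0]])
root = potra_ptak(F, J, np.array([1.0, 1.0]))
```

    In a production code one would factor `Jn` once (e.g., via an LU decomposition) and reuse the factorization for both solves; the two separate `solve` calls above keep the sketch short.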

    An interesting recent development in the field is the hybridization of iterative techniques and optimization algorithms. Several optimization algorithms, including the butterfly optimization algorithm [14] and the sperm swarm optimization algorithm [15], have been applied to solve systems of nonlinear equations. However, optimization algorithms often fail to deliver accurate solutions because of inherent limitations, such as entrapment in local optima and divergence. Only a few researchers have attempted to combine iterative methods with optimization algorithms. Recently, Sihwail et al. [16] and Said Solaiman et al. [17] proposed new hybrid algorithms that combine iterative methods and optimization algorithms for solving systems of nonlinear equations. These hybrid approaches leverage the benefits of both classes of methods while mitigating their drawbacks.

    Aiming to contribute to this evolving landscape, we propose multipoint iterative techniques that progressively increase the convergence order while minimizing function evaluations and inverse operators. Initially, we introduce a family of three-step schemes with sixth order of convergence and then generalize it to a scheme of convergence order $p+3$, whose first two steps are Newton-like iterations of convergence order $p$ (where $p \ge 3$). Building on this, we present a more generalized $(m+2)$-step scheme with an increased convergence order of $p+3m$, $m \in \mathbb{N}$. In fact, the convergence order is raised by three for each additional step at the cost of only one more function evaluation.

    The subsequent sections of this paper are structured as follows. Section 2 introduces a three-step method and establishes its sixth order of convergence. Section 3 presents a generalized version of the family of methods with convergence order $p+3$, which is then further extended to an $(m+2)$-step scheme with convergence order $p+3m$, $m \in \mathbb{N}$. Section 4 presents a comprehensive analysis of the local convergence properties. Section 5 offers numerical examples that validate our theoretical results. Section 6 applies these methods to systems of nonlinear equations. Section 7 presents a graphical analysis of the dynamical behaviours of our newly proposed methods, comparing them with existing methods through the lens of their basins of attraction on the Cartesian plane. Finally, Section 8 includes some concluding remarks.

    In this section, we aim to develop new families of iterative methods of order six for the purpose of solving systems of nonlinear equations. First, we present a three-step scheme as follows:

    $y_n = s_n - \Omega'(s_n)^{-1}\Omega(s_n),$
    $z_n = y_n - \frac{1}{2}\Omega'(s_n)^{-1}\left[\Omega'(s_n) - \Omega'(y_n)\right]\Omega'(y_n)^{-1}\Omega(s_n),$
    $s_{n+1} = z_n - \left[k_1 I + k_2\,\Omega'(y_n)^{-1}\Omega'(s_n) + k_3\,\Omega'(s_n)^{-1}\Omega'(y_n) + k_4\left(\Omega'(s_n)+\Omega'(y_n)\right)^{-1}\Omega'(s_n)\right]\Omega'(y_n)^{-1}\Omega(z_n), \qquad (2.1)$

    where $k_1, k_2, k_3$, and $k_4$ are free parameters to be determined in the sequel.

    To obtain the convergence order of (2.1), we first recall the following result on the Taylor expansion of vector functions (see [1]).

    Lemma 2.1. Let $\Omega : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be a $p$-times Fréchet-differentiable function with $D$ a convex set; then, for any $s, h \in \mathbb{R}^n$, we have

    $\Omega(s+h) = \Omega(s) + \Omega'(s)h + \frac{1}{2!}\Omega''(s)h^2 + \cdots + \frac{1}{(p-1)!}\Omega^{(p-1)}(s)h^{p-1} + R_p, \qquad (2.2)$

    where $\|R_p\| \le \frac{1}{p!}\sup_{0 \le t \le 1}\left\|\Omega^{(p)}(s+th)\right\|\|h\|^p$ and $h^p = (h, h, \overset{p}{\ldots}, h)$.

    Theorem 2.1. Suppose that the function $\Omega : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ is sufficiently Fréchet-differentiable in a neighborhood $D$ containing the root $\alpha$ of $\Omega(s) = 0$. Assuming that $\Omega'(\alpha)$ is nonsingular, the sequence $\{s_n\}_{n \ge 0}$ ($s_0 \in D$) produced by the family of methods (2.1) converges to the solution $\alpha$ with a convergence order of six if $k_2 = \frac{2+k_1}{4}$, $k_3 = \frac{2-k_1}{4}$, and $k_4 = -2k_1$, $k_1 \in \mathbb{R}$, and it satisfies the error equation given by

    $\varepsilon_{n+1} = -\frac{1}{4}C_2C_3\left(2(k_1-4)C_2^2 + 5C_3\right)\varepsilon_n^6 + O(\varepsilon_n^7). \qquad (2.3)$

    Proof of Theorem 2.1. Expanding $\Omega(s_n)$ by the Taylor series (2.2) about $s = \alpha$ gives

    $\Omega(s_n) = \Omega'(\alpha)\left[\varepsilon_n + C_2\varepsilon_n^2 + C_3\varepsilon_n^3 + C_4\varepsilon_n^4 + C_5\varepsilon_n^5 + C_6\varepsilon_n^6 + O(\varepsilon_n^7)\right], \qquad (2.4)$

    where $C_j = \frac{1}{j!}\Omega'(\alpha)^{-1}\Omega^{(j)}(\alpha)$, $\varepsilon_n = s_n - \alpha$, and $\varepsilon_n^i = (\varepsilon_n, \varepsilon_n, \overset{i}{\ldots}, \varepsilon_n)$, $\varepsilon_n \in \mathbb{R}^n$.

    Then, it is straightforward to obtain

    $\Omega'(s_n) = \Omega'(\alpha)\left[I + 2C_2\varepsilon_n + 3C_3\varepsilon_n^2 + \sum_{i=1}^{4}(i+3)C_{i+3}\varepsilon_n^{i+2} + O(\varepsilon_n^7)\right], \qquad (2.5)$
    $\Omega'(s_n)^{-1} = \left[I - 2C_2\varepsilon_n + (4C_2^2 - 3C_3)\varepsilon_n^2 + \sum_{i=1}^{4}K_i\varepsilon_n^{i+2} + O(\varepsilon_n^7)\right]\Omega'(\alpha)^{-1}, \qquad (2.6)$

    where the $K_i$ depend on $C_2, C_3, \ldots, C_7$; e.g., $K_1 = -4\left(2C_2^3 - 3C_2C_3 + C_4\right)$, $K_2 = 16C_2^4 - 36C_2^2C_3 + 9C_3^2 + 16C_2C_4 - 5C_5$, etc.

    Using (2.4) and (2.6), we have

    $y_n - \alpha = C_2\varepsilon_n^2 + 2(C_3 - C_2^2)\varepsilon_n^3 + \sum_{i=1}^{3}M_i\varepsilon_n^{i+3} + O(\varepsilon_n^7), \qquad (2.7)$

    where the $M_i$ depend on $C_2, C_3, \ldots, C_6$; e.g., $M_1 = 4C_2^3 - 7C_2C_3 + 3C_4$, $M_2 = -8C_2^4 + 20C_2^2C_3 - 6C_3^2 - 10C_2C_4 + 4C_5$, etc.

    Using (2.2) and (2.7), the expansion of $\Omega(y_n)$ gives

    $\Omega(y_n) = \Omega'(\alpha)\left[C_2\varepsilon_n^2 + 2(C_3 - C_2^2)\varepsilon_n^3 + \sum_{i=1}^{3}N_i\varepsilon_n^{i+3} + O(\varepsilon_n^7)\right], \qquad (2.8)$

    where the $N_i$ depend on $C_2, C_3, \ldots, C_6$. It then follows that

    $\Omega'(y_n) = \Omega'(\alpha)\left[I + 2C_2^2\varepsilon_n^2 + 4C_2(C_3 - C_2^2)\varepsilon_n^3 + \sum_{i=1}^{3}P_i\varepsilon_n^{i+3} + O(\varepsilon_n^7)\right], \qquad (2.9)$

    where the $P_i$ depend on $C_2, C_3, \ldots, C_6$. Also,

    $\Omega'(y_n)^{-1} = \left[I - 2C_2^2\varepsilon_n^2 + \sum_{i=1}^{4}Q_i\varepsilon_n^{i+2} + O(\varepsilon_n^7)\right]\Omega'(\alpha)^{-1}, \qquad (2.10)$

    where the $Q_i$ depend on $C_2, C_3, \ldots, C_6$.

    Now, replacing the values of (2.4)–(2.7), (2.9), and (2.10) in the second step of (2.1), we get

    $z_n - \alpha = \frac{1}{2}C_3\varepsilon_n^3 + \left(C_2^3 - \frac{3}{2}C_2C_3 + C_4\right)\varepsilon_n^4 + \sum_{i=1}^{2}R_i\varepsilon_n^{i+4} + O(\varepsilon_n^7), \qquad (2.11)$

    where the $R_i$ depend on $C_2, C_3, \ldots, C_6$. Then, by using (2.2), (2.11) becomes

    $\Omega(z_n) = \Omega'(\alpha)\left[\frac{1}{2}C_3\varepsilon_n^3 + \left(C_2^3 - \frac{3}{2}C_2C_3 + C_4\right)\varepsilon_n^4 + \sum_{i=1}^{2}S_i\varepsilon_n^{i+4} + O(\varepsilon_n^7)\right], \qquad (2.12)$

    where the $S_i$ depend on $C_2, C_3, \ldots, C_6$.

    Incorporating the values from (2.5), (2.6), and (2.9)–(2.12) in the concluding step of (2.1), we arrive at the error equation as follows:

    $\varepsilon_{n+1} = \frac{1}{4}\left(2(1 - k_1 - k_2 - k_3) - k_4\right)C_3\varepsilon_n^3 + \sum_{i=1}^{3}T_i\varepsilon_n^{i+3} + O(\varepsilon_n^7), \qquad (2.13)$

    where the $T_i$ depend on $k_1, \ldots, k_4$ and $C_2, C_3, \ldots, C_6$; e.g.,

    $T_1 = \frac{1}{2}\left(\left(2(1-k_1-k_2-k_3)-k_4\right)C_2^3 + \left(3k_1+k_2+5k_3+k_4-3\right)C_2C_3 + \left(2(1-k_1-k_2-k_3)-k_4\right)C_4\right)$, $T_2 = \frac{1}{8}\left(4\left(8-8k_1-4k_2-12k_3-3k_4\right)C_2^4 - 4\left(13k_1+5(k_2+5k_3-3)+4k_4\right)C_2^2C_3 + 3\left(8-8k_1-4k_2-12k_3-3k_4\right)C_3^2 + 4\left(4-4k_1-8k_3-k_4\right)C_2C_4 - 6\left(2(k_1+k_2+k_3-1)+k_4\right)C_5\right)$, etc.

    Finally, by substituting $k_2 = \frac{2+k_1}{4}$, $k_3 = \frac{2-k_1}{4}$, and $k_4 = -2k_1$ into the error equation (2.13), we get

    $\varepsilon_{n+1} = -\frac{1}{4}C_2C_3\left(2(k_1-4)C_2^2 + 5C_3\right)\varepsilon_n^6 + O(\varepsilon_n^7).$

    As a result, the proof has been successfully established.

    Now, upon substituting the values of the parameters $k_1, k_2, k_3$, and $k_4$, the proposed sixth-order family of methods derived from (2.1) can be formulated as follows:

    $y_n = s_n - \Omega'(s_n)^{-1}\Omega(s_n),$
    $z_n = y_n - \frac{1}{2}\Omega'(s_n)^{-1}\left[\Omega'(s_n) - \Omega'(y_n)\right]\Omega'(y_n)^{-1}\Omega(s_n),$
    $s_{n+1} = z_n - \left[kI + \frac{2+k}{4}\Omega'(y_n)^{-1}\Omega'(s_n) + \frac{2-k}{4}\Omega'(s_n)^{-1}\Omega'(y_n) - 2k\left(\Omega'(s_n)+\Omega'(y_n)\right)^{-1}\Omega'(s_n)\right]\Omega'(y_n)^{-1}\Omega(z_n), \qquad (2.14)$

    where $k \in \mathbb{R}$ is the free parameter. We shall denote this family by PFM6.
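    Under the reading of (2.14) given above, the family is straightforward to transcribe into code. The sketch below (Python/NumPy, our own transcription; treat it as a sketch rather than a definitive implementation) applies one PFM6 step repeatedly with $k = 3$ to the two-equation system used later as Problem 1:

```python
import numpy as np

def pfm6_step(F, J, s, k=0.0):
    """One iteration of the PFM6 family (2.14); k is the free parameter.

    Explicit inverses are used only for clarity of the transcription.
    """
    I = np.eye(len(s))
    Js = J(s)
    y = s - np.linalg.solve(Js, F(s))
    Jy = J(y)
    Js_inv = np.linalg.inv(Js)
    Jy_inv = np.linalg.inv(Jy)
    z = y - 0.5 * Js_inv @ (Js - Jy) @ Jy_inv @ F(s)
    B = (k * I + (2.0 + k) / 4.0 * Jy_inv @ Js
               + (2.0 - k) / 4.0 * Js_inv @ Jy
         - 2.0 * k * np.linalg.solve(Js + Jy, Js))
    return z - B @ Jy_inv @ F(z)

# Problem 1 of Section 6: F(s) = (s1 + e^{s2} - 1, 3 s1 + cos(s2) - 2)^T.
F = lambda s: np.array([s[0] + np.exp(s[1]) - 1.0,
                        3.0 * s[0] + np.cos(s[1]) - 2.0])
J = lambda s: np.array([[1.0, np.exp(s[1])],
                        [3.0, -np.sin(s[1])]])
s = np.array([1.0, 1.0])
for _ in range(8):
    s = pfm6_step(F, J, s, k=3.0)
```

    In double precision the iteration reaches machine accuracy within a few steps from $s_0 = (1, 1)^T$; the paper's reported 1000-digit results require multiple-precision arithmetic.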

    Our objective here is to generalize the proposed family of methods (2.14) by establishing a universal principle capable of enhancing lower-order methods of convergence order $p \ge 3$. Through this approach, we aim to achieve an improved order of $p+3$.

    The generalized method is characterized by the following construction:

    $y_n = s_n - \Omega'(s_n)^{-1}\Omega(s_n),$
    $z_n = \mu_p(s_n, y_n),$
    $s_{n+1} = z_n - \left[k_1 I + k_2\,\Omega'(y_n)^{-1}\Omega'(s_n) + k_3\,\Omega'(s_n)^{-1}\Omega'(y_n) + k_4\left(\Omega'(s_n)+\Omega'(y_n)\right)^{-1}\Omega'(s_n)\right]\Omega'(y_n)^{-1}\Omega(z_n). \qquad (3.1)$

    Here, the parameters $k_1, k_2, k_3$, and $k_4$ will be determined later. It is worth noting that $z_n = \mu_p(s_n, y_n)$ represents an iteration function with convergence order $p \ge 3$.

    Theorem 3.1. Suppose that $\Omega : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ is a sufficiently Fréchet-differentiable function in a neighborhood $D$ containing the root $\alpha$ of $\Omega(s) = 0$. Assuming that $\Omega'(\alpha)$ is nonsingular, the sequence $\{s_n\}$ generated by method (3.1) for $s_0 \in D$ converges to $\alpha$ with order $p+3$ for $p \ge 3$, provided that $k_2 = \frac{2+k_1}{4}$, $k_3 = \frac{2-k_1}{4}$, and $k_4 = -2k_1$.

    Proof of Theorem 3.1. We retain all of the assumptions made in Theorem 2.1; using (2.5), (2.6), (2.9), and (2.10), we have

    $\left[k_1 I + k_2\,\Omega'(y_n)^{-1}\Omega'(s_n) + k_3\,\Omega'(s_n)^{-1}\Omega'(y_n) + k_4\left(\Omega'(s_n)+\Omega'(y_n)\right)^{-1}\Omega'(s_n)\right]\Omega'(y_n)^{-1} = \left[\left(k_1+k_2+k_3+\frac{k_4}{2}\right)I + \left(2k_2-2k_3+\frac{k_4}{2}\right)C_2\varepsilon_n + \left(2(k_1+2k_2-2k_3+k_4)C_2^2 + 3\left(k_2-k_3+\frac{k_4}{4}\right)C_3\right)\varepsilon_n^2 + \left(\left(4k_1-8k_3+\frac{5}{2}k_4\right)C_2^3 - \left(4k_1+8k_2-12k_3+\frac{9}{2}k_4\right)C_2C_3 + \left(4k_2-4k_3+k_4\right)C_4\right)\varepsilon_n^3 + O(\varepsilon_n^4)\right]\Omega'(\alpha)^{-1}. \qquad (3.2)$

    Furthermore, since the iteration function $z_n = \mu_p(s_n, y_n)$ has convergence order $p$, we can introduce the following definition:

    $\tilde{\varepsilon}_n = z_n - \alpha = O(\varepsilon_n^p). \qquad (3.3)$

    Then, using (2.2) and (3.3), the expansion of $\Omega(z_n)$ about $\alpha$ is obtained as follows:

    $\Omega(z_n) = \Omega'(\alpha)\left[\tilde{\varepsilon}_n + O(\tilde{\varepsilon}_n^2)\right]. \qquad (3.4)$

    Now, using (3.2)–(3.4) in the last step of (3.1), we obtain the error equation

    $\varepsilon_{n+1} = \left(1-k_1-k_2-k_3-\frac{k_4}{2}\right)\tilde{\varepsilon}_n - \left(2k_2-2k_3+\frac{k_4}{2}\right)C_2\varepsilon_n\tilde{\varepsilon}_n - \left(2(k_1+2k_2-2k_3+k_4)C_2^2 + 3\left(k_2-k_3+\frac{k_4}{4}\right)C_3\right)\varepsilon_n^2\tilde{\varepsilon}_n - \left(\left(4k_1-8k_3+\frac{5}{2}k_4\right)C_2^3 - \left(4k_1+8k_2-12k_3+\frac{9}{2}k_4\right)C_2C_3 + \left(4k_2-4k_3+k_4\right)C_4\right)\varepsilon_n^3\tilde{\varepsilon}_n + O(\varepsilon_n^4\tilde{\varepsilon}_n) + O(\tilde{\varepsilon}_n^2). \qquad (3.5)$

    For $p \ge 3$, the method (3.1) converges to the root $\alpha$ with order $p+3$ if and only if the constants $k_1, k_2, k_3$, and $k_4$ satisfy the following system:

    $1-k_1-k_2-k_3-\frac{1}{2}k_4 = 0, \quad 2k_2-2k_3+\frac{1}{2}k_4 = 0, \quad k_1+2k_2-2k_3+k_4 = 0, \quad k_2-k_3+\frac{1}{4}k_4 = 0. \qquad (3.6)$

    Solving the system (3.6) in terms of the free parameter $k_1$ yields $k_2 = \frac{2+k_1}{4}$, $k_3 = \frac{2-k_1}{4}$, and $k_4 = -2k_1$. By substituting $k_1 = k$, the error equation (3.5) reduces to

    $\varepsilon_{n+1} = \left((4-k)C_2^3 - 2C_2C_3\right)\varepsilon_n^{p+3} + O(\varepsilon_n^{p+4}). \qquad (3.7)$

    Hence, the theorem is proved.

    Here, it is worth highlighting that, based on the proof of Theorem 3.1, we can readily derive the following significant results:

    Corollary 1. In the special case $k = k_1 = 0$, solving the system (3.6) yields $k_2 = k_3 = \frac{1}{2}$ and $k_4 = 0$. Thus, the proposed approach (3.1) reduces to the construction

    $s_{n+1} = z_n - \frac{1}{2}\left[\Omega'(y_n)^{-1}\Omega'(s_n) + \Omega'(s_n)^{-1}\Omega'(y_n)\right]\Omega'(y_n)^{-1}\Omega(z_n), \qquad (3.8)$

    where $y_n, z_n$ are defined as in (3.1). In fact, the construction (3.8) is the technique introduced by the authors of [18] in 2018.

    Remark 1. A particular case of the approach (3.8) is the following sixth-order method developed by Sharma et al. [19] in 2019, which we shall call the Sharma-Sharma-Karla method (SSKM6):

    $y_n = s_n - \Omega'(s_n)^{-1}\Omega(s_n),$
    $z_n = s_n - \frac{1}{2}\left[\Omega'(s_n)^{-1} + \Omega'(y_n)^{-1}\right]\Omega(s_n),$
    $s_{n+1} = z_n - \frac{1}{2}\left[\Omega'(s_n)^{-1} + \Omega'(y_n)^{-1}\Omega'(s_n)\Omega'(y_n)^{-1}\right]\Omega(z_n). \qquad (3.9)$

    Corollary 2. In the special case $k = k_1 = 2$, solving the system (3.6) yields $k_2 = 1$, $k_3 = 0$, and $k_4 = -4$. Thus, we obtain the approach

    $s_{n+1} = z_n - \left[2I + \Omega'(y_n)^{-1}\Omega'(s_n) - 4\left(\Omega'(s_n)+\Omega'(y_n)\right)^{-1}\Omega'(s_n)\right]\Omega'(y_n)^{-1}\Omega(z_n). \qquad (3.10)$

    Also, in the special case $k = k_1 = -2$, solving the system (3.6) yields $k_2 = 0$, $k_3 = 1$, and $k_4 = 4$. Then, we obtain the approach

    $s_{n+1} = z_n - \left[-2I + \Omega'(s_n)^{-1}\Omega'(y_n) + 4\left(\Omega'(s_n)+\Omega'(y_n)\right)^{-1}\Omega'(s_n)\right]\Omega'(y_n)^{-1}\Omega(z_n). \qquad (3.11)$

    Remark 2. Applying (3.10) to the third-order method of Liu et al. [20], developed in 2016, gives the following new sixth-order method, denoted by PFM6.1:

    $y_n = s_n - \Omega'(s_n)^{-1}\Omega(s_n),$
    $z_n = s_n - 2\left[\Omega'(u_n - v_n) + \Omega'(u_n + v_n)\right]^{-1}\Omega(s_n),$
    $s_{n+1} = z_n - \left[2I + \Omega'(y_n)^{-1}\Omega'(s_n) - 4\left(\Omega'(s_n)+\Omega'(y_n)\right)^{-1}\Omega'(s_n)\right]\Omega'(y_n)^{-1}\Omega(z_n), \qquad (3.12)$

    where $u_n = \frac{s_n + y_n}{2}$ and $v_n = \frac{y_n - s_n}{2\sqrt{3}}$.

    Also, applying (3.11) to the third-order method of Sharma and Gupta [21], developed in 2013, gives the following new sixth-order method, denoted by PFM6.2:

    $y_n = s_n - \Omega'(s_n)^{-1}\Omega(s_n),$
    $z_n = s_n - \frac{1}{2}\left(3I - \Omega'(s_n)^{-1}\Omega'(y_n)\right)\Omega'(s_n)^{-1}\Omega(s_n),$
    $s_{n+1} = z_n - \left[-2I + \Omega'(s_n)^{-1}\Omega'(y_n) + 4\left(\Omega'(s_n)+\Omega'(y_n)\right)^{-1}\Omega'(s_n)\right]\Omega'(y_n)^{-1}\Omega(z_n). \qquad (3.13)$

    The extended form of the method given by (3.1), involving $(m+2)$ steps, can be represented as:

    $y_n = s_n - \Omega'(s_n)^{-1}\Omega(s_n),$
    $z_n = \mu_p(s_n, y_n),$
    $z_n^{(1)} = z_n - \Psi(s_n, y_n)\Omega(z_n),$
    $z_n^{(2)} = z_n^{(1)} - \Psi(s_n, y_n)\Omega(z_n^{(1)}),$
    $\qquad\vdots$
    $z_n^{(m-1)} = z_n^{(m-2)} - \Psi(s_n, y_n)\Omega(z_n^{(m-2)}),$
    $z_n^{(m)} = s_{n+1} = z_n^{(m-1)} - \Psi(s_n, y_n)\Omega(z_n^{(m-1)}), \qquad (3.14)$

    where $m \in \mathbb{N}$, $z_n^{(0)} = z_n$, and $\Psi(s_n, y_n) = \left(kI + \frac{2+k}{4}\Omega'(y_n)^{-1}\Omega'(s_n) + \frac{2-k}{4}\Omega'(s_n)^{-1}\Omega'(y_n) - 2k\left(\Omega'(s_n)+\Omega'(y_n)\right)^{-1}\Omega'(s_n)\right)\Omega'(y_n)^{-1}$.

    Theorem 3.2. Given the assumptions of Theorem 3.1, the sequence $\{s_n\}$ generated by the method (3.14) with initial value $s_0 \in D$ converges to $\alpha$ with a convergence order of $p+3m$ for $p \ge 3$ and $m \in \mathbb{N}$.

    Proof of Theorem 3.2. Considering the expression given in (3.2), it follows that

    $\Psi(s_n, y_n) = \left(I + \left((k-4)C_2^3 + 2C_2C_3\right)\varepsilon_n^3 + \cdots\right)\Omega'(\alpha)^{-1}. \qquad (3.15)$

    Utilizing the Taylor series, we can express the expansion of $\Omega(z_n^{(m-1)})$ around $\alpha$ as follows:

    $\Omega(z_n^{(m-1)}) = \Omega'(\alpha)\left(\left(z_n^{(m-1)} - \alpha\right) + C_2\left(z_n^{(m-1)} - \alpha\right)^2 + \cdots\right). \qquad (3.16)$

    Then, it follows from (3.15) and (3.16) that

    $\Psi(s_n, y_n)\Omega(z_n^{(m-1)}) = \left(z_n^{(m-1)} - \alpha\right) + \left((k-4)C_2^3 + 2C_2C_3\right)\varepsilon_n^3\left(z_n^{(m-1)} - \alpha\right) + C_2\left(z_n^{(m-1)} - \alpha\right)^2 + \cdots. \qquad (3.17)$

    By applying (3.17) in the concluding step of (3.14), we acquire the following:

    $z_n^{(m)} - \alpha = \left((4-k)C_2^3 - 2C_2C_3\right)\varepsilon_n^3\left(z_n^{(m-1)} - \alpha\right) - C_2\left(z_n^{(m-1)} - \alpha\right)^2 + \cdots. \qquad (3.18)$

    Referring to (3.7), it is evident that $z_n^{(1)} - \alpha = \left((4-k)C_2^3 - 2C_2C_3\right)\varepsilon_n^{p+3} + O(\varepsilon_n^{p+4})$. Consequently, applying (3.18) with $m = 2, 3$, we derive the following:

    $z_n^{(2)} - \alpha = \left((4-k)C_2^3 - 2C_2C_3\right)\varepsilon_n^3\left(z_n^{(1)} - \alpha\right) + \cdots = \left((4-k)C_2^3 - 2C_2C_3\right)^2\varepsilon_n^{p+6} + O(\varepsilon_n^{p+7}), \qquad (3.19)$
    $z_n^{(3)} - \alpha = \left((4-k)C_2^3 - 2C_2C_3\right)\varepsilon_n^3\left(z_n^{(2)} - \alpha\right) + \cdots = \left((4-k)C_2^3 - 2C_2C_3\right)^3\varepsilon_n^{p+9} + O(\varepsilon_n^{p+10}). \qquad (3.20)$

    Continuing by induction, we obtain

    $z_n^{(m)} - \alpha = \left((4-k)C_2^3 - 2C_2C_3\right)^m\varepsilon_n^{p+3m} + O(\varepsilon_n^{p+3m+1}). \qquad (3.21)$

    This concludes the proof of Theorem 3.2.

    Remark 3. Applying the construction (3.14) with $m = 2$ to the third-order Potra-Pták method (1.3) [13] yields the following newly extended ninth-order family of methods (PFM9):

    $y_n = s_n - \Omega'(s_n)^{-1}\Omega(s_n),$
    $z_n = y_n - \Omega'(s_n)^{-1}\Omega(y_n),$
    $\hat{z}_n = z_n - \Psi(s_n, y_n)\Omega(z_n),$
    $s_{n+1} = \hat{z}_n - \Psi(s_n, y_n)\Omega(\hat{z}_n), \qquad (3.22)$

    with $\Psi(s_n, y_n) = \left[kI + \frac{2+k}{4}\Omega'(y_n)^{-1}\Omega'(s_n) + \frac{2-k}{4}\Omega'(s_n)^{-1}\Omega'(y_n) - 2k\left(\Omega'(s_n)+\Omega'(y_n)\right)^{-1}\Omega'(s_n)\right]\Omega'(y_n)^{-1}$ as in (3.14).
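    The key computational feature of the multi-step construction is that the operator $\Psi(s_n, y_n)$ is built once per iteration and reused, so each extra step costs only one new evaluation of $\Omega$. The sketch below (our own transcription of (3.22) with $p = 3$, $m = 2$, so a sketch rather than a definitive implementation) applies PFM9 with $k = 1$ to Problem 2 of Section 6:

```python
import numpy as np

def psi(J_s, J_y, k):
    """The frozen operator Psi(s_n, y_n) from (3.14), as we read it."""
    I = np.eye(J_s.shape[0])
    return (k * I + (2.0 + k) / 4.0 * np.linalg.inv(J_y) @ J_s
                  + (2.0 - k) / 4.0 * np.linalg.inv(J_s) @ J_y
            - 2.0 * k * np.linalg.solve(J_s + J_y, J_s)) @ np.linalg.inv(J_y)

def pfm9_step(F, J, s, k=1.0):
    """PFM9 (3.22): Potra-Ptak base plus two Psi-corrections (p=3, m=2)."""
    Js = J(s)
    y = s - np.linalg.solve(Js, F(s))
    z = y - np.linalg.solve(Js, F(y))      # Potra-Ptak second step
    P = psi(Js, J(y), k)                    # built once, reused below
    z_hat = z - P @ F(z)                    # each extra step: one F evaluation
    return z_hat - P @ F(z_hat)

# Problem 2 of Section 6.
F = lambda s: np.array([s[0] + s[1] - np.exp(-s[2]),
                        s[0] - np.exp(-s[1]) + s[2],
                        -np.exp(-s[0]) + s[1] + s[2]])
J = lambda s: np.array([[1.0, 1.0, np.exp(-s[2])],
                        [1.0, np.exp(-s[1]), 1.0],
                        [np.exp(-s[0]), 1.0, 1.0]])
s = np.array([1.0, 1.0, 1.0])
for _ in range(6):
    s = pfm9_step(F, J, s, k=1.0)
```

    Appending further `z - P @ F(z)` steps turns this into the general $(m+2)$-step scheme of (3.14), raising the order by three per step.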

    In this section, we extend the local convergence analysis of (2.14) (PFM6) from Section 2 to the Banach space setting. To analyze the local convergence under the Lipschitz continuity condition, we assume that there exist constants $\kappa_0 > 0$ and $\kappa > 0$ such that, for all points $s, y \in D$:

    $\Omega(\alpha) = 0, \quad \Omega'(\alpha)^{-1} \in L(Y, X), \quad \left\|\Omega'(\alpha)^{-1}\left(\Omega'(s) - \Omega'(\alpha)\right)\right\| \le \kappa_0\|s - \alpha\|, \qquad (4.1)$
    $\left\|\Omega'(\alpha)^{-1}\left(\Omega'(s) - \Omega'(y)\right)\right\| \le \kappa\|s - y\|, \qquad (4.2)$

    where $L(Y, X)$ represents the set of bounded linear operators from $Y$ to $X$.

    Lemma 4.1. Under the assumption that the operator $\Omega$ satisfies conditions (4.1) and (4.2), the following inequalities hold for all points $s \in D$:

    $\left\|\Omega'(\alpha)^{-1}\Omega'(s)\right\| \le 1 + \kappa_0\|s - \alpha\|, \qquad (4.3)$
    $\left\|\Omega'(\alpha)^{-1}\Omega(s)\right\| \le \left(1 + \kappa_0\|s - \alpha\|\right)\|s - \alpha\|. \qquad (4.4)$

    Proof of Lemma 4.1. By applying condition (4.1), we obtain:

    $\left\|\Omega'(\alpha)^{-1}\Omega'(s)\right\| = \left\|\Omega'(\alpha)^{-1}\left(\Omega'(s) - \Omega'(\alpha)\right) + I\right\| \le 1 + \left\|\Omega'(\alpha)^{-1}\left(\Omega'(s) - \Omega'(\alpha)\right)\right\| \le 1 + \kappa_0\|s - \alpha\|.$

    Again, by virtue of the mean value theorem, it follows that:

    $\left\|\Omega'(\alpha)^{-1}\Omega(s)\right\| = \left\|\Omega'(\alpha)^{-1}\left(\Omega(s) - \Omega(\alpha)\right)\right\| \le \left(1 + \kappa_0\|s - \alpha\|\right)\|s - \alpha\|.$

    The following Theorem 4.1 provides the local convergence analysis of the considered scheme (2.14) (PFM6) under the Lipschitz continuity condition.

    Theorem 4.1. Let $\Omega : D \subseteq X \to Y$ be a Fréchet-differentiable operator. Suppose that there exist $\alpha \in D$ and $k \in (-\infty, +\infty)$ such that conditions (4.1) and (4.2) are satisfied and $B(\alpha, \rho) \subseteq D$, where $\rho$ is to be determined. Then, the iterative sequence $\{s_n\}$ produced by PFM6 (2.14), starting from $s_0 \in B(\alpha, \rho)$, remains in $B(\alpha, \rho)$ for all $n \ge 0$ and converges to $\alpha$. Furthermore, the following inequalities hold for all $n \ge 0$:

    $\|y_n - \alpha\| \le \eta_1(\|s_n - \alpha\|)\|s_n - \alpha\| < \|s_n - \alpha\| < \rho, \qquad (4.5)$
    $\|z_n - \alpha\| \le \eta_2(\|s_n - \alpha\|)\|s_n - \alpha\| < \|s_n - \alpha\| < \rho, \qquad (4.6)$
    $\|s_{n+1} - \alpha\| \le \eta_3(\|s_n - \alpha\|)\|s_n - \alpha\| < \|s_n - \alpha\| < \rho, \qquad (4.7)$

    where the functions $\eta_i$ are defined below. Additionally, if there exists $R \in \left[\rho, \frac{2}{\kappa_0}\right)$ such that $\bar{B}(\alpha, R) \subseteq D$, then the limit point $\alpha$ is the unique solution of $\Omega(s) = 0$ in $\bar{B}(\alpha, R)$.

    Proof of Theorem 4.1. Given that $s_0 \in D$, and assuming that $\|s_0 - \alpha\| < \frac{1}{\kappa_0}$, (4.1) gives

    $\left\|\Omega'(\alpha)^{-1}\left(\Omega'(s_0) - \Omega'(\alpha)\right)\right\| \le \kappa_0\|s_0 - \alpha\| < 1.$

    Consequently, by applying the Banach lemma on invertible operators, $\Omega'(s_0)^{-1}$ exists, and it follows that

    $\left\|\Omega'(s_0)^{-1}\Omega'(\alpha)\right\| \le \frac{1}{1 - \kappa_0\|s_0 - \alpha\|}. \qquad (4.8)$

    Therefore, $y_0$ is well defined. Now, considering the first sub-step of (2.14) for $n = 0$, we obtain

    $y_0 - \alpha = s_0 - \alpha - \Omega'(s_0)^{-1}\Omega(s_0) = \left[\Omega'(s_0)^{-1}\Omega'(\alpha)\right]\int_0^1 \Omega'(\alpha)^{-1}\left[\Omega'(s_0) - \Omega'\left(\alpha + t(s_0 - \alpha)\right)\right](s_0 - \alpha)\,dt.$

    Taking the norm on both sides and using (4.2) and (4.8), we obtain

    $\|y_0 - \alpha\| \le \left\|\Omega'(s_0)^{-1}\Omega'(\alpha)\right\|\left\|\int_0^1 \Omega'(\alpha)^{-1}\left[\Omega'(s_0) - \Omega'\left(\alpha + t(s_0 - \alpha)\right)\right](s_0 - \alpha)\,dt\right\| \le \frac{\kappa\|s_0 - \alpha\|^2}{2\left(1 - \kappa_0\|s_0 - \alpha\|\right)} = \eta_1(\|s_0 - \alpha\|)\|s_0 - \alpha\|, \qquad (4.9)$

    where

    $\eta_1(\vartheta) = \frac{\kappa\vartheta}{2(1 - \kappa_0\vartheta)}.$

    We define the function $\omega_1(\vartheta) = \eta_1(\vartheta) - 1$. Since $\omega_1(0) = -1 < 0$ and $\omega_1(\vartheta) \to +\infty$ as $\vartheta \to \left(\frac{1}{\kappa_0}\right)^-$, the intermediate value theorem guarantees that at least one root of $\omega_1$ exists in the interval $\left(0, \frac{1}{\kappa_0}\right)$. Let $\rho_1$ represent the smallest root of $\omega_1$ within this interval. Then, we obtain

    $0 < \rho_1 < \frac{1}{\kappa_0}, \quad \text{and} \quad 0 \le \eta_1(\vartheta) < 1 \ \ \forall\,\vartheta \in [0, \rho_1). \qquad (4.10)$

    Applying (4.9) and (4.10), we arrive at the following result:

    $\|y_0 - \alpha\| \le \eta_1(\|s_0 - \alpha\|)\|s_0 - \alpha\| < \|s_0 - \alpha\|.$

    Since $y_0 \in D$, by using the assumption (4.1), we can deduce that

    $\left\|\Omega'(\alpha)^{-1}\left(\Omega'(y_0) - \Omega'(\alpha)\right)\right\| \le \kappa_0\|y_0 - \alpha\| \le \kappa_0\|s_0 - \alpha\| < 1.$

    As a result, by virtue of the Banach lemma on invertible operators, $\Omega'(y_0)^{-1}$ exists and

    $\left\|\Omega'(y_0)^{-1}\Omega'(\alpha)\right\| \le \frac{1}{1 - \kappa_0\|y_0 - \alpha\|}. \qquad (4.11)$

    Hence, $z_0$ is well defined. As such, from (2.14) for $n = 0$, we have

    $\|z_0 - \alpha\| \le \|y_0 - \alpha\| + \frac{1}{2}\left\|\Omega'(s_0)^{-1}\left(\Omega'(s_0) - \Omega'(y_0)\right)\Omega'(y_0)^{-1}\Omega(s_0)\right\| \le \|y_0 - \alpha\| + \frac{1}{2}\left\|\Omega'(s_0)^{-1}\Omega'(\alpha)\right\|\left[\left\|\Omega'(\alpha)^{-1}\left(\Omega'(s_0) - \Omega'(\alpha)\right)\right\| + \left\|\Omega'(\alpha)^{-1}\left(\Omega'(y_0) - \Omega'(\alpha)\right)\right\|\right]\left\|\Omega'(y_0)^{-1}\Omega'(\alpha)\right\|\left\|\Omega'(\alpha)^{-1}\Omega(s_0)\right\| \le \left[\eta_1(\|s_0 - \alpha\|) + \frac{\kappa_0\left(1 + \eta_1(\|s_0 - \alpha\|)\right)\|s_0 - \alpha\|}{2\left(1 - \kappa_0\|s_0 - \alpha\|\right)} \cdot \frac{1 + \kappa_0\|s_0 - \alpha\|}{1 - \kappa_0\eta_1(\|s_0 - \alpha\|)\|s_0 - \alpha\|}\right]\|s_0 - \alpha\| = \eta_2(\|s_0 - \alpha\|)\|s_0 - \alpha\|, \qquad (4.12)$

    where

    $\eta_2(\vartheta) = \eta_1(\vartheta) + \frac{\kappa_0\left(1 + \eta_1(\vartheta)\right)\left(1 + \kappa_0\vartheta\right)}{2\left(1 - \kappa_0\vartheta\right)\left(1 - \kappa_0\eta_1(\vartheta)\vartheta\right)}\vartheta.$

    We define the function $\omega_2(\vartheta) = \eta_2(\vartheta) - 1$. Clearly, there is at least one root of $\omega_2$ in the interval $(0, \rho_1)$ since $\omega_2(0) < 0$ and $\omega_2(\rho_1) > 0$. Let $\rho_2$ denote the smallest root of $\omega_2$ within this interval. Then, we obtain

    $0 < \rho_2 < \rho_1, \quad \text{and} \quad 0 \le \eta_2(\vartheta) < 1 \ \ \forall\,\vartheta \in [0, \rho_2). \qquad (4.13)$

    Now, applying (4.12) and (4.13), we arrive at the following result:

    $\|z_0 - \alpha\| \le \eta_2(\|s_0 - \alpha\|)\|s_0 - \alpha\| < \|s_0 - \alpha\|.$

    Moreover, by using (4.1) and (4.10), we have

    $\left\|\left(2\Omega'(\alpha)\right)^{-1}\left[\left(\Omega'(s_0) - \Omega'(\alpha)\right) + \left(\Omega'(y_0) - \Omega'(\alpha)\right)\right]\right\| \le \frac{1}{2}\left[\left\|\Omega'(\alpha)^{-1}\left(\Omega'(s_0) - \Omega'(\alpha)\right)\right\| + \left\|\Omega'(\alpha)^{-1}\left(\Omega'(y_0) - \Omega'(\alpha)\right)\right\|\right] \le \frac{\kappa_0}{2}\left[1 + \frac{\kappa\|s_0 - \alpha\|}{2\left(1 - \kappa_0\|s_0 - \alpha\|\right)}\right]\|s_0 - \alpha\| = \phi(\|s_0 - \alpha\|)\|s_0 - \alpha\| < 1,$

    where $\phi(\vartheta) = \frac{\kappa_0}{2}\left(1 + \frac{\kappa\vartheta}{2(1 - \kappa_0\vartheta)}\right)$.

    Consequently, by applying the Banach lemma on invertible operators, $\left(\Omega'(s_0) + \Omega'(y_0)\right)^{-1}$ exists and

    $\left\|\left(\Omega'(s_0) + \Omega'(y_0)\right)^{-1}\Omega'(\alpha)\right\| \le \frac{1}{2\left(1 - \phi(\|s_0 - \alpha\|)\|s_0 - \alpha\|\right)}. \qquad (4.14)$

    Accordingly, $s_1$ is well defined. As such, from (2.14) with $n = 0$, we have

    $\|s_1 - \alpha\| \le \|z_0 - \alpha\| + \left\|\left[kI + \frac{2+k}{4}\Omega'(y_0)^{-1}\Omega'(s_0) + \frac{2-k}{4}\Omega'(s_0)^{-1}\Omega'(y_0) - 2k\left(\Omega'(s_0)+\Omega'(y_0)\right)^{-1}\Omega'(s_0)\right]\Omega'(y_0)^{-1}\Omega(z_0)\right\| \le \|z_0 - \alpha\| + \left[|k|\left\|\left(\Omega'(s_0)+\Omega'(y_0)\right)^{-1}\Omega'(\alpha)\right\|\left(\left\|\Omega'(\alpha)^{-1}\left(\Omega'(y_0)-\Omega'(\alpha)\right)\right\| + \left\|\Omega'(\alpha)^{-1}\left(\Omega'(s_0)-\Omega'(\alpha)\right)\right\|\right) + \left|\frac{2+k}{4}\right|\left\|\Omega'(y_0)^{-1}\Omega'(\alpha)\right\|\left\|\Omega'(\alpha)^{-1}\Omega'(s_0)\right\| + \left|\frac{2-k}{4}\right|\left\|\Omega'(s_0)^{-1}\Omega'(\alpha)\right\|\left\|\Omega'(\alpha)^{-1}\Omega'(y_0)\right\|\right]\left\|\Omega'(y_0)^{-1}\Omega'(\alpha)\right\|\left\|\Omega'(\alpha)^{-1}\Omega(z_0)\right\| \le \left[1 + \left(\frac{|k|\kappa_0\|s_0-\alpha\|\left(1+\eta_1(\|s_0-\alpha\|)\right)}{2\left(1-\phi(\|s_0-\alpha\|)\|s_0-\alpha\|\right)} + \left|\frac{2+k}{4}\right|\frac{1+\kappa_0\|s_0-\alpha\|}{1-\kappa_0\eta_1(\|s_0-\alpha\|)\|s_0-\alpha\|} + \left|\frac{2-k}{4}\right|\frac{1+\kappa_0\eta_1(\|s_0-\alpha\|)\|s_0-\alpha\|}{1-\kappa_0\|s_0-\alpha\|}\right)\frac{1+\kappa_0\eta_2(\|s_0-\alpha\|)\|s_0-\alpha\|}{1-\kappa_0\eta_1(\|s_0-\alpha\|)\|s_0-\alpha\|}\right]\eta_2(\|s_0-\alpha\|)\|s_0-\alpha\| = \eta_3(\|s_0-\alpha\|)\|s_0-\alpha\|, \qquad (4.15)$

    where

    $\eta_3(\vartheta) = \left[1 + \left(\frac{|k|\kappa_0\vartheta\left(1+\eta_1(\vartheta)\right)}{2\left(1-\phi(\vartheta)\vartheta\right)} + \left|\frac{2+k}{4}\right|\frac{1+\kappa_0\vartheta}{1-\kappa_0\eta_1(\vartheta)\vartheta} + \left|\frac{2-k}{4}\right|\frac{1+\kappa_0\eta_1(\vartheta)\vartheta}{1-\kappa_0\vartheta}\right)\frac{1+\kappa_0\eta_2(\vartheta)\vartheta}{1-\kappa_0\eta_1(\vartheta)\vartheta}\right]\eta_2(\vartheta).$

    Let us define the function $\omega_3(\vartheta) = \eta_3(\vartheta) - 1$. It is evident that, with $\omega_3(0) < 0$ and $\omega_3(\rho_2) > 0$, there exists at least one root of $\omega_3$ in the interval $(0, \rho_2)$. Let $\rho$ represent the smallest root of $\omega_3$ within this interval. Then, we obtain

    $\rho < \rho_2 < \rho_1 < \frac{1}{\kappa_0}, \quad \text{and} \quad 0 \le \eta_3(\vartheta) < 1 \ \ \forall\,\vartheta \in [0, \rho). \qquad (4.16)$

    Now, applying (4.15) and (4.16), we arrive at the following result:

    $\|s_1 - \alpha\| \le \eta_3(\|s_0 - \alpha\|)\|s_0 - \alpha\| < \|s_0 - \alpha\| < \rho.$

    It follows that the theorem holds for $n = 0$. Repeating the computations above with $s_n$, $y_n$, $z_n$, $s_{n+1}$ in place of $s_0$, $y_0$, $z_0$, $s_1$, respectively, completes the induction and establishes the inequalities (4.5)–(4.7). Additionally, based on the estimate $\|s_{n+1} - \alpha\| \le \|s_n - \alpha\| < \rho$, we conclude that $s_{n+1} \in B(\alpha, \rho)$. Evidently, the function $\eta_3$ is increasing on its domain; so, writing $c = \eta_3(\|s_0 - \alpha\|) \in [0, 1)$, we have

    $\|s_{n+1} - \alpha\| \le c\|s_n - \alpha\| \le c^2\|s_{n-1} - \alpha\| \le \cdots \le c^{n+1}\|s_0 - \alpha\|.$

    Since $\lim_{n \to \infty} c^{n+1} = 0$, we obtain $\lim_{n \to \infty} s_n = \alpha$; hence, the method converges to the solution.

    To establish uniqueness, suppose that $\alpha^* \in \bar{B}(\alpha, R)$ satisfies $\Omega(\alpha^*) = 0$. Let $T = \int_0^1 \Omega'\left(\alpha + \vartheta(\alpha^* - \alpha)\right)d\vartheta$. Then, by using (4.1), we have

    $\left\|\Omega'(\alpha)^{-1}\left(T - \Omega'(\alpha)\right)\right\| \le \int_0^1 \kappa_0\left\|\alpha + \vartheta(\alpha^* - \alpha) - \alpha\right\|d\vartheta \le \frac{\kappa_0}{2}\|\alpha^* - \alpha\| \le \frac{\kappa_0}{2}R < 1.$

    Hence, $T^{-1}$ exists, and by utilizing the identity

    $0 = \Omega(\alpha^*) - \Omega(\alpha) = T(\alpha^* - \alpha), \qquad (4.17)$

    we deduce that $\alpha^* = \alpha$.

    In this section, we will use a series of numerical examples to show how well our local convergence analysis works for the proposed method PFM6 (2.14).

    To begin with, we will determine the radius of convergence for each example and subsequently compare our method with several established alternatives found in the literature. Specifically, we will focus on two sixth-order convergence schemes (schemes (1.2) and (1.3) from [22]) and one fifth-order convergence scheme (scheme from [23]). We will refer to these schemes as M1, M2, and M3, respectively.

    Example 1. [24] Let us consider $\tau$ defined on the interval $D = \left[-\frac{1}{2}, \frac{5}{2}\right]$ by

    $\tau(s) = \begin{cases} s^3\log s^2 + s^5 - s^4, & s \ne 0, \\ 0, & s = 0. \end{cases} \qquad (5.1)$

    The zero of $\tau$ is $\alpha = 1$. Also, we have

    $\tau'(s) = 3s^2\log s^2 + 5s^4 - 4s^3 + 2s^2, \quad \tau''(s) = 6s\log s^2 + 20s^3 - 12s^2 + 10s, \quad \tau'''(s) = 6\log s^2 + 60s^2 - 24s + 22.$

    Although the third derivative of $\tau$ is unbounded on $D$, the iterative method given by (2.14) still converges, according to Theorem 4.1 with $\alpha = 1$. We found that $\kappa_0 = \kappa = 96.662907$. Setting $k = 0.1$, we calculate the radii of convergence as follows:

    $\rho = 0.00249028 < \rho_2 = 0.0040526 < \rho_1 = 0.00689682.$
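    These radii are easy to reproduce numerically. Since $\eta_1(\vartheta) = 1$ is linear in $\vartheta$, the first radius has the closed form $\rho_1 = \frac{2}{\kappa + 2\kappa_0}$; $\rho_2$ is then the smallest root of $\eta_2(\vartheta) = 1$ on $(0, \rho_1)$. The sketch below (our own code; the `eta2` formula is our reconstruction of (4.12)) checks the $\rho_1$ values reported for Examples 1–3:

```python
import math

def rho1(kappa0, kappa):
    """Smallest positive root of eta_1(t) = kappa*t / (2*(1 - kappa0*t)) = 1.

    The equation is linear in t, giving the closed form 2 / (kappa + 2*kappa0).
    """
    return 2.0 / (kappa + 2.0 * kappa0)

def eta1(t, kappa0, kappa):
    return kappa * t / (2.0 * (1.0 - kappa0 * t))

def eta2(t, kappa0, kappa):
    """eta_2 as we reconstructed it from (4.12); treat as our reading."""
    e1 = eta1(t, kappa0, kappa)
    return e1 + kappa0 * (1.0 + e1) * (1.0 + kappa0 * t) * t / (
        2.0 * (1.0 - kappa0 * t) * (1.0 - kappa0 * e1 * t))

def bisect(f, a, b, iters=200):
    """Plain bisection for a monotone sign change on [a, b]."""
    for _ in range(iters):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# (kappa0, kappa, rho_1 reported in the paper) for Examples 1-3.
cases = [(96.662907, 96.662907, 0.00689682),
         (math.e - 1.0, math.exp(1.0 / (math.e - 1.0)), 0.382692),
         (7.5, 15.0, 0.0666667)]
```

    The closed form reproduces all three reported $\rho_1$ values, which also cross-checks the constants $\kappa_0, \kappa$ quoted for each example.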

    The comparison results for the radii of convergence are displayed in Table 1.

    Table 1.  Comparison of convergence radii.
    Examples M1 M2 M3 PFM6 (2.14)
    1 0.001141 0.001215 0.000916634 0.00249028
    2 0.064030 0.068190 0.0500153 0.13843
    3 0.013823 0.014756 0.0067881 0.0251314


    Example 2. [24] Consider a system of differential equations governing the dynamics of an object. These equations are given as follows:

    $\tau_1'(\omega_1) = e^{\omega_1}, \quad \tau_2'(\omega_2) = (e-1)\omega_2 + 1, \quad \tau_3'(\omega_3) = 1,$

    with the initial conditions $\tau_1(0) = \tau_2(0) = \tau_3(0) = 0$. These equations can be collectively represented by the vector function $\tau = (\tau_1, \tau_2, \tau_3)$. Let $X = Y = \mathbb{R}^3$ and $D = \bar{B}(0, 1)$. Define $\tau$ on $D$ for $\nu = (\omega_1, \omega_2, \omega_3)^T$ by

    $\tau(\nu) = \left(e^{\omega_1} - 1,\ \frac{e-1}{2}\omega_2^2 + \omega_2,\ \omega_3\right)^T.$

    The Fréchet derivative is given by

    $\tau'(\nu) = \begin{pmatrix} e^{\omega_1} & 0 & 0 \\ 0 & (e-1)\omega_2 + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$

    For $\alpha = (0, 0, 0)^T$, we have $\tau'(\alpha) = \tau'(\alpha)^{-1} = \mathrm{diag}\{1, 1, 1\}$, $\kappa_0 = e - 1$, and $\kappa = e^{\frac{1}{e-1}}$. Taking $k = 0.1$, we get

    $\rho = 0.13843 < \rho_2 = 0.225675 < \rho_1 = 0.382692.$

    The comparison results for the radii of convergence are displayed in Table 1.

    Example 3. [24] (See also [23] for details.) Let $X = Y = C[0, 1]$, the space of continuous functions defined on $[0, 1]$ equipped with the max norm, and let $D = B(0, 1)$. Consider the nonlinear integral equation of Hammerstein type, with $\tau$ defined on $D$ by

    $\tau(\nu)(s) = \nu(s) - 5\int_0^1 s u\,\nu(u)^3\,du,$

    with $\nu(s) \in C[0, 1]$. The first derivative of $\tau$ is given by

    $\tau'(\nu(\varphi))(s) = \varphi(s) - 15\int_0^1 s u\,\nu(u)^2\varphi(u)\,du, \quad \text{for each } \varphi \in D.$

    For the solution $\alpha = 0$, we obtain $\kappa_0 = 7.5$ and $\kappa = 15$. Then, using the iterative method given by (2.14) with $k = 0.1$, we get the radii of convergence as follows:

    $\rho = 0.0251314 < \rho_2 = 0.0424972 < \rho_1 = 0.0666667.$

    The comparison results for the radii of convergence are displayed in Table 1.

    As evident from the data presented in Table 1, the proposed family of methods given by PFM6 (2.14) exhibits a significantly broad convergence radius. Moreover, across all three examples, it is consistently observed that PFM6 (2.14) outperforms the other three methods by exhibiting a notably larger convergence radius.

    In this section, we apply the proposed method PFM6 (2.14), along with PFM6.1 (3.12), PFM6.2 (3.13), and PFM9 (3.22), to solve systems of nonlinear equations in $\mathbb{R}^n$; we also compare their performance with the existing methods PPM3 (1.3) and SSKM6 (3.9). In addition, the following two methods are considered for the comparison.

    The sixth-order method established by Lotfi et al. in [25] (TLM6):

    $y_n = s_n - \Omega'(s_n)^{-1}\Omega(s_n),$
    $z_n = s_n - 2\left(\Omega'(s_n) + \Omega'(y_n)\right)^{-1}\Omega(s_n),$
    $s_{n+1} = z_n - \left[\frac{7}{2}I - 4\Omega'(s_n)^{-1}\Omega'(y_n) + \frac{3}{2}\left(\Omega'(s_n)^{-1}\Omega'(y_n)\right)^2\right]\Omega'(s_n)^{-1}\Omega(z_n). \qquad (6.1)$

    The ninth-order method established by Lotfi et al. in [25] (TLM9):

    $y_n = s_n - \Omega'(s_n)^{-1}\Omega(s_n),$
    $z_n = s_n - 2\left(\Omega'(s_n) + \Omega'(y_n)\right)^{-1}\Omega(s_n),$
    $\hat{z}_n = z_n - \left[\frac{7}{2}I - 4\Omega'(s_n)^{-1}\Omega'(y_n) + \frac{3}{2}\left(\Omega'(s_n)^{-1}\Omega'(y_n)\right)^2\right]\Omega'(s_n)^{-1}\Omega(z_n),$
    $s_{n+1} = \hat{z}_n - \left[\frac{7}{2}I - 4\Omega'(s_n)^{-1}\Omega'(y_n) + \frac{3}{2}\left(\Omega'(s_n)^{-1}\Omega'(y_n)\right)^2\right]\Omega'(s_n)^{-1}\Omega(\hat{z}_n). \qquad (6.2)$

    All computations were performed in the Mathematica 12.2 programming package using multiple-precision arithmetic with 1000 significant digits. For each problem, we recorded the number of iterations ($n$) needed to converge to the root such that the stopping criterion $\|\Omega(s_n)\| < 10^{-100}$ is satisfied. Numerical tests were conducted for the following set of problems:

    Problem 1. A nonlinear system with two unknowns is given by

    $\Omega_1(s) = \left(s_1 + e^{s_2} - 1,\ 3s_1 + \cos s_2 - 2\right)^T.$

    By taking $s_0 = (1, 1)^T$ as the initial approximation, we arrive at the root:

    $\alpha = (0.367758471822148, -0.458483793003288)^T.$

    Problem 2. A nonlinear system with three unknowns is given by

    $\Omega_2(s) = \left(s_1 + s_2 - e^{-s_3},\ s_1 - e^{-s_2} + s_3,\ -e^{-s_1} + s_2 + s_3\right)^T.$

    By choosing $s_0 = (1, 1, 1)^T$ as the initial approximation, we get the root:

    $\alpha = (0.351733711249196, 0.351733711249196, 0.351733711249196)^T.$
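    Both reported roots can be verified directly by substituting them into the systems (note that the second component of the Problem 1 root is negative, since it satisfies $s_2 = \log(1 - s_1)$):

```python
import numpy as np

# Residual check of the reported roots of Problems 1 and 2
# (root components transcribed from the text above).
omega1 = lambda s: np.array([s[0] + np.exp(s[1]) - 1.0,
                             3.0 * s[0] + np.cos(s[1]) - 2.0])
omega2 = lambda s: np.array([s[0] + s[1] - np.exp(-s[2]),
                             s[0] - np.exp(-s[1]) + s[2],
                             -np.exp(-s[0]) + s[1] + s[2]])
alpha1 = np.array([0.367758471822148, -0.458483793003288])
alpha2 = np.full(3, 0.351733711249196)
```

    In double precision both residual norms come out at roundoff level, confirming the tabulated digits.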

    Table 2 provides a comprehensive overview of the comparison results for the various methods applied to the test problems. The table shows the residual error $\|\Omega(s_n)\|$, the error in the consecutive iterations $\|s_n - s_{n-1}\|$, and the approximated computational order of convergence (COC) after the methods satisfy the stopping criterion. The COC is calculated as follows [26]:

    $\mathrm{COC} \approx \frac{\log\left(\|s_n - s_{n-1}\| / \|s_{n-1} - s_{n-2}\|\right)}{\log\left(\|s_{n-1} - s_{n-2}\| / \|s_{n-2} - s_{n-3}\|\right)}. \qquad (6.3)$
    Table 2.  Comparison of the methods on the test Problems 1 and 2.
    Method | System | n | ‖Ω(s_n)‖ | ‖s_n − s_{n−1}‖ | μ | COC
    PPM3 | Ω1 | 7 | 1.3604×10^{−256} | 5.2810×10^{−86} | 1.6899 | 3.000
    SSKM6 | Ω1 | 4 | 1.6132×10^{−218} | 6.7182×10^{−37} | 0.3041 | 6.000
    TLM6 | Ω1 | 5 | 8.0714×10^{−436} | 6.3215×10^{−74} | 4020.6 | 6.000
    TLM9 | Ω1 | 4 | 2.1109×10^{−355} | 8.4589×10^{−41} | 6.1887×10^{6} | 9.036
    PFM6 (k=3) | Ω1 | 4 | 2.0440×10^{−268} | 4.7518×10^{−45} | 0.0238 | 6.024
    PFM6.1 | Ω1 | 4 | 1.0271×10^{−219} | 4.2262×10^{−37} | 0.2486 | 6.015
    PFM6.2 | Ω1 | 4 | 8.3154×10^{−147} | 3.8408×10^{−25} | 4.4963 | 6.000
    PFM9 (k=1) | Ω1 | 4 | 1.1198×10^{−876} | 4.9987×10^{−98} | 0.9799 | 9.000
    PPM3 | Ω2 | 5 | 1.6768×10^{−238} | 1.7648×10^{−79} | 1.1285×10^{−2} | 3.000
    SSKM6 | Ω2 | 3 | 5.3603×10^{−215} | 1.1794×10^{−35} | 7.3680×10^{−6} | 6.000
    TLM6 | Ω2 | 4 | 1.8557×10^{−140} | 2.0466×10^{−35} | 1.7105×10^{15} | 4.000
    TLM9 | Ω2 | 4 | 7.7971×10^{−231} | 1.1938×10^{−46} | 1.1856×10^{35} | 5.000
    PFM6 (k=3) | Ω2 | 3 | 2.7939×10^{−252} | 9.0979×10^{−42} | 1.8225×10^{−6} | 6.000
    PFM6.1 | Ω2 | 3 | 6.9583×10^{−217} | 5.4463×10^{−36} | 9.8625×10^{−6} | 6.000
    PFM6.2 | Ω2 | 3 | 3.7879×10^{−196} | 1.3967×10^{−32} | 1.8871×10^{−5} | 6.000
    PFM9 (k=1) | Ω2 | 3 | 4.0965×10^{−706} | 2.7339×10^{−78} | 1.7758×10^{−8} | 9.000


    Moreover, the order of convergence can be confirmed through the analysis of the asymptotic behavior of the convergence rate, i.e., the asymptotic error constant ($\mu$), by using the following formula [26]:

    $\mu \approx \frac{\|s_n - s_{n-1}\|}{\|s_{n-1} - s_{n-2}\|^p}, \quad \text{for } p = 3, 6, 9. \qquad (6.4)$
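    Both diagnostics are simple to compute from stored iterates. The sketch below (our own helper names) applies (6.3) and (6.4) to a synthetic exactly-third-order scalar sequence, i.e., illustrative data rather than the paper's iterates:

```python
import math

def coc(iterates):
    """Computational order of convergence (6.3) from the last four iterates."""
    d = [abs(iterates[i + 1] - iterates[i]) for i in range(len(iterates) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

def asymptotic_constant(iterates, p):
    """Asymptotic error constant (6.4) for an assumed order p."""
    d = [abs(iterates[i + 1] - iterates[i]) for i in range(len(iterates) - 1)]
    return d[-1] / d[-2] ** p

# Synthetic sequence toward alpha = 0 with e_{n+1} = e_n^3 (order 3, mu = 1).
errors = [1e-1, 1e-3, 1e-9, 1e-27]
iterates = [0.0 + e for e in errors]
```

    For this sequence the estimated COC is approximately 3 and the estimated constant is approximately 1, as expected; in high-order runs such as those in Table 2 the same formulas are evaluated in multiple-precision arithmetic.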

    Furthermore, we provide in Table 3 a comparison of the CPU time (measured in seconds) required by each method to meet the stopping criterion. We averaged the CPU running time over three trials to improve accuracy. All computations were carried out on a computer equipped with an Intel(R) Core(TM) i5-10210U CPU @ 2.11 GHz and 8 GB of RAM running Windows 11.

    Table 3.  CPU time comparison for the methods on Problems 1 and 2.
    Method Ω1(s) Ω2(s)
    PPM3 0.02144 0.02281
    SSKM6 0.03867 0.02352
    TLM6 0.02836 0.05682
    TLM9 0.02969 0.07172
    PFM6 0.01791 0.02568
    PFM6.1 0.02076 0.02128
    PFM6.2 0.02185 0.07753
    PFM9 0.02490 0.03720

     | Show Table
    DownLoad: CSV

    Our numerical results, as presented in Tables 2 and 3, demonstrate that the proposed methods exhibit highly competitive performance. They converge rapidly toward the root in the smallest number of iterations ($n$), achieve better accuracy in terms of minimal residual error $\|\Omega(s_n)\|$ and error in the consecutive iterations $\|s_n - s_{n-1}\|$, and consume less CPU time than well-known existing methods. Additionally, the computed COC aligns with the theoretical convergence order of the newly proposed methods.

    In this section, we offer a graphical analysis of our newly proposed scheme PFM6 (k=0.001) (2.14), along with PFM6.1 (3.12), PFM6.2 (3.13), and PFM9 (k=0.001) (3.22), against the established methods PPM3 (1.3), SSKM6 (3.9), TLM6 (6.1), and TLM9 (6.2). The comparison is based on their dynamical behavior on the Cartesian plane, viewed through their basins of attraction. These basins not only serve as a visual comparative tool but also shed light on the convergence and stability attributes of each method. We focus on the following systems of nonlinear polynomial equations for the analysis:

    (i) P1(s) = (s_1^2 − 1, s_2^2 − 1)^T.

    (ii) P2(s) = (s_1^2 + s_2^2 − 1, (1/4)s_1^2 + 4s_2^2 − 1)^T.

    (iii) P3(s) = (3s_1^2 s_2 − s_2^3, s_1^3 − 3s_1 s_2^2 − 1)^T.

    For this graphical analysis, we employed the rectangular region S = [−2, 2] × [−2, 2] ⊂ R^2 in the Cartesian plane, subdivided into a 401×401 grid of points. Each grid point serves as an initial point for the iterations and is assigned a color indicating the specific real root to which the iterative method converges when started from that point. The roots are marked by small white circles in the plots. Points that fail to converge to within the tolerance of 10^{−3} in at most 80 iterations are marked in black, denoting them as divergent points. Additionally, the brightness of the color within each basin indicates the speed of convergence, with brighter hues representing faster convergence and darker shades a slower rate. The basins of attraction for the methods under evaluation are displayed in Figures 1–3. To supplement this, Table 4 reports the number of divergent points, out of the 401×401 grid of initial points, when the methods are applied to P_i(s), i = 1, 2, 3.
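The classification procedure described above can be sketched as follows. This is a minimal version (not the authors' code) that uses Newton's method as a stand-in for the compared schemes, a coarser grid than the 401×401 one used in the paper, and the fact that P3(s) = 0 is equivalent to z^3 = 1 for z = s_1 + i·s_2, so the iteration can run in complex arithmetic:

```python
import cmath

def basin_grid(f, df, roots, lo=-2.0, hi=2.0, n=41, tol=1e-3, max_iter=80):
    """Map each grid point to (root_index, iterations); root_index is -1
    for points classified as divergent. Newton's method stands in for
    the iterative schemes compared in the text."""
    grid = {}
    for i in range(n):
        for j in range(n):
            z = complex(lo + (hi - lo) * i / (n - 1),
                        lo + (hi - lo) * j / (n - 1))
            label, iters = -1, max_iter
            for k in range(max_iter):
                dz = df(z)
                if dz == 0:          # singular derivative: give up
                    break
                z -= f(z) / dz
                hit = next((r for r, w in enumerate(roots)
                            if abs(z - w) < tol), None)
                if hit is not None:
                    label, iters = hit, k + 1
                    break
            grid[(i, j)] = (label, iters)
    return grid

# P3(s) = (3 s1^2 s2 - s2^3, s1^3 - 3 s1 s2^2 - 1)^T is z^3 - 1 = 0
# in disguise; its roots are the three cube roots of unity
roots = [cmath.exp(2j * cmath.pi * r / 3) for r in range(3)]
basins = basin_grid(lambda z: z ** 3 - 1, lambda z: 3 * z ** 2, roots)
```

The iteration count stored per point is what drives the brightness shading in the plots; a plotting step (e.g., coloring by `label` and shading by `iters`) is omitted here.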

    Figure 1.  Basins of attraction on P1(s).
    Figure 2.  Basins of attraction on P2(s).
    Figure 3.  Basins of attraction on P3(s).
    Table 4.  Number of divergent points for the compared methods when applied to P1(s), P2(s) and P3(s).
    P(s) PPM3 SSKM6 TLM6 TLM9 PFM6 PFM6.1 PFM6.2 PFM9
    P1(s) 17 1 5425 37081 1 1 17 1601
    P2(s) 7 1 3593 25057 1 1 5 875
    P3(s) 77 20 44 5974 1 7685 18 4473


    Figures 1–3 reveal key insights into the performance of the methods based on their basins of attraction. Our proposed methods exhibit strong performance with large basins, demonstrating their robustness by yielding a notably low count of divergent points. PFM6 emerges as the top performer, with SSKM6 following closely behind and offering a strong challenge to our proposed approach. On the other hand, PFM9 and TLM9 underperform, indicating that a higher order does not guarantee better convergence or stability, as evidenced by their small basins and large numbers of divergent points. Intriguingly, while PFM6.1 excels and matches the performance of PFM6 and SSKM6 on P1(s) and P2(s), it struggles significantly with P3(s), lagging behind the other methods and delivering the poorest performance in this scenario. These observations are corroborated by the divergent-point counts in Table 4.

    In this paper, we have presented a family of three-step iterative methods with sixth-order convergence for solving systems of nonlinear equations. The proposed methods are based on a novel approach to enhancing the convergence order of iterative methods. We have also proposed a three-step scheme with convergence order p + 3 (for p ≥ 3) and extended it to a generalized (m+2)-step scheme by merely incorporating one additional function evaluation, thus achieving convergence orders up to p + 3m, m ∈ N. We have provided a thorough local convergence analysis and numerical experiments to validate the theoretical findings. Lastly, we have showcased the performance of these methods through the analysis of their basins of attraction and their application to systems of nonlinear equations.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors gratefully acknowledge the University Grants Commission (UGC), New Delhi, India, for providing financial assistance to carry out this work. Open access of this article is supported by the Technical University of Cluj-Napoca.

    The authors declare no conflict of interest.



    [1] J. Ortega, W. Rheinboldt, Iterative solution of nonlinear equations in several variables, Philadelphia: Society for Industrial and Applied Mathematics, 2000. http://dx.doi.org/10.1137/1.9780898719468
    [2] O. Ogbereyivwe, K. Muka, Multistep quadrature based methods for nonlinear system of equations with singular Jacobian, Journal of Applied Mathematics and Physics, 7 (2019), 702–725. http://dx.doi.org/10.4236/jamp.2019.73049 doi: 10.4236/jamp.2019.73049
    [3] H. Abro, M. Shaikh, A new family of twentieth order convergent methods with applications to nonlinear systems in engineering, Mehran Univ. Res. J. Eng., 42 (2023), 165–176. http://dx.doi.org/10.22581/muet1982.2301.15 doi: 10.22581/muet1982.2301.15
    [4] R. Behl, P. Maroju, S. Motsa, Efficient family of sixth-order methods for nonlinear models with its dynamics, Int. J. Comp. Meth., 16 (2019), 1840008. http://dx.doi.org/10.1142/S021987621840008X doi: 10.1142/S021987621840008X
    [5] G. Thangkhenpau, S. Panday, L. Bolunduţ, L. Jäntschi, Efficient families of multi-point iterative methods and their self-acceleration with memory for solving nonlinear equations, Symmetry, 15 (2023), 1546. http://dx.doi.org/10.3390/sym15081546 doi: 10.3390/sym15081546
    [6] I. Argyros, D. Sharma, C. Argyros, S. Parhi, S. Sunanda, Extended iterative schemes based on decomposition for nonlinear models, J. Appl. Math. Comput., 68 (2022), 1485–1504. http://dx.doi.org/10.1007/s12190-021-01570-5 doi: 10.1007/s12190-021-01570-5
    [7] H. Wang, S. Li, A family of derivative-free methods for nonlinear equations, Rev. Mat. Complut., 24 (2011), 375–389. http://dx.doi.org/10.1007/s13163-010-0044-5 doi: 10.1007/s13163-010-0044-5
    [8] G. Thangkhenpau, S. Panday, Efficient families of multipoint iterative methods for solving nonlinear equations, Eng. Let., 31 (2023), 574–583.
    [9] S. Kumar, J. Bhagwan, L. Jäntschi, Optimal derivative-free one-point algorithms for computing multiple zeros of nonlinear equations, Symmetry, 14 (2022), 1881. http://dx.doi.org/10.3390/sym14091881 doi: 10.3390/sym14091881
    [10] A. Singh, J. Jaiswal, Several new third-order and fourth-order iterative methods for solving nonlinear equations, Int. J. Eng. Math., 2014 (2014), 828409. http://dx.doi.org/10.1155/2014/828409 doi: 10.1155/2014/828409
    [11] G. Thangkhenpau, S. Panday, S. Mittal, L. Jäntschi, Novel parametric families of with and without memory iterative methods for multiple roots of nonlinear equations, Mathematics, 11 (2023), 2036. http://dx.doi.org/10.3390/math11092036 doi: 10.3390/math11092036
    [12] M. Dehghan, A. Shirilord, Three-step iterative methods for numerical solution of systems of nonlinear equations, Eng. Comput., 38 (2022), 1015–1028. http://dx.doi.org/10.1007/s00366-020-01072-1 doi: 10.1007/s00366-020-01072-1
    [13] F. Potra, V. Pták, Nondiscrete induction and iterative processes, London: Pitman Advanced Pub. Program, 1984.
    [14] S. Arora, S. Singh, Butterfly optimization algorithm: a novel approach for global optimization, Soft Comput., 23 (2019), 715–734. http://dx.doi.org/10.1007/s00500-018-3102-4 doi: 10.1007/s00500-018-3102-4
    [15] H. Shehadeh, I. Ahmedy, M. Idris, Empirical study of sperm swarm optimization algorithm, In: Intelligent systems and applications, Cham: Springer, 2018, 1082–1104. http://dx.doi.org/10.1007/978-3-030-01057-7_80
    [16] R. Sihwail, O. Said Solaiman, K. Zainol Ariffin, New robust hybrid Jarratt-butterfly optimization algorithm for nonlinear models, J. King Saud Univ.-Com., 34 (2022), 8207–8220. http://dx.doi.org/10.1016/j.jksuci.2022.08.004 doi: 10.1016/j.jksuci.2022.08.004
    [17] O. Said Solaiman, R. Sihwail, H. Shehadeh, I. Hashim, K. Alieyan, Hybrid Newton–sperm swarm optimization algorithm for nonlinear systems, Mathematics, 11 (2023), 1473. http://dx.doi.org/10.3390/math11061473 doi: 10.3390/math11061473
    [18] X. Xiao, H. Yin, Accelerating the convergence speed of iterative methods for solving nonlinear systems, Appl. Math. Comput., 333 (2018), 8–19. http://dx.doi.org/10.1016/j.amc.2018.03.108 doi: 10.1016/j.amc.2018.03.108
    [19] R. Sharma, J. Sharma, N. Kalra, A modified Newton–Özban composition for solving nonlinear systems, Int. J. Comp. Meth., 17 (2020), 1950047. http://dx.doi.org/10.1142/S0219876219500476 doi: 10.1142/S0219876219500476
    [20] Z. Liu, Q. Zheng, C. Huang, Third- and fifth-order Newton–Gauss methods for solving nonlinear equations with n variables, Appl. Math. Comput., 290 (2016), 250–257. http://dx.doi.org/10.1016/j.amc.2016.06.010 doi: 10.1016/j.amc.2016.06.010
    [21] J. Sharma, P. Gupta, On some efficient techniques for solving systems of nonlinear equations, Advances in Numerical Analysis, 2013 (2013), 252798. http://dx.doi.org/10.1155/2013/252798 doi: 10.1155/2013/252798
    [22] I. Argyros, D. Sharma, C. Argyros, Extended efficient high convergence order schemes for equations, Applicationes Mathematicae, in press. http://dx.doi.org/10.4064/am2444-2-2023
    [23] B. Panday, J. Jaiswal, On the local convergence of modified Homeier-like method in Banach spaces, Numer. Analys. Appl., 11 (2018), 332–345. http://dx.doi.org/10.1134/S1995423918040067 doi: 10.1134/S1995423918040067
    [24] I. Argyros, S. George, Ball analysis for an efficient sixth convergence order scheme under weaker conditions, Advances in the Theory of Nonlinear Analysis, 5 (2021), 445–453. http://dx.doi.org/10.31197/atnaa.746959 doi: 10.31197/atnaa.746959
    [25] T. Lotfi, P. Bakhtiari, A. Cordero, K. Mahdiani, J. Torregrosa, Some new efficient multipoint iterative methods for solving nonlinear systems of equations, Int. J. Comput. Math., 92 (2015), 1921–1934. http://dx.doi.org/10.1080/00207160.2014.946412 doi: 10.1080/00207160.2014.946412
    [26] A. Cordero, J. Torregrosa, Variants of Newton's method using fifth-order quadrature formulas, Appl. Math. Comput., 190 (2007), 686–698. http://dx.doi.org/10.1016/j.amc.2007.01.062 doi: 10.1016/j.amc.2007.01.062
© 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)