Research article

Algorithms of predictor-corrector type with convergence and stability analysis for solving nonlinear systems

  • Many researchers have proposed iterative algorithms for nonlinear equations and systems of nonlinear equations; similarly, in this paper, we developed two two-step algorithms of the predictor-corrector type. A combination of Taylor's series and the composition approach was used. One of the algorithms had an eighth order of convergence and a high-efficiency index of approximately 1.5157, which was higher than that of some existing algorithms, while the other possessed fourth-order convergence. The convergence analysis was carried out in both senses, that is, local and semi-local convergence. Various complex polynomials of different degrees were considered for visual analysis via the basins of attraction. We analyzed and compared the proposed algorithms with other existing algorithms having the same features. The visual results showed that the modified algorithms had a higher convergence rate compared to existing algorithms. Real-life systems related to chemistry, astronomy, and neurology were used in the numerical simulations. The numerical simulations of the test problems revealed that the proposed algorithms surpassed similar existing algorithms established in the literature.

    Citation: Dalal Khalid Almutairi, Ioannis K. Argyros, Krzysztof Gdawiec, Sania Qureshi, Amanullah Soomro, Khalid H. Jamali, Marwan Alquran, Asifa Tassaddiq. Algorithms of predictor-corrector type with convergence and stability analysis for solving nonlinear systems[J]. AIMS Mathematics, 2024, 9(11): 32014-32044. doi: 10.3934/math.20241538

    Related Papers:

    [1] Zhensheng Yu, Peixin Li . An active set quasi-Newton method with projection step for monotone nonlinear equations. AIMS Mathematics, 2021, 6(4): 3606-3623. doi: 10.3934/math.2021215
    [2] Adisorn Kittisopaporn, Pattrawut Chansangiam . Approximate solutions of the $ 2 $D space-time fractional diffusion equation via a gradient-descent iterative algorithm with Grünwald-Letnikov approximation. AIMS Mathematics, 2022, 7(5): 8471-8490. doi: 10.3934/math.2022472
    [3] Sani Aji, Poom Kumam, Aliyu Muhammed Awwal, Mahmoud Muhammad Yahaya, Kanokwan Sitthithakerngkiet . An efficient DY-type spectral conjugate gradient method for system of nonlinear monotone equations with application in signal recovery. AIMS Mathematics, 2021, 6(8): 8078-8106. doi: 10.3934/math.2021469
    [4] Eunjung Lee, Dojin Kim . Stability analysis of the implicit finite difference schemes for nonlinear Schrödinger equation. AIMS Mathematics, 2022, 7(9): 16349-16365. doi: 10.3934/math.2022893
    [5] Yulin Cheng, Jing Gao . An efficient augmented memoryless quasi-Newton method for solving large-scale unconstrained optimization problems. AIMS Mathematics, 2024, 9(9): 25232-25252. doi: 10.3934/math.20241231
    [6] Murat A. Sultanov, Vladimir E. Misilov, Makhmud A. Sadybekov . Numerical method for solving the subdiffusion differential equation with nonlocal boundary conditions. AIMS Mathematics, 2024, 9(12): 36385-36404. doi: 10.3934/math.20241726
    [7] Hui Zhu, Liangcai Mei, Yingzhen Lin . A new algorithm based on compressed Legendre polynomials for solving boundary value problems. AIMS Mathematics, 2022, 7(3): 3277-3289. doi: 10.3934/math.2022182
    [8] Habibu Abdullahi, A. K. Awasthi, Mohammed Yusuf Waziri, Issam A. R. Moghrabi, Abubakar Sani Halilu, Kabiru Ahmed, Sulaiman M. Ibrahim, Yau Balarabe Musa, Elissa M. Nadia . An improved convex constrained conjugate gradient descent method for nonlinear monotone equations with signal recovery applications. AIMS Mathematics, 2025, 10(4): 7941-7969. doi: 10.3934/math.2025365
    [9] Rubayyi T. Alqahtani, Jean C. Ntonga, Eric Ngondiep . Stability analysis and convergence rate of a two-step predictor-corrector approach for shallow water equations with source terms. AIMS Mathematics, 2023, 8(4): 9265-9289. doi: 10.3934/math.2023465
    [10] Li Wen, Feng Yin, Yimou Liao, Guangxin Huang . A greedy average block Kaczmarz method for the large scaled consistent system of linear equations. AIMS Mathematics, 2022, 7(4): 6792-6806. doi: 10.3934/math.2022378
  • Many researchers have proposed iterative algorithms for nonlinear equations and systems of nonlinear equations; similarly, in this paper, we developed two two-step algorithms of the predictor-corrector type. A combination of Taylor's series and the composition approach was used. One of the algorithms had an eighth order of convergence and a high-efficiency index of approximately 1.5157, which was higher than that of some existing algorithms, while the other possessed fourth-order convergence. The convergence analysis was carried out in both senses, that is, local and semi-local convergence. Various complex polynomials of different degrees were considered for visual analysis via the basins of attraction. We analyzed and compared the proposed algorithms with other existing algorithms having the same features. The visual results showed that the modified algorithms had a higher convergence rate compared to existing algorithms. Real-life systems related to chemistry, astronomy, and neurology were used in the numerical simulations. The numerical simulations of the test problems revealed that the proposed algorithms surpassed similar existing algorithms established in the literature.



    In the field of applied mathematics, solving algebraic and non-algebraic nonlinear equations is an open challenge for mathematicians and scientists. The exact solution of many nonlinear equations is nearly impossible. Therefore, it is important to develop new algorithms for dealing with such equations. Hence, researchers always keep searching for new numerical algorithms for getting the approximate solution of the nonlinear equations of the following type (if taken in univariate case):

    ψ(x)=0, (1.1)

    where ψ is a Fréchet-differentiable operator defined on a nonempty, open convex subset S of a Banach space B with values in a Banach space B. Similarly, if taken into the multivariate case (a system of n nonlinear equations in n unknowns) then one will have the following structure in its vector form:

    ψ(x)=0, (1.2)

    where ψ:RnRn.

    The Newton-Raphson (NR) algorithm is a common iterative algorithm for finding approximate solutions to nonlinear equations with variables that have real values [1]. The fundamental notion of this concept is centered on the use of linear approximations. When presented with an initial estimate for the root of a function, denoted as ψ(x)=0, it is possible to create a linear approximation of the function in the vicinity of this estimate by employing the Taylor series. This approximation facilitates the formulation of the subsequent estimate for the root, which is then iteratively revised to approximate the solution. The algorithm commences by establishing an initial approximation and thereafter iteratively enhances it through the evaluation of the function and its derivative. The aforementioned approach is notable for its expeditious convergence in close proximity to the true root, its simplicity, and its wide-ranging applicability across functions.

    Nevertheless, the NR approach does have its limitations. It is worth noting that the convergence of the algorithm is not always assured and is strongly dependent on the initial estimate. The computational demand or complexity of computing the derivative at each iteration can be significant for certain functions. Moreover, it is worth noting that the procedure may encounter difficulties or deviate from its intended path when launched in proximity to an inflection point or when the derivative approaches zero. When confronted with a function that has numerous roots, the outcome of the approach can differ, as it may converge to any of these roots depending on the initial approximation. To enhance its effectiveness, it is crucial to carefully choose the initial estimate and be aware of the potential constraints of the procedure. Adjustments, like introducing a damping factor, can also be incorporated to enhance its robustness. The classical Newton's technique to find the approximate solution of the non-linear equation (1.1) is given as:

    xj+1=xjψ(xj)ψ(xj),j=1,2,. (1.3)

    This second-order one-step approach has two function evaluations per iteration. The theoretical limit of the one-point techniques is beaten by the multi-step iterative approaches of both higher convergence order and better efficiency index. The formula for the efficiency index is κ=r1/p, where r is the order of the algorithm and p is the number of function evaluations per iteration. The authors proposed solving a system of two equations to solve a scalar problem in [2]. So, the linked system's solutions on the identity line solve the problem. In problematic circumstances where the scalar Newton approach fails, this strategy succeeds. The authors in [3], proposed an alternate algorithm for solving nonlinear equation systems when Newton's algorithm fails. Based on the paper [2], the proposed technique extends systems. The theory behind Newton's algorithm is used to solve an associated system to approximate a system of equations. Many such iterative algorithms have been published.

    Researchers have focused on improving the convergence order and reducing function evaluations, leading to the development of multi-step iterative algorithms. The authors introduced a three-step iterative nonlinear approach for nonlinear equations and systems in the paper [4]. Iterations of the proposed technique need three function evaluations and two first-order derivative checks. The theory proves the proposed strategy is sixth-order convergent. The suggested algorithm is compared to others based on error distributions, computational efficiency, and CPU times. Qureshi et al. in [5] combined existing algorithms to create an optimal fourth-order algorithm for solving scalar and vector forms of problems. A new root-finding algorithm [6] that combines forward and finite-difference methods, is efficient, derivative-free, and cheaper per iteration. Quantitative and graphical studies reveal that the approach is quintic-order convergent and outperforms previous algorithms. An eighth-order three-step algorithm after merging second-order NR with an existing fourth-order algorithm has been devised in [7] that is later proven to be efficient while solving single-variable nonlinear physical models. In a research study [8], a twelfth-order numerical algorithm was proposed to solve the nonlinear equations of the type (1.1), however, the algorithm was four-step and thus required seven function evaluations per iteration. Algorithms for solving nonlinear algebraic equations are also needed while discretizing ordinary and partial differential equations [9,10,11,12].

    The following are some of the well-known iterative algorithms having fourth- and eighth-order convergence selected for comparison with algorithms developed in this research work. For example, Hueso et al. in [13] proposed an iterative fourth-order algorithm, i.e., Jarratt's method. Newton's step is used as a predictor of the following form and abbreviated as PJNM:

    yj=xj23ψ(xj)ψ(xj),xj+1=xjψ(xj)ψ(xj)3ψ(yj)+ψ(xj)6ψ(yj)2ψ(xj), (1.4)

    where j=1,2,.

    Behl in 2015 [14] constructed an optimal algorithm of fourth-order convergence of the following form and abbreviated as KTNM:

    yj=xjψ(xj)ψ(xj),xj+1=yjψ(yj)ψ(xj)ψ(xj)+2ψ(yj)ψ(xj), (1.5)

    where j=1,2,.

    Similarly, Özban and Kaya in [15] proposed two iterative fourth-order algorithms of the following form and abbreviated as CNM1 and CNM2:

    yj=xj2ψ(xj)3ψ(xj),xj+1=xj16ψ(yj)29ψ(xj)2+22ψ(xj)ψ(yj)+3ψ(yj)2ψ(xj)ψ(yj), (1.6)

    and

    yj=xj2ψ(xj)3ψ(xj),xj+1=xj5ψ(xj)212ψ(xj)ψ(yj)+15ψ(yj)28ψ(yj)2ψ(xj)ψ(yj), (1.7)

    where j=1,2,.

    In 2021, Kong-ied proposed a new eight-order algorithm [16], abbreviated as ONM:

    yj=xjψ(xj)ψ(xj),zj=yjψ(xj)2ψ(yj)ψ(xj)2ψ(xj)2ψ(xj)ψ(xj)ψ(yj)+ψ(xj)ψ(yj)2,xj+1=zjψ(zj)ψ(zj), (1.8)

    where j=1,2,.

    Motivated by the current studies in this direction, we attempt to propose two algorithms having eighth- and fourth-order convergence for solving nonlinear models of the type (1.1). The development of the algorithms was extensively supported by alternating Taylor's series expansion and the classical Halley algorithm. Moreover, the developed algorithms are equally applicable to both univariate and multivariate cases.

    This article is structured as follows: In Section 2, we present the basic construction of both algorithms including their local convergence analysis based on Taylor's expansion, while Section 3 presents the local and semilocal convergence analysis for the proposed algorithms under consideration. A detailed visual analysis via the polynomiography of the algorithms is presented in Section 4. To prove the better performance of the proposed algorithms, some numerical experiments (single-variable and system), including both academic and real-life situations, are conducted in Section 5, whereas the concluding remarks with some future directions are described in Section 6.

    In this section, we present the construction of the proposed two algorithms. Moreover, we perform the convergence analysis of the algorithms via Taylor's expansion.

    Suppose that ψ is a real function defined on the interval WR. Let ξW be an exact simple zero for the non-linear equation ψ(x)=0 and further let xj be an initial guess close to the exact root of the nonlinear equation. By the Taylor series expansion, the function ψ is expanded to three terms around the point xj as follows:

    ψ(x)=ψ(xj)+(xxj)ψ(xj)+(xxj)22!ψ(xj)+O(xxj)3. (2.1)

    For getting the next approximation xj+1 for the root of ψ(x)=0 in Eq (2.1), we assume that ψ(xj+1)=0. Hence, we get

    ψ(xj)+(xj+1xj)ψ(xj)+(xj+1xj)22!ψ(xj)=0. (2.2)

    After simplification, we get

    xj+1=xjψ(xj)ψ(xj)(xj+1xj)22ψ(xj)ψ(xj). (2.3)

    From [17], we have

    xj+1xj=2ψ(xj)ψ(xj)2ψ2(xj)ψ(xj)ψ(xj). (2.4)

    Substituting Eq (2.4) into Eq (2.3), we get

    xj+1=xjψ(xj)ψ(xj)(2ψ(xj)ψ(xj)2ψ2(xj)ψ(xj)ψ(xj))22ψ(xj)ψ(xj). (2.5)

    After simplification, we get

    xj+1=xj[ψ(xj)ψ(xj)+2ψ2(xj)ψ(xj)ψ(xj)[2ψ2(xj)ψ(xj)ψ(xj)]2]. (2.6)

    Using the second-order Newton-Raphson algorithm as a predictor and the Eq (2.6) as a corrector, we develop a predictor-corrector (PC) type algorithm as given below:

    yj=xjψ(xj)ψ(xj),xj+1=yj[ψ(yj)ψ(yj)+2ψ2(yj)ψ(yj)ψ(yj)[2ψ2(yj)ψ(yj)ψ(yj)]2], (2.7)

    where j=1,2,. The new PC numerical algorithm given by (2.7) has an eighth-order of convergence and is denoted as PCNM8. The proposed PCNM8 algorithm has five function evaluations (two evaluations of functions, two first-order derivatives, and one second-order derivative), and the efficiency index of this algorithm is r1/51.5157, where r stands for the order of the algorithm. It may be noted that the efficiency index of the proposed eighth-order algorithm is higher than the classical NR algorithm.

    In several research studies, researchers have employed different strategies, including finite differences, weight functions, regularization, and the secant algorithm, among others; to get rid of higher-order derivatives. Following the same approach, we attempt to use several possible approaches to remove the second-order derivative in the above-developed eighth-order algorithm in Eq (2.7). For this purpose, the following structure:

    ψ(yj)=32ψ(xj)ψ(yj)ψ(xj). (2.8)

    for the second derivative taken from [18] is chosen due to its simplicity and better simulation results.

    Substituting Eq (2.8) into Eq (2.7), we get

    yj=xjψ(xj)ψ(xj),xj+1=yj[ψ(yj)ψ(yj)12ψ2(yj)ψ(yj)ψ(xj)(ψ(yj)ψ(xj))(4ψ2(yj)ψ(xj)+3ψ(yj)ψ(yj)3ψ(yj)ψ(xj))2]. (2.9)

    Hence, the algorithm (2.9) is a modified algorithm that is now free from the second derivative and abbreviated as PCNM4.

    Theorem 1. Let ξW be a simple root of a differential function ψ:WRR, where W is an open interval. If x0 is an initial guess considerably near ξ, then the modified algorithm defined by (2.7) has eighth-order convergence and satisfies the following error equation:

    ej+1=3ψ(ξ)764ψ(ξ)7e8j+O(e9j), (2.10)

    where ej=xjξ.

    Proof. Since ξ is a root of ψ and ej=xjξ is the error at jth iteration, we can expand ψ(xj) in power of ej by Taylor's series expansion as follows:

    ψ(xj)=ψ(ξ)ej+12ψ(ξ)e2j+O(ej)3. (2.11)

    By Taylor's series for 1ψ(xj) about ξ, we obtain

    1ψ(xj)=1ψ(ξ)ψ(ξ)ejψ(ξ)2+O(ej)2. (2.12)

    Multiplying (2.11) and (2.12), we get

    ψ(xj)ψ(xj)=ej1ψ(ξ)ψ(ξ)e2j2ψ(ξ)ψ(ξ)2e3j2ψ(ξ)2+O(ej)4. (2.13)

    Using (2.13) in the first step of (2.7), we get

    σj=ψ(ξ)e2j2ψ(ξ)+ψ(ξ)2e3j2ψ(ξ)2+O(ej)4, (2.14)

    where σj=yjξ. By Taylor's series for ψ(yj) about ξ, we obtain

    ψ(yj)=ψ(ξ)σj+12ψ(ξ)σ2j+O(σj)3. (2.15)

    By Taylor's series for 1ψ(yj) about ξ, we obtain

    1ψ(yj)=1ψ(ξ)ψ(ξ)σjψ(ξ)2+O(σj)2. (2.16)

    Multiplying (2.15) and (2.16), we get

    ψ(yj)ψ(yj)=σj1ψ(ξ)ψ(ξ)σ2j2ψ(ξ)ψ(ξ)2σ3j2ψ(ξ)2+O(σj)4. (2.17)

    By Taylor's series for ψ(yj) about ξ, we obtain

    ψ(yj)=ψ(ξ)+ψ(ξ)σj+O(σj)2. (2.18)

    By Taylor's series for ψ(yj) about ξ, we obtain

    ψ(yj)=ψ(ξ)+ψ(ξ)σj+O(σj)2. (2.19)

    Squaring (2.15), we have

    ψ2(yj)=ψ(ξ)2σ2j+ψ(ξ)ψ(ξ)σ3j+ψ(ξ)2σ4j4+O(σj)5. (2.20)

    Squaring (2.18), we have

    ψ2(yj)=ψ(ξ)2+2ψ(ξ)ψ(ξ)σj+ψ(ξ)2σ2j+O(σj)3. (2.21)

    Using the values of ψ2(yj), ψ(yj) and ψ(yj) in numerator, i.e., 2ψ2(yj)ψ(yj)ψ(yj) of the second step of (2.7), we get

    2σ2jψ(ξ)3ψ(ξ)+4σ3jψ(ξ)2ψ(ξ)2+52σ4jψ(ξ)ψ(ξ)3+O(σj)5. (2.22)

    Using the values of ψ(yj), ψ2(yj) and ψ(yj) in denominator, i.e., [2ψ2(yj)ψ(yj)ψ(yj)]2 of the second step of (2.7), we get

    2ψ(ξ)2+2σjψ(ξ)ψ(ξ)+32σ2jψ(ξ)2+σ2jψ(ξ)ψ(ξ)+O(σj)3. (2.23)

    Finally, by using (2.18), (2.22) and (2.23) in the second step of Eq (2.7), we get

    xj+1=ξ3σ4jψ(ξ)34ψ(ξ)3σ5jψ(ξ)48ψ(ξ)4+123σ6jψ(ξ)532ψ(ξ)5+O(σj)7. (2.24)

    By substituting σj=ψ(ξ)e2j2ψ(ξ)+ψ(ξ)2e3j2ψ(ξ)2+O(ej)4 in (2.24), we get

    ej+1=3ψ(ξ)764ψ(ξ)7e8j+O(ej)9. (2.25)

    Hence, Eq (2.25) shows that the proposed algorithm (2.7) has eighth-order of convergence.

    As the convergence analysis carried out for the proposed eighth-order algorithm (2.7), the convergence analysis for the proposed fourth-order algorithm is addressed in the same way.

    Theorem 2. Let ξW be a simple root of a differential function ψ:WRR, where W is an open interval. Let x0 be an initial guess considerably near to the exact root ξ. Then, the modified algorithm defined by (2.9) has fourth-order convergence and satisfies the following error equation:

    ej+1=ψ(ξ)38ψ(ξ)3e4j+O(ej)5. (2.26)

    Proof. Since ξ is a root of ψ and ej=xjξ is the error at jth iteration, we can expand ψ(xj) in power of ej by Taylor's series expansion as follows:

    ψ(xj)=ψ(ξ)ej+12ψ(ξ)e2j+O(ej)3. (2.27)

    By Taylor's series for 1ψ(xj) about ξ, we obtain

    1ψ(xj)=1ψ(ξ)ψ(ξ)ejψ(ξ)2+O(ej)2. (2.28)

    Multiplying (2.27) and (2.28), we get

    ψ(xj)ψ(xj)=ej1ψ(ξ)ψ(ξ)e2j2ψ(ξ)ψ(ξ)2e3j2ψ(ξ)2+O(ej)4. (2.29)

    Using (2.29) in the first step of (2.9), we get

    σj=ψ(ξ)e2j2ψ(ξ)+ψ(ξ)2e3j2ψ(ξ)2+O(ej)4. (2.30)

    By Taylor's series for ψ(yj) about ξ, we obtain

    ψ(yj)=ψ(ξ)σj+12ψ(ξ)σ2j+O(σj)3. (2.31)

    By finding the derivative of ψ(xj) and ψ(yj) respectively, we have

    ψ(xj)=ψ(ξ)+ψ(ξ)ej+O(ej)2, (2.32)
    ψ(yj)=ψ(ξ)+ψ(ξ)σj+O(σj)2. (2.33)

    By Taylor's series for 1ψ(yj) about ξ, we obtain

    1ψ(yj)=1ψ(ξ)ψ(ξ)σjψ(ξ)2+O(σj)2. (2.34)

    Multiplying (2.31) and (2.34), we get

    ψ(yj)ψ(yj)=σj1ψ(ξ)ψ(ξ)σ2j2ψ(ξ)ψ(ξ)2σ3j2ψ(ξ)2+O(σj)4. (2.35)

    By putting all the above equations into second step of (2.9), we get

    ej+1=ψ(ξ)38ψ(ξ)3e4j+O(ej)5. (2.36)

    The error equation (2.36) shows that the proposed algorithm (2.9) has fourth-order convergence.

    The flowchart of the proposed two-step iterative algorithms is shown in Figure 1.

    Figure 1.  Flowchart of the proposed predictor–corrector algorithms.

    There are certain limitations with the local convergence analysis performed in Section 2 for the algorithm (2.7).

    (L1) The algorithm (2.7) can be also used to solve equations on the real line.

    (L2) The existence of at least fourth derivative is required to show convergence, and the solution has to be simple. Consider the example, for W=[1.5,1.5], and function ψ defined on W by ψ(x)=x4log(x)+x5x4 for x0, and ψ(0)=0. Then, clearly ζ=1 solves the equation ψ(x)=0, but ψ(4) does not exist at x=0. Consequently, the results of Section 2 cannot guarantee convergence to ζ although algorithm converges for say x0=1W.

    (L3) There are no commutable error bounds on the distances ||xjζ||. That is, we do not know a priori how many iterates are needed to reach a certain error tolerance.

    (L4) There are no uniqueness of the solution results.

    (L5) The more interesting semi-local convergence is not considered in Section 2.

    (L6) There is no radius of convergence. Thus, we do not know how to pick the initial point x0.

    We positively address the limitations as follows:

    (L1) The convergence is given for Banach space valued equations.

    (L2) The convergence conditions involve only the operators on the algorithm (3.1), i.e., F, F and F.

    (L3) Computable error bounds on ||xjζ|| are provided. Hence, we know in advance the number of iterations to be executed to obtain a certain accuracy.

    (L4) A certain region is specified containing only one solution of the equation Q(x)=0.

    (L5) The semi local convergence analysis of algorithm (2.7) is developed using majorizing sequences.

    (L6) The radius of convergence is specified. This is how we extend the applicability of the algorithm (2.7).

    The algorithm (2.7) in a Banach space setting is given for starter x0D, and each j=0,1,2, by

    yj=xjQ(xj)1Q(xj),zj=yjQ(yj)1Q(yj),Aj=Q(yj)212Q(yj)Q(yj),xj+1=zj12A1jQ(yj)A1jQ(yj)Q(yj)Q(yj), (3.1)

    where the operators Q and Q denote the first and the second Fréchet derivative of Q, respectively, to solve the equation Q(x)=0, where Q:DW1W2 is a twice Fréchet differentiable operator, D is open and convex set and W1,W2 are Banach spaces.

    Notice that Q is linear and Q is a bilinear operator [19].

    It is worth noting that (2.9) cannot be written in Banach space, since the long denominator is not a linear operator.

    The local convergence analysis is based on some conditions.

    Suppose:

    (C1) There exists a function κ0:[0,+)R which is continuous and also non-decreasing (CFND) such that the equation κ0(t)1=0 has a smallest solution which is positive (SSP). Denote such solution by R0. Set T0=[0,R0).

    (C2) There exists CFND κ:T0R such that for h1:T0R defined by

    h1(t)=10κ((1θ)t)dθ1κ0(t),

    the equation h1(t)1=0 has a SSP. Denote such solution by t1.

    (C3) The equation κ0(h1(t))1=0 has a SSP in the interval T0. Denote such solution by R1. Set T1=[0,R1).

    (C4) For h2:T1R defined by

    h2(t)=h1(t)10κ((1θ)h1(t)t)dθ1κ0(th1(t)),

    the equation h2(t)1=0 has a SSP. Denote such solution by t2.

    (C5) Let α,β>0 be given constants, and κ2:T1R be an CFND. For p:T1R defined by

    p(t)=(κ20(th1(t))+2ακ0β(th1(t))+β2κ2(th1(t)))(α+1β10κ0(θth1(t))dθ)h1(t),

    the equation p(t)1=0 has SSP. Denote such solution by R2.

    Set T2=[0,R2). Consider function q:T2R defined by

    q(t)=β21p(t).

    (C6) The equation κ0(th2(t))1=0 has an SSP. Denote such solution by R2. Set T3=[0,R3).

    (C7) Let κ1:T3R and define functions ˉκ:T3R and h3:T3R

    ˉκ(t)={κ1(t)orα+κ0(t)

    and

    h3(t)=h2(t)10κ((1θ)th2(t))dθ1κ0(th2(t))+12β2q2(t)ˉκ(th1(t))κ2(th1(t))(α+1β10κ0(θth1(t))dθ)2th21(t).

    The equation h3(t)1=0 has an SSP. Denote such solution by t3.

    Set

    t0=min{tm},m=1,2,3

    and T=[0,t0). Then it follows by these conditions that for each tT

    0κ0(t)<1, (3.2)
    0κ0(t)th1(t)<1, (3.3)
    0κ0(t)th2(t)<1, (3.4)
    0p(t)<1, (3.5)
    0q(t), (3.6)

    and

    0hm(t)<1. (3.7)

    The constant t0 is proven to be a radius of convergence for the algorithm (3.1) in Theorem 1. First, we relate the developed functions κ,κ0,κ1 and κ2 to the operators on the algorithm (3.1).

    (C8) There exists a solution ζD of the Eq (3.1), an invertible linear operator M such that ||M||α and ||M1||β.

    (C9) ||Q(x)M||1βκ0(||xζ||) for each xD. Define the region D0=DS(ζ,R0), where S(ζ,R0) stands for a ball that is open having center ζ and radius R0>0. It follows by this condition and (C1) that for x=ζ

    ||M1||||Q(x)M||κ0||xζ||<1.

    Thus, from the standard lemma by Banach on linear, and invertible operators [20,21] Q(x) is invertible, and

    ||Q(x)1||β1κ0(||xζ)||). (3.8)

    (C10) ||Q(x)Q(y)||1βκ(||xy||) for each x,yD0.

    Set D1=DS(ζ,R3).

    (C11) ||Q(x)||1βκ1(||xζ||) and ||Q(x)||1βκ2(||xζ||) for each xD1.

    (C12) S[ζ,t0]D, where S[ζ,t0] stands for the closure of S(ζ,t0).

    Notice that the first condition in (C11) can be dropped, since

    ||Q(x)||||Q(x)M+M||||M||+||Q(x)M||α+1βκ0(||xζ||).

    Hence, the function κ1 can be defined by κ1(t)=α+1βκ0(t). Moreover, the function ˉκ given in the condition (C7) is chosen in particular to be the smallest of the functions κ1 and α+1βκ0(t). The developed notation and the conditions (C1)(C12) shall be used in the local analysis of convergence for the algorithm (3.1).

    Theorem 3. Suppose that the conditions (C1)(C12) are satisfied, and x0S0=S(ζ,t0){ζ}. Then, the following assertions hold for each j=0,1,2,

    {xj}S(ζ,t0), (3.9)
    ||yjζ||h1(||xjζ||)||xjζ||||xjζ||<t0, (3.10)
    ||zjζ||h2(||xjζ||)||xjζ||||xjζ||, (3.11)
    ||xj+1ζ||h3(||xjζ||)||xjζ||||xjζ||, (3.12)

    and the sequence {xj} is convergent to ζ.

    Proof. By assuming x0S0, the assertion (3.9) is satisfied for j=0. Let xS(ζ,t0). Then, by the condition (C1), Q(x) is invertible, and the estimate (3.8) holds. In particular, for x=x0 the linear operator Q(x0) is invertible, since x0S0. Thus, the iterate y0 is well defined. Then, we can write by the first substep of the algorithm (3.1) for j=0:

    y0ζ=x0ζQ(x0)1Q(x0)=Q(x0)1[Q(ζ+θ(x0ζ))Q(x0)]dθ(x0ζ). (3.13)

    By (3.8), the condition (3.9), the definition of the function h1, (3.7), and (3.13), we have in turn

    ||y0ζ||β1κ0(||x0ζ||)10κ((1θ)||x0ζ||)dθβ||x0ζ||h1(||x0ζ||)||x0ζ||||x0ζ||<t0, (3.14)

    so the assertion (3.10) is satisfied, the iterate y0S(ζ,t0), and consequently (3.8) holds if x=y0. Thus, the iterate z0 is well defined. By the second substep of the algorithm (3.1) for j=0:

    z0ζ=y0ζQ(y0)1Q(y0)=Q(y0)1[Q(ζ+θ(y0ζ))Q(y0)]dθ(y0ζ), (3.15)

    leading as in (3.13) and (3.14) to

    ||z0ζ||h2(||x0ζ||)||x0ζ||||x0ζ||, (3.16)

    showing the assertion (3.11), the iterate z0S(ζ,t0), and the estimate (3.8) holds for x=z0. We shall show the invertability of the operator A0, which implies the existence of the iterate x1. Consider the identity

    A0M2=(Q(y0)M+M)212Q(y0)Q(y0)M2=(Q(y0)M)2+(Q(y0)M)M+M(Q(y0)M)+M212Q(y0)Q(y0)M2=(Q(y0)M)2+(Q(y0)M)M+M(Q(y0)M)12Q(y0)Q(y0). (3.17)

    In view of (C7), the second condition in (C11), (C5), (3.5)–(3.7), and (3.17), we get

    ||M2||||A0M2||β2[κ20(||y0ζ||)β2+2αβκ0(||y0ζ||)+12κ2(||y0ζ||)β(α+1β10κ0(θ||y0ζ||)dθ)||y0ζ||]=p0<1, (3.18)

    where we also use S0||A10||q0, (3.8)

    ||Q(y0)||=Q(y0)Q(ζ)=10Q(ζ+θ(y0ζ))dθ||y0ζ||. (3.19)

    Thus,

    ||Q(y0)||||10Q(ζ+θ(y0ζ))dθN+M||α+1β10κ0(θ||y0ζ||)dθ.

    Then, we can write by the third substep of algorithm (3.1)

    x1ζ=z0ζ12A10Q(y0)A10Q(y0)Q(y0)Q(y0). (3.20)

    It follows by (C8)(C11), (3.7) (for j=3), (3.14), (3.15), (3.19), and (3.20) that

    ||x1ζ||||z0ζ|+12||A10||2||Q(y0)||||Q(y0)||||Q(y0)||210κ((1θ)||z0ζ||)dθ||z0ζ||1κ0(||z0ζ||)+12q20ˉκβ2(||y0ζ||)κ2(||y0ζ||)(α+1β10κ0(θ||y0ζ||)dθ)2||y0ζ||2h3(||x0ζ||)||x0ζ||||x0ζ||, (3.21)

    showing the assertion (3.12) and x1S(ζ,t0). Simply switch x0,y0,z0,x1 by xk,yk,zk,xk+1 in the preceding calculations to complete the induction for the assertions (3.9)–(3.12). Then, from the estimation

    ||xk+1ζ||||xkζ||<t0,

    we deduce that xk+1S(ζ,t0) and

    limk+xk=ζ.

    Next, a neighborhood of ζ is determined that contains no other solution.

    Proposition 1. Suppose there exists a solution ζS(ζ,t4)D of the equation Q(x)=0, such that the condition (C9) holds on S(ζ,t4), and there exists t5t4 so that

    1β10κ0(θt5)dθ<1. (3.22)

    Define the region D2=DS[ζ,t5]. Then, the only solution of the Eq (1.1) in the region D2 is ζ.

    Proof. Let E=10Q(ζ+θ(uζ))dθ for some uD2 with Q(u)=0. By applying the condition (C9), and the assumption (3.22), we obtain in turn

    ||M1||||EM||10κ0(θ(uζ))dθ10κ0(θt5)dθ<1,

    so E is invertible. Then, from the identity

    uζ=E1(Q(u)Q(ζ))=E1(0)=0,

    we deduce that u=ζ.

    This analysis is based on majorizing sequences. The role of x0 and the κ function is replaced by ζ and the ϑ.

    Suppose:

    (H1) There exists CFND ϑ0:[0,+)R such that equ ation ϑ0(t)1=0 has an SSP. Denote such solution by δ0.

    (H2) There exist CFND ϑ,ϑ1,ϑ2:[0,δ0)R. Define the sequences {aj},{bj},{cj} for a0=0, some b00, and each j=0,1,, by

    sj=1β10v((1θ))dθ(bjaj),cj=bj+sj1ϑ0(bj)λj=ϑ20(bj)+2αβϑ0(bj)+β2ϑ2(bj)sj,μj=β21λj,ˉϑ={ϑ1(bj)orα+ϑ0(bj)β,aj+1=cj+12μ2n¯ϑ(bj)βϑ2(bj)βs2j,βγj+1=10ϑ((1θ)(aj+1aj))dθ(aj+1aj)+(α+1βϑ0(aj))(aj+1bj),bj+1=aj+1+γj+11ϑ0(aj+1). (3.23)

    These scalar sequences are shown to be majorizing for the algorithm (3.1) in Theorem 2. However, a convergence condition for these sequences is needed.

    (H3) There exists δ[0,δ0) such that for each j=0,1,2,

    ϑ0(aj)<1,ϑ0(bj)<1,λj<1, and ajδ.

    It follows by this definition and (3.23) that 0ajbjcjaj+1δ, and there exists a[0,δ] such that

    limj+aj=a.

    The functions ϑ relate to the operator on the algorithm.

    (H4) There exist constants α,β>0, and an invertiable operator M.

    (H5) For some x0D, and each xD

    ||Q(x)M||1βϑ0(||xx0||).

    Set D3=DS(x0,δ0).

    Notice that if x=x0, we get ||M1||||Q(x0)M||ϑ0(0)<1, by the definition of δ0. Thus, the linear operator Q(x0) is invertible. Set ||Q(x0)1Q(x0)||b0.

    (H6) For each x,yD3

    ||Q(x)Q(y)||1βϑ(||xy||),||Q(x)||1βϑ1(||xx0||),||Q(x)||1βϑ2(||xx0||).

    (H7) S[x0,a]D.

    Next, the semilocal analysis of convergence for the algorithm (3.1) is developed under the conditions (H1)(H7).

    Theorem 4. Suppose that the conditions (H1)(H7) hold. Then, the sequence {xj} generated by the algorithm (3.1), exists in S(x0,a), stays in S(x0,a) for each j=0,1,2,, and is convergent to a solution ζS[x0,a] of the equation Q(x)=0, such that for each j=0,1,2,

    ||ζxj||aaj. (3.24)

    Proof. Mathematical induction is utilized to show the assertions

    ||yjxj||bjaj, (3.25)
    ||zjyj||cjbj, (3.26)
    ||xj+1zj||aj+1cj. (3.27)

    The assertion (3.26) holds if j=0, since by the first substep of algorithm (3.1), and the definition of b0:

    ||y0x0||=||Q(x0)1Q(x0)||b0=b0a0<a.

    Thus, the iterate y0S(x0,a). We can write by the first substep of (3.1) that

    Q(y0)=Q(y0)Q(x0)Q(x0)(y0x0)=[Q(x0+θ(y0x0))Q(x0)]dθ(y0x0). (3.28)

    Using the first condition in (H6)

    ||Q(y0)||1β10ϑ((1θ)||y0x0||)dθ||y0x0||1β10ϑ((1θ)(b0a0))dθ(b0a0)=δ0. (3.29)

    By the condition (H4) for x=y0

    ||M1||||Q(y0)M||ϑ0(||y0x0||)ϑ0(b0)<1,

    Thus, Q(y0) is invertible, and

    ||Q(y0)1||β1κ0(b0), (3.30)

    and the iterate z0 is well defined by the second substep of (3.1). Then, we can also write

    z0y0=Q(y0)1Q(y0). (3.31)

    In view of (3.23) and (3.29)–(3.31), we get

    ||z0y0||δ01ϑ0(bj)=c0b0,

    and

    ||z0x0||||z0y0||+||y0x0||c0b0+b0a0=c0<a.

    Hence, the assertion (3.29) holds if j=0, and z0S(x0,a).

    As in the local case:

    ||M2||||A0M2||β2[ϑ20(||yjx0||)β2+2αβϑ0(||yjx0||)+12βϑ2(||yjx0||)δj]β2[ϑ20(bj)β2+2αβϑ0(bj)+12βϑ0(bj)+12βϑ2(bj)δj]=λj<1,

    so, A0 is invertible, and

    ||A10||μ0. (3.32)

    Notice, also that Q(z0) is invertible, so the iterate x1 is well defined by the third substep of (3.1). Then, we have by (3.1)

    x1z0=12A10Q(y0)A10Q(y0)Q(y0)Q(y0),

    leading to

    ||x1z0||12μ20ˉϑ(b0)βϑ2(b0)βδ20=a1c0,

    and

    ||x1z0||||x1z0||+||z0x0||a1c0+c0a0=a1<a.

    Thus, the assertion (3.30) holds for j=0, and the iterate x1S(z0,a). We can write

    Q(xm+1)=Q(xm+1)Q(xm)Q(xm)(xm+1xm)+Q(xm)(ymxm),

    leading to

    ||Q(xm+1)||1β10ϑ((1θ)||xm+1xm||)dθ||xm+1xm||+(α+1βϑ0(||xmx0||))||xm+1ym||1β10ϑ((1θ)(am+1am))dθ(am+1am)+(α+1βϑ0(am))(am+1bm)=γm+1, (3.33)

    so

    ||ym+1xm+1||||Q(xm+1)1||||Q(xm+1)||γm+11ϑ0(||xm+1x0||)γm+11ϑ0(am+1)=bm+1am+1, (3.34)

    and

    ||ym+1x0||||ym+1xm+1||+||xm+1x0||bm+1am+1+am+1a0=bm+1<a.

    The induction for the assertions (3.25)–(3.27) is completed if x0, y0, z0, x1 are replaced by xm, ym, zm, xm+1, respectively. Then, we have

    ||xm+1xm||||xm+1ym||+||ymxm||am+1bm+bmam=am+1am.

    Consequently, the sequence {xm} is Cauchy in a Banach space and such it is convergent to some ζS[x0,a]. By letting m+ in (3.33) we get Q(ζ)=0. Moreover, by the estimate

    ||xj+kxj||aj+kaj, (3.35)

    we obtain (3.24) by letting k+ in (3.34).

    As in the local case a neighborhood of x0 is special containing only one solution in the next result.

    Proposition 2. Suppose there exists a solution yS(x0,t7) of the Eq (3.1).

    The condition (H5) holds in ball S(x0,t7), and there exists t8t7 such that

    β10ϑ0(θt7+(1θ)t8)dθ<1. (3.36)

    Define the region D4=DS[x0,t8].

    Then, the only solution of the equation Q(x)=0 in the region D4 is y.

    Proof. Let E1=10Q(y+θ(zy)dθ for some zD4 with Q(z)=0. It follows by the condition (H5) and (3.36)

    ||M1||||E1M||β10ϑ0(θ||yx0||+(1θ)||zx0||)dθβ10ϑ0(θt7+(1θ)t8)dθ<1.

    Therefore, the operator E1 is invertible. Then, from the identity

    zy=E11(Q(z)Q(y))=E1(0)=0,

    we deduce that z=y.

    Remark 1. (i) Popular choices for M=Q(ζ) (local case) and M=Q(x0) (semilocal case). However, these choices are not the necessary the most flexible. Other choices exist [22,23].

    (ii) Note that not all the conditions (H1)(H7) are assumed in Proposition 1. However, if they are, we can take y=ζ, and t7=a.

    (iii) The constant δ0 can replace the limit point a is the condition (H7).

    The analysis of stability and speed of convergence is an essential aspect of analyzing every root-finding algorithm. The standard way of performing such an analysis is the use of visual approach via the so-called polynomiography [24,25]. In polynomiography, we generate images called polynomiographs using a general algorithm presented in Algorithm 1. Depending on the coloring algorithm in the last step of the algorithm, we can visualize various aspects of the root-finding algorithm.

    Algorithm 1: Generation of a polynomiograph.
    Input: pC[Z], degp2 – polynomial; R – root finding algorithm; AC – area; N – the maximum number of iterations; ε – accuracy; colors – color map.
    Output: Polynomiograph for the complex-valued polynomial p within the area A.
    1 for z0A do

    In this section, we use a coloring algorithm that shows basins of attraction and speed of convergence in the same polynomiograph [26]. In this algorithm, each root of the considered polynomial gets a distinct color. We also need an additional color to depict the non-convergent points (we use the black color for this aim). Now, to color the given starting point z0, we first check whether the root-finding algorithm has converged to any of the roots, i.e., if n<N, where n is the number of the last iterate. If the algorithm has converged, then, for the found root, we find the closest root of the polynomial and use its color to color z0, else we use the additional color. In this way, we show basins of attraction. To add the information on the speed of convergence, we change the shade of the color based on the number of performed iteration. In this way, a light color will show a small number of iterations and a dark color a large number of iterations.

    Based on polynomiographs, we can compute some numerical measures that will allow the analysis to be broadened. The three most widely used measures are: The average number of iterations (ANI) [27], convergence area index (CAI) [27], and the computations time [28]. To compute the average number of iterations, we take the information on the speed of convergence in the polynomiograph because it contains the number of iterations needed to find the root for all points in the considered area A. CAI is the ratio of the number of starting points that converged to any root to the number of all points in the considered area A. The CAI value is a real number between 0 and 1, and it gives us information on the percentage of points in A that have converged to roots.

    The polynomiographs used for the analysis presented in this section were generated for the following common parameters: N=30, ε=0.001, and polynomiograph resolution 1000×1000 pixels. In our analysis, we use three polynomials:

    p3(z)=z31, roots: 1, 0.5+32i, 0.532i, the area A=[2,2]2,

    p4(z)=z410z2+9, roots: 3, 1, 1, 3, the area A=[4,4]2,

    p5(z)=z5z, roots: 0, 1, 1, i, i, the area A=[2,2]2.

    The program for generating polynomiographs was implemented in Mathematica 13.2 using the parallelization option of the Compile command. Moreover, the polynomiographs were generated on the computer with the specifications: Intel i5-9600K (@3.70 GHz), 32 GB DDR4 RAM, and Windows 10 (64 bit). Apart from the two proposed algorithms, PCNM8 and PCNM4, we generated polynomiographs for the PJNM, KTNM, CNM1, CNM2, and ONM algorithms for comparison purposes.

    The obtained results for the p3 polynomial are presented in Figure 2 (the polynomiographs) and in Table 1 (the numerical measures). From the polynomiographs, we see that each algorithm has different basins of attraction. However, in each case, we see three characteristic braids. The degree of interweaving of the basins tells us about the stability of the algorithm (the less the interweaving, the better the stability of the algorithm). Based on the polynomiographs, we see that the best stability is obtained by the PJNM algorithm, followed by the PCNM8 and ONM algorithms. For the CNM1 algorithm, we also see a low degree of interweaving, but there are also non-convergent points (the black areas) which are not visible in the three algorithms mentioned. This is confirmed by the CAI measure presented in Table 1. The PJNM, PCNM8 and ONM algorithms all obtained values equal to 1.0, whereas for the CNM1 algorithm, the CAI value is less than 1.0, namely it is equal to 0.967. The lowest value of CAI was obtained by the CNM2 algorithm (0.946). When it comes to the speed of convergence, we see that the lightest colors are visible in the polynomiographs for the PCNM8 and ONM algorithms, followed by the PCNM4 and CNM1 algorithms. Again, we can confirm this observation by looking at the values of ANI gathered in Table 1. The lowest value of ANI was obtained by the PCNM8 algorithm (1.496), and the second lowest value equal to 1.624 was obtained by the ONM algorithm. Now, if we look at the generation time of polynomiographs, we see that the fastest algorithm is the CNM1 algorithm (0.762 s). The two best algorithms in terms of ANI, i.e., the PCNM8 and ONM, obtained slightly slower times equal to 0.778 and 0.775 s, respectively. The slowest algorithm is the KTNM algorithm (0.914 s).

    Figure 2.  Polynomiographs for the complex polynomial p3 generated using various root-finding algorithms.
    Table 1.  Numerical measures obtained from the polynomiographs generated for p3 (Figure 2).
    Algorithm ANI CAI Time [s]
    PCNM8 1.496 1.0 0.778
    PCNM4 2.320 1.0 0.774
    PJNM 5.562 1.0 0.832
    KTNM 5.358 0.973 0.914
    CNM1 2.642 0.967 0.762
    CNM2 3.840 0.946 0.808
    ONM 1.624 1.0 0.775

     | Show Table
    DownLoad: CSV

    In the next example, we present and analyze the results obtained for the p4 polynomial. In Figure 3, we see the polynomiographs for this polynomial, whereas in Table 2, we gather the values of numerical measures. In the polynomiographs, we see four basins of attraction. The most significant difference in those basins is visible at the boundaries, where the interweaving of the basins appears. The lowest degree of interweaving can be observed for the PCNM4, PJNM, and CNM1 algorithms. Thus, these three algorithms obtained the best stability among the analyzed algorithms. However, for the CNM1 algorithm, we can also observe some non-convergent points in the polynomiograph, which causes the CAI value to be equal to 0.999. The other two algorithms obtained a CAI value of 1.0. There are also two other algorithms with full convergence (CAI value equal to 1.0), namely the PCNM8 and ONM algorithms. In terms of speed of convergence, the best results are obtained by the ONM and PCNM8 algorithms, for which the ANI value is equal to 1.566 and 1.660, respectively. We can observe this in the polynomiographs because these two algorithms have the biggest light areas. On the other extreme, the worst speed of convergence can be observed for the PJNM algorithm (the darkest colors and the value of ANI equal to 7.561). If we look at the generation times, which reflect the real time of computations, we see that the fastest algorithm was the PCNM8 algorithm, followed by the CNM1 algorithm. These two algorithms obtained the times equal to 0.781 and 0.840 s, respectively. The slowest algorithm among the analyzed algorithms was the PJNM algorithm, which obtained a time equal to 1.085 s.

    Figure 3.  Polynomiographs for the complex polynomial p4 generated using various root-finding algorithms.
    Table 2.  Numerical measures obtained from the polynomiographs generated for p4 (Figure 3).
    Algorithm ANI CAI Time [s]
    PCNM8 1.660 1.0 0.781
    PCNM4 2.212 1.0 0.901
    PJNM 7.561 1.0 1.085
    KTNM 3.948 0.993 0.892
    CNM1 2.201 0.999 0.840
    CNM2 2.593 0.998 0.858
    ONM 1.566 1.0 0.844

     | Show Table
    DownLoad: CSV

    In the last example, we visually analyze the results obtained for the p5 polynomial. The polynomiographs generated for this polynomial are presented in Figure 4, and the calculated numerical measures are in Table 3. The best stability is observed for the PJNM algorithm, followed by the CNM1 algorithm. For these two algorithms, the interweaving of the basins is the smallest. However, for the CNM1 algorithm, we can observe areas of non-convergent points, whereas, for the PJNM algorithm, we do not see any such points. The values of CAI gathered in Table 3 confirm this observation, where the PJNM and CNM1 algorithms obtained values of 1.0 and 0.997, respectively. The highest value of CAI (1.0) is also obtained by three other analyzed algorithms, namely by the PCNM8, PCNM4, and ONM algorithms. The lowest value of 0.982 is obtained by the KTNM algorithm, which is clearly visible in the polynomiograph, where we see many black areas. Now, if we analyze the speed of convergence, then we notice that the fastest algorithms are the ONM and PCNM8 algorithms, for which we see the lightest shades of the basins' colors. Consequently, these two algorithms obtained the lowest values of ANI, equal to 1.556 and 2.013, respectively. The worst speed of convergence is visible for the PJNM algorithm. Moreover, this algorithm also obtained the highest ANI value, which was equal to 5.678. When it comes to the generation times, like for the p3 polynomial, the shortest time of 0.834 s was obtained by the CNM1 algorithm. The second best algorithm with a time equal to 0.838 s is the PCNM8 algorithm.

    Figure 4.  Polynomiographs for the complex polynomial p5 generated using various root-finding algorithms.
    Table 3.  Numerical measures obtained from the polynomiographs generated for p5 (Figure 4).
    Algorithm ANI CAI Time [s]
    PCNM8 2.013 1.0 0.838
    PCNM4 2.413 1.0 0.930
    PJNM 5.678 1.0 0.994
    KTNM 4.378 0.982 0.931
    CNM1 2.321 0.997 0.834
    CNM2 2.758 0.994 0.878
    ONM 1.556 1.0 0.859

     | Show Table
    DownLoad: CSV

    In this section, we investigate the effectiveness of newly proposed fourth- and eighth-order algorithms for solving several nonlinear real-world application problems including single and multi-variables. Several parameters have been specified to compare the proposed algorithms with some of the existing ones having similar order of convergence. The parameters include the number of iterations (NI), the number of function evaluations (NFE), the absolute error at the final iteration (|η|=|xNxN1|), the absolute value of the nonlinear function (|ψ(xN)|), and the CPU time (in seconds). As the stopping criterion, we use the following tolerance:

    |η|=|xNxN1|10300.

    Problem 1. For the first nonlinear model, we consider a path traversed by an electron in the air gap between two parallel plates considering the multi-factor effect, which is given by [29]

    μ(x)=μ0+(v0+c0E0mωsinωx0+σ)(xx0)+c0E0mω2(cos(ωx+σ)+sin(ωx+σ)), (5.1)

    where μ0 and v0 are the position and velocity of the electron at time x, m and c0 are the mass and the charge of the electron at rest, respectively, and E0sin(ωx+σ) is the radio frequency electric field between the plates.

    If particular parameters are chosen, Eq (5.1) can be simplified as

    ψ1(x)=x0.5cosx+π4=0. (5.2)

    The results of numerical simulations of an engineering application Problem 1 carried out for a fixed number of iterations, that is, when NI =7 and x0=10.5 are presented in Table 4. It is observed that the proposed fourth-order PCNM4 algorithm produces, at the final iteration, the smallest absolute error and the smallest absolute value of the function itself when compared to the other existing algorithms. This proposed fourth-order algorithm has the advantage of less CPU time consumption in comparison to other fourth-order existing algorithms. Similarly, in Table 5, it is observed that the existing algorithm of eight-order convergence (ONM) takes the fewest number of iterations (NI) in comparison to the other algorithms, while the proposed eight-order PCNM8 algorithm produces the smallest absolute error when compared to the other algorithms. In Table 5, we observe that the existing algorithms of order four PJNM, KTNM, CNM1, and CNM2 produce the smallest absolute errors when compared to the proposed fourth-order PCNM4, while the PCNM4 algorithm takes a fewer number of iterations in comparison to the other fourth-order existing algorithms.

    Table 4.  Numerical simulations for Problem 1 for initial guess (x0=10.5) and number of iterations (NI=7).
    Algorithm |η| |ψ1(xN)| CPU Time [s]
    PCNM4 2.5741e-505 8.2489e-2021 4.88e-01
    PJNM 2.9844e-268 7.5068e-1073 9.68e-01
    KTNM 1.2395e-135 2.0495e-541 9.06e-01
    CNM1 8.5050e-47 5.7423e-94 7.35e-01
    CNM2 8.5050e-47 5.7423e-94 5.78e-01

     | Show Table
    DownLoad: CSV
    Table 5.  Numerical simulations for Problem 1: ψ1(x)=0.
    Algorithm NI NFE |η| |ψ1(xN)| CPU Time [s]
    PCNM4 6 24 2.957e-343 1.437e-1372 3.047e+00
    PCNM8 6 30 1.003e-1186 5.724e-7120 2.484e+00
    PJNM 7 21 3.185e-992 9.742e-3969 2.281e+00
    KTNM 7 21 7.745e-454 3.124e-1814 2.156e+00
    CNM1 10 30 8.112e-385 5.224e-770 3.093e+00
    CNM2 10 30 8.112e-385 5.224e-770 3.219e+00
    ONM 5 25 1.260e-732 1.890e-5859 2.719e+00

     | Show Table
    DownLoad: CSV

    Problem 2. NASA's "Wind" Satellite's Orbit. NASA intends to launch a spacecraft named Wind that will remain in a fixed position along a line from the earth to the sun so that the solar wind will travel by the satellite on its approach to the earth [30]. We devise the following equation based on the relevant physical laws:

    ψ2(r)=r3(Rr)2ω2GMs(Rr)2+GMer2, (5.3)

    where, G=6.67×1011 [Nm2/kg2], Ms=1.98×1030[kg], Me=5.98×1024[kg], m= the mass of satellite [kg], R=1.49×1011[m], T=3.15576×107[s] and ω=2πT. The required solution to the non-linear real-world problem ψ2(r)=0 is r=1.4761775009615046593×1011.

    The numerical simulations of the real-life application problem carried out in Table 6 take a fixed number of iterations, meaning NI =11 and x0=7.6. It is observed that the existing fourth-order PJNM algorithm produces, at the final iteration, the smallest absolute error and the smallest absolute value of the function itself when compared to the other algorithms. However, the proposed fourth-order algorithm PCNM4 has the advantage of less CPU time consumption in comparison to other fourth-order existing algorithms. Similarly, by fixing an initial guess of x0=5.5, the numerical results are presented in Table 7. It is observed that the proposed PCNM8 and existing algorithm ONM (both having eighth-order convergence) take the fewest number of iterations in comparison to the other algorithms, whereas the existing eight-order ONM algorithm produces the smallest absolute error when compared to the other algorithms. We further observe that the PCNM4 and PJNM fourth-order algorithms takes the fewest number iteration (NI) in comparison to the other fourth-order algorithms KJNM, CNM1, and CNM2.

    Table 6.  Numerical simulations for Problem 2 for initial guess (x0=7.6) and number of iterations (NI=11).
    Algorithm |η| |ψ2(rN)| CPU Time [s]
    PCNM4 1.3454e-810 1.6763e-3239 1.37e-01
    PJNM 9.2337e-997 2.4946e-3984 1.88e-01
    KTNM 1.7197e-155 2.0906e-618 1.56e-01
    CNM1 4.1304e-35 6.6419e-51 1.40e-01
    CNM2 4.1304e-35 6.6419e-51 1.40e-01

     | Show Table
    DownLoad: CSV
    Table 7.  Numerical simulations for Problem 2: ψ2(r)=0.
    Algorithm NI NFE |η| |ψ2(rN)| CPU Time [s]
    PCNM4 11 44 1.345e-810 1.676e-324 2.578e+00
    PCNM8 8 40 8.254e-727 2.333e-4375 8.430e-01
    PJNM 11 33 9.234e-997 2.495e-3984 9.070e-01
    KTNM 12 36 1.185e-646 4.701e-2583 9.540e-01
    CNM1 14 42 2.156e-343 1.801e-667 7.820e-01
    CNM2 14 42 2.156e-343 1.801e-667 7.810e-01
    ONM 8 40 1.02e-1078 3.290e-8660 1.047e+00

     | Show Table
    DownLoad: CSV

    Problem 3. The system of nonlinear equations from [31] is:

    x1+exp(x2)cos(x2)=0,3x1x2sin(x1)=0. (5.4)

    The numerical simulations for solving the system numerical result carried out in Table 8 take a fixed number of iterations, that is NI=9, and initial guess for two by two non linear system are x1=0.1 and x2=0.2. It is observed that the proposed algorithm PCNM4 produces the smallest absolute error when compared to the other algorithms. The proposed fourth-order algorithm PCNM4 has the advantage of less CPU time consumption in comparison to other fourth-order existing algorithms.

    Table 8.  Numerical simulations for Problem 3.
    Algorithm ||xnxn1|| [x1,x2]T CPU Time [s]
    PCNM4 2.00e-24002 6.77e-12003, 1.35e-12002 1.50e-02
    PJNM 2.0354e-2313 -6.9050e-4627, -1.3810e-4626 3.12e-01
    KTNM 1.7120e-4255 1.1777e-8510, 1.2507e-8510 2.97e-01
    CNM1 8.6509e-1102 -5.2950e-2203, -1.0602e-2202 3.43e-01
    CNM2 2.9729e-1084 -6.6197e-2168, -1.3258e-2167 3.44e-01

     | Show Table
    DownLoad: CSV

    Problem 4. The system of three nonlinear equations is [32]:

    15x1+x224x313=0,x21+10x2exp(x3)11=0,x2225x3+22=0. (5.5)

    The numerical simulations for solving the system (5.5) carried out in Table 9 take a fixed number of iterations, that is NI=6, and the initial guesses for the three-by-three nonlinear system are chosen to be x1=0.8, x2=1.0, x3=0.8. Although all the algorithms converge, we observe that the proposed algorithm PCNM4 produces the smallest absolute error when compared to the other algorithms. The proposed fourth-order algorithm PCNM4 has the advantage of less CPU time consumption in comparison to other fourth-order existing algorithms.

    Table 9.  Numerical simulations for Problem 4.
    Algorithm ||xnxn1|| [x1,x2,x3]T CPU Time[s]
    PCNM4 1.91e-1754 1.04e+00, 1.0300e+00, 9.2300e-01 1.72e-01
    PJNM 2.5908e-08 1.0418e+00, 1.0312e+00, 9.2254e-01 2.35e-01
    KTNM 1.2176e-52 1.0418e+00, 1.0312e+00, 9.2254e-01 1.88e-01
    CNM1 2.3203e-08 1.0418e+00, 1.0312e+00, 9.2254e-01 2.66e-01
    CNM2 2.2706e-08 1.0418e+00, 1.0312e+00, 9.2254e-01 2.34e-01

     | Show Table
    DownLoad: CSV

    Problem 5. Neurophysiology application [33]:

    The nonlinear model consists of the following six equations:

    x21+x23=1,x22+x24=1,x5x33+x6x34=c1,x5x31+x6x32=c2,x5x1x23+x6x24x2=c3,x5x21x3+x6x22x4=c4. (5.6)

    The constants ci in (5.6) can be randomly chosen. In our experiment, we considered ci=0 for i=1,,4.

    The numerical simulations for solving the real-life application system used in medical science (Neurophysiology) are carried out in Table 10. Neurophysiology is the branch of physiology that focuses on the study of the nervous system, including the brain, spinal cord, and peripheral nerves. The numerical simulations carried out in Table 10 take a fixed number of iterations, that is NI=7, and the initial guesses for the system given in (5.6) are chosen to be x1=0.9, x2=0.8, x3=0.7, x4=0.5, x5=0.3, x6=0.1. It is observed that the proposed algorithm, PCNM4, produces the smallest absolute error when compared to the other algorithms. The existing fourth-order algorithms taken for comparison, namely; PJNM, CNM1, and CNM2 diverge from the exact solution, while the algorithm KTNM converges but is comparatively slower than the proposed one. Hence, the performance of the proposed fourth-order algorithm for the nonlinear model (5.6) is far better than some of the existing approaches.

    Table 10.  Numerical simulations for Problem 5.
    Algorithm ||xnxn1|| [x1,x2,x3,,x6]T
    PCNM4 1.54e-4839 7.89e-01, 8.48e-01, 6.14e-01, 5.30e-01, 3.06e-19384, 9.00e-21684
    PJNM 7.4364e+05 7.88e-01, 8.48e-01, 6.13e-01, 5.30e-01, -2.53e+05, -8.29e+05
    KTNM 5.021e-30 7.99e-01, 8.63e-01, 6.01e-01 5.05e-01, -5.19e-59, 7.56e-60
    CNM1 4.30e+12 5.75e+08, 4.99e+08, 5.66e+09, -4.51e+08, 4.38e+12, -2.68e+12
    CNM2 1.07e+4129 -8.4e+4121, 3.1e+4123, 1.0e+4122, -4.9e+4123, -1.4e+4128, 1.0e+4129

     | Show Table
    DownLoad: CSV

    We introduce two predictor-corrector algorithms, amalgamating Taylor's series and the composition approach. One of these algorithms demonstrates eighth-order convergence alongside a high-efficiency index of approximately 1.5157, surpassing certain established techniques. The other algorithm achieves fourth-order convergence. Our comprehensive convergence analyses encompass both local and semilocal aspects. We use a number of complex polynomials and polynomiographs that show how much faster the modified algorithms converge, which is especially clear when we compare basins of attraction to other algorithms. Real-world applications in chemistry, astronomy, and neurology validate our simulations, where the proposed algorithms consistently outperform their numerical counterparts. It is intended to return to the proposed approach in the future to make any necessary adjustments to attain the best potential algorithm.

    D. K. Almutairi: Validation, writing – review and editing; I. K. Argyros: Methodology, formal analysis, writing – review and editing; K. Gdawiec: Formal analysis, software, visualization, writing – original draft, writing – review and editing; S. Qureshi: Conceptualization, software, writing – original draft; A. Soomro: Software, investigation, writing – original draft; K. H. Jamali: Supervision, validation; M. Alquran: Supervision, resources, data curation; A. Tassaddiq: Conceptualization, methodology, validation, formal analysis, investigation, resources, writing – review and editing, project administration, funding acquisition. All authors have read and approved the final version of the manuscript for publication.

    The author extends the appreciation to the Deanship of Postgraduate Studies and Scientific Research at Majmaah University for funding this research work through the project number (ER-2024-1216).

    The authors declare that they have no conflict of interest.



© 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).