Research article

A single parameter fourth-order Jarrat-type iterative method for solving nonlinear systems

  • Received: 29 December 2024 Revised: 14 March 2025 Accepted: 28 March 2025 Published: 03 April 2025
  • MSC : 65B99, 65H05

  • In this paper, a Jarrat-type iterative method for solving nonlinear systems is proposed through the weight function technique. It is proved that the convergence order of the iterative method reaches fourth order in Banach space. The stability of this fourth-order iterative method is analyzed using real dynamics theory. The fixed points and critical points are computed with real dynamics tools, and the dynamical planes are drawn. From the dynamical planes, it can be observed that the stability of the iterative method is best when the parameter satisfies $0\le\beta<\frac{423}{8}$. Next, we compare the efficiency of the proposed method with two fourth-order iterative methods. The results show that when $\beta=\frac{9}{8}$, our method performs better in terms of computation time, computation error, and computational efficiency. In addition, we also use this method to successfully solve a Hammerstein-type integral equation, a boundary value problem, a heat conduction problem governed by a partial differential equation, and other nonlinear systems. The experimental results are consistent with the theoretical analysis, which further confirms the accuracy of the method.

    Citation: Jia Yu, Xiaofeng Wang. A single parameter fourth-order Jarrat-type iterative method for solving nonlinear systems[J]. AIMS Mathematics, 2025, 10(4): 7847-7863. doi: 10.3934/math.2025360




    In the field of engineering and scientific computing, solving nonlinear systems is a core task. Iterative algorithms have become the preferred tools for such problems due to their efficiency and flexibility. An iterative method solves a nonlinear problem by repeatedly refining an estimate of the solution until it converges to the exact solution or meets a prescribed error tolerance.

    The Jarrat-type iterative method, obtained by optimizing and improving Newton's method, is an effective method for solving nonlinear systems [1]. The classical Jarrat-type iterative method takes the form (1.1):

    $$\begin{cases} y^{(k)}=\nu^{(k)}-\dfrac{2}{3}\,[\Gamma'(\nu^{(k)})]^{-1}\Gamma(\nu^{(k)}),\\[2mm] \nu^{(k+1)}=\nu^{(k)}-S\,[\Gamma'(\nu^{(k)})]^{-1}\Gamma(\nu^{(k)}), \end{cases}\qquad(1.1)$$

    where $\Gamma$ is a function defined on $\mathbb{R}^{n}$, $\nu\in\mathbb{R}^{n}$, and $S=[6\Gamma'(y^{(k)})-2\Gamma'(\nu^{(k)})]^{-1}[3\Gamma'(y^{(k)})+\Gamma'(\nu^{(k)})]$. Comparing the convergence orders of Newton's method and the classical Jarrat-type method, it is evident that the Jarrat-type method converges faster.
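As a concrete illustration, one step of the classical scheme (1.1) can be sketched in a few lines; the function names and the small quadratic test system below are ours, not the paper's (a minimal sketch, assuming NumPy is available).

```python
import numpy as np

def jarratt_step(F, J, x):
    """One step of the classical Jarrat-type method (1.1):
    y = x - (2/3)[J(x)]^{-1} F(x),
    x_new = x - S [J(x)]^{-1} F(x),  S = [6 J(y) - 2 J(x)]^{-1} [3 J(y) + J(x)]."""
    Fx, Jx = F(x), J(x)
    newton = np.linalg.solve(Jx, Fx)        # [Gamma'(x)]^{-1} Gamma(x)
    y = x - (2.0 / 3.0) * newton
    Jy = J(y)
    S = np.linalg.solve(6.0 * Jy - 2.0 * Jx, 3.0 * Jy + Jx)
    return x - S @ newton

# Demonstration on the decoupled quadratic system x_i^2 - 1 = 0 (our test case)
F = lambda v: np.array([v[0]**2 - 1.0, v[1]**2 - 1.0])
J = lambda v: np.diag([2.0 * v[0], 2.0 * v[1]])
x = np.array([1.5, 1.5])
for _ in range(5):
    x = jarratt_step(F, J, x)
```

After five steps the iterate agrees with the root $(1,1)$ to machine precision, consistent with fourth-order convergence.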

    In order to further improve the convergence speed and order of Jarrat-type iterative methods, and to expand their scope of application, Cordero et al. [2] proposed a two-step fourth-order Jarrat-type iterative method, denoted by S1:

    $$\begin{cases} y^{(k)}=\nu^{(k)}-\dfrac{2}{3}\,[\Gamma'(\nu^{(k)})]^{-1}\Gamma(\nu^{(k)}),\\[2mm] \nu^{(k+1)}=\nu^{(k)}-\dfrac{1}{2}\left[-I+\dfrac{9}{4}\,[\Gamma'(y^{(k)})]^{-1}\Gamma'(\nu^{(k)})+\dfrac{3}{4}\,[\Gamma'(\nu^{(k)})]^{-1}\Gamma'(y^{(k)})\right][\Gamma'(\nu^{(k)})]^{-1}\Gamma(\nu^{(k)}). \end{cases}\qquad(1.2)$$

    Francisco et al. [3] combined Newton's method with the Traub method and proposed two new methods for solving nonlinear systems, denoted S2 and S3, respectively:

    $$\begin{cases} y^{(k)}=\nu^{(k)}-[\Gamma'(\nu^{(k)})]^{-1}\Gamma(\nu^{(k)}),\\[1mm] \nu^{(k+1)}=\nu^{(k)}-[I-2\sigma_k]^{-1}[I-\sigma_k]\,[\Gamma'(\nu^{(k)})]^{-1}\Gamma(\nu^{(k)}), \end{cases}\qquad(1.3)$$

    and

    $$\begin{cases} y^{(k)}=\nu^{(k)}-[\Gamma'(\nu^{(k)})]^{-1}\Gamma(\nu^{(k)}),\\[1mm] \nu^{(k+1)}=\nu^{(k)}-[I-2\sigma_k^{2}]^{-1}[I+\sigma_k]\,[\Gamma'(\nu^{(k)})]^{-1}\Gamma(\nu^{(k)}), \end{cases}\qquad(1.4)$$

    where $\sigma_k=[\Gamma'(\nu^{(k)})]^{-1}\left(\Gamma'(\nu^{(k)})-[y^{(k)},\nu^{(k)};\Gamma]\right)$ and $[\cdot,\cdot;\Gamma]$ denotes the first-order divided difference operator. Inspired by the above methods, this article proposes a fourth-order iterative algorithm based on weight functions. Compared with traditional Jarrat-type iterative methods, the new algorithm exhibits superior convergence performance when solving nonlinear systems.
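For systems, the divided difference appearing in $\sigma_k$ can be realized componentwise. The following sketch (helper names are our own; the divided difference uses the standard componentwise definition) implements one step of S2 from (1.3):

```python
import numpy as np

def divided_difference(F, u, v):
    """First-order divided difference [u, v; F], built columnwise by mixing
    the leading components of u with the trailing components of v
    (standard componentwise definition)."""
    n = len(u)
    D = np.empty((n, n))
    for j in range(n):
        a = np.concatenate((u[:j + 1], v[j + 1:]))
        b = np.concatenate((u[:j], v[j:]))
        D[:, j] = (F(a) - F(b)) / (u[j] - v[j])
    return D

def s2_step(F, J, x):
    """One step of S2 (1.3): Newton predictor plus the [I-2s]^{-1}[I-s] corrector."""
    Jx = J(x)
    newton = np.linalg.solve(Jx, F(x))
    y = x - newton                                   # Newton predictor
    sigma = np.linalg.solve(Jx, Jx - divided_difference(F, y, x))
    I = np.eye(len(x))
    return x - np.linalg.solve(I - 2.0 * sigma, (I - sigma) @ newton)

# Demonstration on a decoupled quadratic system (our test case)
F = lambda v: np.array([v[0]**2 - 1.0, v[1]**2 - 1.0])
J = lambda v: np.diag([2.0 * v[0], 2.0 * v[1]])
x = np.array([1.5, 1.5])
for _ in range(3):
    x = s2_step(F, J, x)
```

Three steps already reduce the error far below the assertion tolerance, in line with the method's fourth order.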

    This article proposes a single-parameter Jarrat-type iterative method for solving nonlinear systems. In Section 2, the convergence of the method is rigorously proven within the framework of Banach spaces. Based on the proven convergence results, Section 3 analyzes the stability of the method in depth, and the fixed points and critical points are computed. In Section 4, dynamical planes for different parameter values are drawn, the stability of the method under different parameters is analyzed using real dynamics, and parameter values with better stability are identified by comparing the basins of attraction. In Section 5, the efficiency of the proposed method is compared with two iterative methods of the same order. In Section 6, four numerical experiments are conducted; the numerical results further confirm that the method is most stable for $0\le\beta<\frac{423}{8}$. Finally, Section 7 summarizes the paper.

    According to [4], a new method for solving nonlinear systems has been proposed:

    $$\begin{cases} y^{(k)}=\nu^{(k)}-\dfrac{2}{3}\,[\Gamma'(\nu^{(k)})]^{-1}\Gamma(\nu^{(k)}),\\[1mm] \nu^{(k+1)}=\nu^{(k)}-G(\lambda)\,[\Gamma'(\nu^{(k)})]^{-1}\Gamma(\nu^{(k)}), \end{cases}\qquad(2.1)$$

    where $\lambda=[\Gamma'(y^{(k)})]^{-1}\Gamma'(\nu^{(k)})$, and

    $$G(\lambda)=\left(\frac{5}{8}-\beta\right)I+\beta\lambda+\frac{\beta}{3}\lambda^{-1}+\left(\frac{3}{8}-\frac{\beta}{3}\right)\lambda^{2},\qquad(2.2)$$

    where $k=0,1,2,\dots$, and $G(\lambda)$ is a weight function. In particular, when the parameter $\beta=\frac{9}{8}$, the iterative method reduces to (1.2). When $\beta=0$, the iterative method takes the form of S4:

    $$\begin{cases} y^{(k)}=\nu^{(k)}-\dfrac{2}{3}\,[\Gamma'(\nu^{(k)})]^{-1}\Gamma(\nu^{(k)}),\\[1mm] \nu^{(k+1)}=\nu^{(k)}-\left(\dfrac{5}{8}I+\dfrac{3}{8}\lambda^{2}\right)[\Gamma'(\nu^{(k)})]^{-1}\Gamma(\nu^{(k)}), \end{cases}\qquad(2.3)$$

    and the iterative method (2.1) has fourth-order convergence, which is proved by the following theorem.
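A runnable sketch of one step of the family (2.1)–(2.2), under our restored reading of the weight function; the helper names and the quadratic test system are ours, not the paper's.

```python
import numpy as np

def family_step(F, J, x, beta):
    """One step of (2.1) with weight (2.2):
    G(t) = (5/8 - b) I + b t + (b/3) t^{-1} + (3/8 - b/3) t^2,
    where t = [Gamma'(y)]^{-1} Gamma'(x)."""
    Fx, Jx = F(x), J(x)
    newton = np.linalg.solve(Jx, Fx)
    y = x - (2.0 / 3.0) * newton
    lam = np.linalg.solve(J(y), Jx)            # lambda = [Gamma'(y)]^{-1} Gamma'(x)
    I = np.eye(len(x))
    G = ((5.0 / 8.0 - beta) * I + beta * lam
         + (beta / 3.0) * np.linalg.inv(lam)
         + (3.0 / 8.0 - beta / 3.0) * (lam @ lam))
    return x - G @ newton

F = lambda v: np.array([v[0]**2 - 1.0, v[1]**2 - 1.0])
J = lambda v: np.diag([2.0 * v[0], 2.0 * v[1]])
x98 = np.array([1.5, 1.5])     # beta = 9/8 recovers S1, i.e. (1.2)
x0 = np.array([1.5, 1.5])      # beta = 0 gives S4, i.e. (2.3)
for _ in range(5):
    x98 = family_step(F, J, x98, 9.0 / 8.0)
    x0 = family_step(F, J, x0, 0.0)
```

Both parameter choices drive the iterate to the root $(1,1)$, as the theorem below predicts for every $\beta$.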

    Theorem 2.1. Let $\Gamma: I\subseteq\mathbb{R}^{n}\to\mathbb{R}^{n}$ be a sufficiently differentiable function in the open convex set $I$. Suppose that $\nu^{*}\in I$ is a root of $\Gamma(\nu)=0$ and the initial estimate $\nu^{(0)}$ is sufficiently close to $\nu^{*}$; then the family (2.1) is fourth-order convergent, and the error equation is as follows:

    $$\kappa^{(k+1)}=\frac{1}{81}\left((117+64\beta)C_2^{3}-81C_2C_3+9C_4\right)(\kappa^{(k)})^{4}+O\left((\kappa^{(k)})^{5}\right),\qquad(2.4)$$

    where $C_w=\frac{1}{w!}\,[\Gamma'(\nu^{*})]^{-1}\Gamma^{(w)}(\nu^{*})$, $w\ge2$, and $\kappa^{(k)}=\nu^{(k)}-\nu^{*}$ is the error at the $k$-th iteration.

    Proof. The Taylor expansions of $\Gamma(\nu^{(k)})$ and $\Gamma'(\nu^{(k)})$ about $\nu^{*}$ are as follows:

    $$\Gamma(\nu^{(k)})=\Gamma'(\nu^{*})\left[\kappa^{(k)}+C_2(\kappa^{(k)})^{2}+C_3(\kappa^{(k)})^{3}+C_4(\kappa^{(k)})^{4}+C_5(\kappa^{(k)})^{5}\right]+O\left((\kappa^{(k)})^{6}\right),\qquad(2.5)$$

    and its derivative is expressed as

    $$\Gamma'(\nu^{(k)})=\Gamma'(\nu^{*})\left[I+2C_2\kappa^{(k)}+3C_3(\kappa^{(k)})^{2}+4C_4(\kappa^{(k)})^{3}+5C_5(\kappa^{(k)})^{4}\right]+O\left((\kappa^{(k)})^{5}\right).\qquad(2.6)$$

    From the identity $[\Gamma'(\nu^{(k)})]^{-1}\Gamma'(\nu^{(k)})=I$, the inverse $[\Gamma'(\nu^{(k)})]^{-1}$ can be computed:

    $$[\Gamma'(\nu^{(k)})]^{-1}=\left(I+L_1\kappa^{(k)}+L_2(\kappa^{(k)})^{2}+L_3(\kappa^{(k)})^{3}+L_4(\kappa^{(k)})^{4}\right)[\Gamma'(\nu^{*})]^{-1}+O\left((\kappa^{(k)})^{5}\right),\qquad(2.7)$$

    where

    $$L_1=-2C_2,\qquad L_2=4C_2^{2}-3C_3,\qquad L_3=-8C_2^{3}+6C_2C_3+6C_3C_2-4C_4,$$
    $$L_4=16C_2^{4}+9C_3^{2}+8C_2C_4+8C_4C_2-12C_2^{2}C_3-12C_2C_3C_2-12C_3C_2^{2}-5C_5.$$

    Using (2.5) and (2.7), we obtain

    $$y^{(k)}-\nu^{*}=\frac{1}{3}\kappa^{(k)}+\frac{2}{3}C_2(\kappa^{(k)})^{2}-\frac{4}{3}\left(C_2^{2}-C_3\right)(\kappa^{(k)})^{3}+\frac{2}{3}\left(4C_2^{3}-7C_2C_3+3C_4\right)(\kappa^{(k)})^{4}+O\left((\kappa^{(k)})^{5}\right).\qquad(2.8)$$

    Thus,

    $$\Gamma(y^{(k)})=\Gamma'(\nu^{*})\left(\frac{1}{3}\kappa^{(k)}+\frac{7}{9}C_2(\kappa^{(k)})^{2}+\left(\frac{37}{27}C_3-\frac{8}{9}C_2^{2}\right)(\kappa^{(k)})^{3}+\left(\frac{20}{9}C_2^{3}-\frac{32}{9}C_2C_3+\frac{163}{81}C_4\right)(\kappa^{(k)})^{4}\right)+O\left((\kappa^{(k)})^{5}\right).\qquad(2.9)$$

    The first derivative of $\Gamma(y^{(k)})$ can be found as

    $$\Gamma'(y^{(k)})=\Gamma'(\nu^{*})\left(I+\frac{2}{3}C_2\kappa^{(k)}+\left(\frac{4}{3}C_2^{2}+\frac{1}{3}C_3\right)(\kappa^{(k)})^{2}+2\left(-\frac{4}{3}C_2^{3}+2C_2C_3+\frac{2}{27}C_4\right)(\kappa^{(k)})^{3}\right)+O\left((\kappa^{(k)})^{4}\right).\qquad(2.10)$$

    From the identity $[\Gamma'(y^{(k)})]^{-1}\Gamma'(y^{(k)})=I$, the inverse of $\Gamma'(y^{(k)})$ can be computed:

    $$[\Gamma'(y^{(k)})]^{-1}=\left(I+\Lambda_1\kappa^{(k)}+\Lambda_2(\kappa^{(k)})^{2}+\Lambda_3(\kappa^{(k)})^{3}\right)[\Gamma'(\nu^{*})]^{-1}+O\left((\kappa^{(k)})^{4}\right),\qquad(2.11)$$

    where

    $$\Lambda_1=-\frac{2}{3}C_2,\qquad \Lambda_2=-\frac{8}{9}C_2^{2}-\frac{1}{3}C_3,\qquad \Lambda_3=\frac{112}{27}C_2^{3}-\frac{32}{9}C_2C_3-\frac{4}{27}C_4.$$

    According to (2.6) and (2.11), we obtain

    $$\lambda=[\Gamma'(y^{(k)})]^{-1}\Gamma'(\nu^{(k)})=I+\frac{4}{3}C_2\kappa^{(k)}+\frac{2}{3}\left(4C_3-\frac{10}{3}C_2^{2}\right)(\kappa^{(k)})^{2}+\frac{8}{3}\left(\frac{8}{9}C_2^{3}-\frac{7}{3}C_2C_3+\frac{13}{9}C_4\right)(\kappa^{(k)})^{3}+O\left((\kappa^{(k)})^{4}\right).\qquad(2.12)$$

    Thus,

    $$G(\lambda)=I+C_2\kappa^{(k)}+\left(2C_3-C_2^{2}\right)(\kappa^{(k)})^{2}+e(\beta)(\kappa^{(k)})^{3}+O\left((\kappa^{(k)})^{4}\right),\qquad(2.13)$$

    where $e(\beta)=-2\left(\left(\frac{32}{81}\beta+\frac{2}{9}\right)C_2^{3}+C_2C_3-\frac{13}{9}C_4\right)$. Substituting the above expansions into the family (2.1), the error equation is obtained:

    $$\kappa^{(k+1)}=\frac{1}{81}\left((117+64\beta)C_2^{3}-81C_2C_3+9C_4\right)(\kappa^{(k)})^{4}+O\left((\kappa^{(k)})^{5}\right).\qquad(2.14)$$

    This section explores the family of iterative methods (2.1), focusing on whether its stability depends on the value of the parameter $\beta$. To analyze this question, the stability of the method is studied on the real dynamical plane using $n$-dimensional quadratic polynomials. When conducting the stability analysis of the family (2.1), we strictly follow the rigorous multidimensional dynamics methodology; the determination of convergence refers to the procedures detailed in [5,6,7], which ensures the logical rigor of the analysis and provides solid theoretical support for the stability study of the family (2.1).

    Applying the iterative method (2.1) to an $n$-dimensional polynomial $p(\nu)$, $p:\mathbb{R}^{n}\to\mathbb{R}^{n}$, yields a vector rational function $E:\mathbb{R}^{n}\to\mathbb{R}^{n}$. Given an initial point $\nu^{(0)}\in\mathbb{R}^{n}$, its orbit is defined as the set $R:\{\nu^{(0)},E(\nu^{(0)}),E^{2}(\nu^{(0)}),\dots,E^{n}(\nu^{(0)}),\dots\}$.

    It is evident that distinct asymptotic dynamics are exhibited by initial points in the $n$-dimensional space. These behaviors can be systematically categorized as follows: a point $\bar\nu$ is a fixed point of $E$ if $E(\bar\nu)=\bar\nu$. In particular, if $\bar\nu$ satisfies $E(\bar\nu)=\bar\nu$ but is not a root of $p(\nu)$, then $\bar\nu$ is called a strange fixed point. Furthermore, if a point $\bar\nu\in\mathbb{R}^{n}$ satisfies $E^{k}(\bar\nu)=\bar\nu$ and $E^{i}(\bar\nu)\neq\bar\nu$ for $i=1,2,\dots,k-1$, it is called a $k$-periodic point; see [8,9,10].

    Theorem 3.1. Consider the function $E:\mathbb{R}^{n}\to\mathbb{R}^{n}$ and let $\bar\nu$ be a fixed point of $E$ (a $k$-periodic point is treated as a fixed point of $E^{k}$). Denote the eigenvalues of the Jacobian matrix $E'(\bar\nu)$ by $\xi_1,\xi_2,\dots,\xi_n$; then:

    (1) $\bar\nu$ is superattracting if $|\xi_i|=0$, for all $i=1,2,\dots,n$.

    (2) $\bar\nu$ is attracting if $|\xi_i|<1$, for all $i=1,2,\dots,n$.

    (3) $\bar\nu$ is neutral if $|\xi_i|=1$, for all $i=1,2,\dots,n$.

    (4) $\bar\nu$ is repelling if $|\xi_i|>1$, for all $i=1,2,\dots,n$.

    (5) $\bar\nu$ is a hyperbolic or saddle point if $|\xi_i|\neq1$ for all $i=1,2,\dots,n$, with some eigenvalues of modulus less than one and others of modulus greater than one.

    The basin of attraction of a fixed point $\tilde\nu$ is the set of points whose orbits tend towards $\tilde\nu$, denoted by $A(\tilde\nu)$:

    $$A(\tilde\nu)=\left\{\nu^{(0)}\in\mathbb{R}^{n}:\ E^{m}(\nu^{(0)})\to\tilde\nu,\ m\to\infty\right\}.\qquad(3.1)$$

    Finally, the critical points are the points $\nu\in\mathbb{R}^{n}$ satisfying $\frac{\partial r_i(\nu)}{\partial\nu_i}=0$ for all $i=1,2,\dots,n$, where $r_i$ denotes the $i$-th coordinate function of $E$. From item (1) of Theorem 3.1, it follows that superattracting fixed points also satisfy the definition of critical points.

    This part relates the family of iterative methods (2.1) to polynomial systems. From the perspective of two-dimensional space, the basins of attraction on the real dynamical plane are visualized graphically, and the dynamical properties of two-dimensional polynomials are generalized to $n$-dimensional polynomials. The system under scrutiny in the stability analysis is given by the following polynomial:

    $$p_j(\nu)=\nu_j^{2}-1,\qquad j=1,2,\dots,n.\qquad(3.2)$$

    By applying the family (2.1) to the quadratic polynomial $p(\nu)$, one obtains a vector rational operator of the following form:

    $$V(\nu,\beta)=\begin{bmatrix}m_1(\nu,\beta)\\ \vdots\\ m_n(\nu,\beta)\end{bmatrix},\qquad(3.3)$$

    where

    $$m_j(\nu,\beta)=\frac{8\beta\left(\nu_j^{2}-1\right)^{4}+9\nu_j^{2}\left(5+31\nu_j^{2}+91\nu_j^{4}+17\nu_j^{6}\right)}{144\,\nu_j^{3}\left(1+2\nu_j^{2}\right)^{2}},\qquad j=1,2,\dots,n.\qquad(3.4)$$

    The stability of the fixed points of $V(\nu,\beta)$ will be summarized in Theorem 3.2 next.

    Theorem 3.2. The operator $V(\nu,\beta)$ associated with the iterative method (2.1) has $2^{n}$ superattracting fixed points, whose components are the roots of $p(\nu)$. Setting $V(\nu,\beta)=\nu$, the real solutions are the fixed points; besides the roots of $p(\nu)$, their components are the real roots ($t\neq0$) of the $\beta$-dependent polynomial $l(t)=8\beta(t^{2}-1)^{3}-9t^{2}(5+20t^{2}+47t^{4})$:

    (1) If $\beta<0$, there are two real roots, $l_1(\beta)$ and $l_2(\beta)$.

    (2) If $0\le\beta<\frac{423}{8}$, there is no real root.

    (3) If $\beta\ge\frac{423}{8}$, there are two real roots, $l_3(\beta)$ and $l_4(\beta)$.

    Proof. The fixed points are the solutions of the rational equation $V(\nu,\beta)=\nu$, which reduces componentwise to

    $$\frac{\left(\nu_j^{2}-1\right)\left(8\beta\left(\nu_j^{2}-1\right)^{3}-9\nu_j^{2}\left(5+20\nu_j^{2}+47\nu_j^{4}\right)\right)}{144\,\nu_j^{3}\left(1+2\nu_j^{2}\right)^{2}}=0,\qquad j=1,2,\dots,n.\qquad(3.5)$$

    It therefore suffices to solve $\left(\nu_j^{2}-1\right)\left(8\beta\left(\nu_j^{2}-1\right)^{3}-9\nu_j^{2}\left(5+20\nu_j^{2}+47\nu_j^{4}\right)\right)=0$. Clearly, $\nu_j=\pm1$, the roots of $p_j(\nu)=\nu_j^{2}-1$, are always components of fixed points. The remaining candidates are the roots of $l(t)$ with $t\neq0$; $l(t)$ has six roots, of which at most two are real, depending on $\beta$ as stated.

    Since $l(t)$ has degree six, denote its roots by $l_i(\beta)$, $i=1,2,\dots,6$; how many of them are real depends on the parameter $\beta$. The stability of a fixed point of $V(\nu,\beta)$ is determined by the eigenvalues of the Jacobian matrix of $V$ at that fixed point. Owing to the componentwise structure of the polynomial system, these eigenvalues coincide with the partial derivatives of the coordinate functions of the rational operator:

    $$\mathrm{Eig}_j\left(l_j(\beta),\dots,l_j(\beta)\right)=\frac{u_j(\beta)\left(8\beta\,(u_j(\beta))^{3}-9\,(l_j(\beta))^{2}\,\alpha_j(\beta)\right)}{144\,(l_j(\beta))^{3}\,(h_j(\beta))^{2}},\qquad(3.6)$$

    where $u_j(\beta)=(l_j(\beta))^{2}-1$, $\alpha_j(\beta)=5+20(l_j(\beta))^{2}+47(l_j(\beta))^{4}$, and $h_j(\beta)=1+2(l_j(\beta))^{2}$. In real space, the absolute values of the eigenvalues at the fixed points are considered; the fixed points with components $\pm1$ are superattracting.
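These statements can be checked numerically. The sketch below uses our restored readings of the scalar component (3.4) and of $l(t)$; rewriting $l$ in $s=t^{2}$ is our own substitution.

```python
import numpy as np

def m(x, beta):
    # scalar component (3.4) of the rational operator V
    num = 8.0 * beta * (x**2 - 1.0)**4 \
          + 9.0 * x**2 * (5.0 + 31.0 * x**2 + 91.0 * x**4 + 17.0 * x**6)
    den = 144.0 * x**3 * (1.0 + 2.0 * x**2)**2
    return num / den

def l_has_real_root(beta):
    # l(t) = 8b(t^2-1)^3 - 9t^2(5 + 20t^2 + 47t^4); substitute s = t^2 > 0,
    # so strange fixed points exist iff the cubic in s has a positive real root.
    coeffs = [8.0 * beta - 423.0, -24.0 * beta - 180.0,
              24.0 * beta - 45.0, -8.0 * beta]
    roots = np.roots(coeffs)
    return any(abs(r.imag) < 1e-9 and r.real > 1e-9 for r in roots)
```

One can verify that $m(\pm1,\beta)=\pm1$ for every $\beta$, that the derivative of $m$ vanishes at $\pm1$ (superattraction), and that $l$ acquires nonzero real roots exactly for $\beta<0$ or $\beta\ge 423/8$, matching the three cases of Theorem 3.2.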

    In this section, to achieve optimal stability of the iterative method, we must determine the most favorable values of the parameter $\beta$. For this purpose, the critical points are obtained from the first-order derivative (the Jacobian) of the rational function $V(\nu,\beta)$. A free critical point is a root of $V'(\nu,\beta)=0$ that does not satisfy $p(\nu)=0$.

    Theorem 3.3. The free critical points $(cr_1,cr_2,cr_3)$ are the solutions of $V'(\nu,\beta)=0$ that do not satisfy the polynomial equation $p(\nu)=0$, that is:

    (1) If $-\infty<\beta<0$, the components of the free critical points are $cr_1=r$ and $cr_2=-r$, where $r>0$ and $r^{2}$ is the positive root of the polynomial

    $$k(\sigma)=24\beta+(45+152\beta)\sigma+(306+16\beta)\sigma^{2}.\qquad(3.7)$$

    (2) If $\beta=0$, there is a free critical point $cr_3=0$.

    (3) If $\beta>0$, there do not exist free critical points.

    Proof. Setting $V'(\nu,\beta)=0$ componentwise, the partial derivative factorizes, and the free critical points come from the factor other than $p_j(\nu)$:

    $$\frac{\partial m_j(\nu_j,\beta)}{\partial\nu_j}=\frac{(p_j(\nu))^{3}\left[9\nu_j^{2}\left(5+34\nu_j^{2}\right)+8\beta\left(3+19\nu_j^{2}+2\nu_j^{4}\right)\right]}{144\,\nu_j^{4}\left(1+2\nu_j^{2}\right)^{3}}.\qquad(3.8)$$

    Therefore, the free critical points are the roots of $9\nu_j^{2}(5+34\nu_j^{2})+8\beta(3+19\nu_j^{2}+2\nu_j^{4})=24\beta+(45+152\beta)\nu_j^{2}+(306+16\beta)\nu_j^{4}$; when these roots are real, they are the components of the free critical points. Letting $\sigma=\nu_j^{2}$, the free critical points are obtained by solving $k(\sigma)=0$.

    To visualize the dynamical behavior of the iterative algorithms, this study draws the basins of attraction of the family for different parameter values $\beta$. By careful observation of the dynamical plane, we can evaluate the stability of specific members of the family. This approach not only reveals the sensitivity of the algorithm to parameter changes, but also provides an intuitive basis for identifying parameter configurations with good stability. In Figure 1, we show the set of initial iterates of the family (2.1), defined by the $X_1$-axis and the $X_2$-axis of a two-dimensional coordinate system. Each point $(x_1,x_2)$ on this plane represents an initial guess, from which the iterative process evolves according to the defined mapping rules. By systematically analyzing these initial points, we can explore the dynamical behavior and convergence characteristics of each member of the family (2.1). Within this framework, the basin of an attracting fixed point encompasses all points whose trajectories tend to it; the dynamical evolution of these points ultimately stabilizes at a fixed point of the system, which is a key indicator in the stability analysis. The existence of strange fixed points, and the dependence of the free critical points on the parameter $\beta$, are determined through Theorems 3.2 and 3.3. To assess stability across the members of the family (2.1), distinct parameter values are selected. We use graphical visualization tools to construct a $400\times400$ grid for the dynamical plane, where each grid point is an initial estimate $(x_1,x_2)\in[-2,2]\times[-2,2]$.

    Figure 1.  Dynamical planes for different values of β.

    In order to achieve intuitive visualization on the dynamical plane, we define specific convergence criteria: the orbit of a point is regarded as convergent when it satisfies $\|(x_1,x_2)-(\pm1,\pm1)\|<10^{-3}$, and the attracting fixed points are marked with white asterisks. A point that has not converged to a root of the polynomial $p(\nu)$ after 50 iterations is regarded as non-convergent. Furthermore, when the trajectory of a point converges to one of the roots of $p(\nu)$, the point is colored blue, green, orange, or red on the plane according to its root; points that fail to converge to any root are shown in black. This facilitates a detailed analysis and interpretation of the dynamical behavior, as described in [11,12,13,14]. Through this color coding, we can clearly identify points with different convergence properties and thereby gain a deeper understanding of the structure and behavior of the dynamical system.

    Next, the basins of attraction on the dynamical plane are shown in Figure 1. Figures 1(a) and 1(b) correspond to the dynamical planes for $\beta=-5$ and $\beta=-1$, respectively; the convergence regions in these two figures are irregular, and the convergence behavior is not as good as in Figures 1(c) and 1(d). Figure 1(c) corresponds to the dynamical plane for $\beta=1$, and Figure 1(d) to $\beta=3$; neither has black non-convergence regions. The last two figures, Figures 1(e) and 1(f), correspond to $\beta=55$ and $\beta=60$, respectively; both contain obvious black non-convergence areas, especially within $(x_1,x_2)\in[-0.5,0.5]\times[-0.5,0.5]$ in Figure 1(f). Therefore, according to Figure 1, when $\beta<0$ or $\beta\ge\frac{423}{8}$ there are additional attractors and non-convergence regions, indicating poor stability. Finally, for $0\le\beta<\frac{423}{8}$ there is neither a non-convergence region nor another attractor on the dynamical plane, and the iterative method is most stable.
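The dynamical-plane experiment can be reproduced in a few lines. Since the operator acts componentwise on $p(\nu)$, the 2-D basins in Figure 1 are Cartesian products of 1-D basins, so it suffices to iterate the scalar operator on a 1-D grid (a sketch using the text's grid size, iteration cap, and tolerance; the componentwise reduction is our own shortcut).

```python
import numpy as np

def m(x, beta):
    # scalar component (3.4) of the rational operator, restored reading
    num = 8.0 * beta * (x**2 - 1.0)**4 \
          + 9.0 * x**2 * (5.0 + 31.0 * x**2 + 91.0 * x**4 + 17.0 * x**6)
    den = 144.0 * x**3 * (1.0 + 2.0 * x**2)**2
    return num / den

def converged_fraction(beta, n=400, iters=50, tol=1e-3):
    """Fraction of initial points on [-2, 2] whose orbit reaches +1 or -1."""
    x = np.linspace(-2.0, 2.0, n)
    done = np.zeros(n, dtype=bool)
    for _ in range(iters):
        with np.errstate(all="ignore"):     # points hitting the pole produce inf/nan
            x = m(x, beta)
        done |= np.minimum(np.abs(x - 1.0), np.abs(x + 1.0)) < tol
    return done.mean()

f3, f60 = converged_fraction(3.0), converged_fraction(60.0)
```

For $\beta=3$ essentially the whole grid converges, while $\beta=60$ leaves a band of non-convergent points near the origin, mirroring Figures 1(d) and 1(f).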

    To select the most effective method according to computational cost and number of function evaluations, we compare the efficiency of several fourth-order iterative methods. We use the efficiency index $I=p^{1/k}$ defined by Cordero [15], where $p$ is the order of the method and $k$ is the number of new function evaluations required per iteration. The computational cost of solving the linear systems depends on their size. Since the proposed methods can be used to solve large-scale systems, we also compare the methods using the computational efficiency index introduced in [16], namely $CT=p^{1/(k+op)}$, where $op$ is the number of arithmetic operations (products, sums, and quotients) per iteration and $CL=k+op$ denotes the total computational cost. Next, Table 1 lists the computational cost per iteration for an $n$-dimensional system.

    Table 1.  Comparison of efficiency indices of iterative methods S1–S4.

    Method | $p$ | $k$ | $op$ | $CL=k+op$ | $CT$
    S1 | 4 | $2n^{2}+n$ | $\frac{2}{3}n^{3}+4n^{2}-\frac{2}{3}n$ | $\frac{2}{3}n^{3}+6n^{2}+\frac{1}{3}n$ | $4^{1/\left(\frac{2}{3}n^{3}+6n^{2}+\frac{1}{3}n\right)}$
    S2 | 4 | $\frac{3}{2}n^{2}+\frac{1}{2}n$ | $\frac{2}{3}n^{3}+6n^{2}-\frac{2}{3}n$ | $\frac{2}{3}n^{3}+\frac{15}{2}n^{2}-\frac{1}{6}n$ | $4^{1/\left(\frac{2}{3}n^{3}+\frac{15}{2}n^{2}-\frac{1}{6}n\right)}$
    S3 | 4 | $\frac{3}{2}n^{2}+\frac{1}{2}n$ | $\frac{2}{3}n^{3}+8n^{2}-\frac{2}{3}n$ | $\frac{2}{3}n^{3}+\frac{19}{2}n^{2}-\frac{1}{6}n$ | $4^{1/\left(\frac{2}{3}n^{3}+\frac{19}{2}n^{2}-\frac{1}{6}n\right)}$
    S4 | 4 | $2n^{2}+n$ | $\frac{2}{3}n^{3}+4n^{2}-\frac{2}{3}n$ | $\frac{2}{3}n^{3}+6n^{2}+\frac{1}{3}n$ | $4^{1/\left(\frac{2}{3}n^{3}+6n^{2}+\frac{1}{3}n\right)}$


    Figure 2 shows the efficiency indices of the iterative methods S1–S4 from Table 1 for different values of $n$. It is evident from Figure 2 that the efficiency of methods S1 and S4 is significantly higher than that of the other iterative methods of the same order. Although the efficiency of all four methods decreases as $n$ increases, S1 and S4 remain more efficient than the other two.

    Figure 2.  According to Table 1, methods S1–S4 are compared for efficiency at different n values.
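The comparison behind Figure 2 can be reproduced directly from the cost polynomials of Table 1 (a sketch; the dictionary keys and exact rational arithmetic are our own choices).

```python
from fractions import Fraction as Fr

# Total cost CL = k + op from Table 1, and efficiency index CT = p**(1/CL), p = 4.
def CL(method, n):
    table = {
        "S1": Fr(2, 3) * n**3 + 6 * n**2 + Fr(1, 3) * n,
        "S2": Fr(2, 3) * n**3 + Fr(15, 2) * n**2 - Fr(1, 6) * n,
        "S3": Fr(2, 3) * n**3 + Fr(19, 2) * n**2 - Fr(1, 6) * n,
        "S4": Fr(2, 3) * n**3 + 6 * n**2 + Fr(1, 3) * n,
    }
    return table[method]

def CT(method, n):
    return 4.0 ** (1.0 / float(CL(method, n)))
```

For every $n$, $CT(\mathrm{S1})=CT(\mathrm{S4})>CT(\mathrm{S2})>CT(\mathrm{S3})$, which is exactly the ordering seen in Figure 2.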

    In this section, we apply the iterative method (2.1) to solve several nonlinear problems. We analyze the computational accuracy and convergence of the iterative process under different parameter values $\beta$; this both verifies the optimal stability range of the parameter obtained in Section 4 and evaluates the performance of the method (2.1). In Tables 2–4, iter, $\|\nu^{(k+1)}-\nu^{(k)}\|$, time, and $\|\Gamma(\nu^{(k+1)})\|$ denote the number of iterations, the error value, the computing time, and the residual of the final function value, respectively. Using the approximate computational order of convergence (ACOC) described in [17], the convergence rate of the iterative method is estimated numerically as

    $$\rho\approx\frac{\ln\left(\|\nu^{(k+1)}-\nu^{(k)}\|/\|\nu^{(k)}-\nu^{(k-1)}\|\right)}{\ln\left(\|\nu^{(k)}-\nu^{(k-1)}\|/\|\nu^{(k-1)}-\nu^{(k-2)}\|\right)},\qquad k=2,3,\dots\qquad(6.1)$$

    Every numerical calculation follows the termination criterion $\|\nu^{(k+1)}-\nu^{(k)}\|<10^{-100}$ or $\|\Gamma(\nu^{(k+1)})\|<10^{-100}$.
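Formula (6.1) can be exercised on a scalar instance of (2.1) in exact rational arithmetic (the test equation $x^{2}-1=0$ is our own, and we use errors to the root in place of successive differences; the estimated order should approach 4):

```python
from fractions import Fraction
from math import log

def step(x, beta):
    # one scalar step of (2.1)-(2.2) for f(x) = x^2 - 1, in exact rationals
    f, df = x * x - 1, 2 * x
    newton = f / df
    y = x - Fraction(2, 3) * newton
    lam = df / (2 * y)                       # [f'(y)]^{-1} f'(x)
    G = ((Fraction(5, 8) - beta) + beta * lam + (beta / 3) / lam
         + (Fraction(3, 8) - beta / 3) * lam * lam)
    return x - G * newton

beta, x = Fraction(9, 8), Fraction(3, 2)
errs = []
for _ in range(4):
    x = step(x, beta)
    errs.append(abs(float(x - 1)))           # error |x_k - 1| at each iteration

# ACOC in the spirit of (6.1), from the last three errors
acoc = log(errs[-1] / errs[-2]) / log(errs[-2] / errs[-3])
```

Exact rationals avoid the double-precision floor (the fourth error is around $10^{-155}$), so the computed order comes out very close to 4.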

    Table 2.  Numerical results for Example 6.1.
    Method iter ν(k+1)ν(k) Γ(ν(k+1)) ACOC time
    M45 5 8.024e-279 1.382e-1116 4.00015 0.312
    M41 5 4.449e-259 1.217e-1036 4.00018 0.921
    M41 5 2.365e-235 1.442e-941 4.00018 0.362
    M43 5 2.980e-221 4.822e-885 4.00017 0.328
    M47 5 3.169e-203 9.208e-813 4.00017 0.593
    M455 6 2.850e-558 2.990e-2232 4.00001 0.812
    M460 6 3.212e-546 5.220e-2184 4.00001 0.812
    S1 5 2.738e-234 2.641e-937 4.00018 0.265
    S2 6 7.374e-693 5.592e-2771 4.00000 0.453
    S3 6 3.658e-392 8.622e-1179 4.00000 0.968
    S4 5 3.017e-245 3.196e-981 4.00000 0.390

    Table 3.  Numerical results for Example 6.2.
    Method iter ν(k+1)ν(k) Γ(ν(k+1)) ACOC time
    M45 5 1.383e-177 1.006e-713 4.00010 6.203
    M41 5 1.121e-257 1.133e-1034 4.00000 6.078
    M41 5 1.556e-191 1.439e-769 4.00004 5.859
    M43 5 2.043e-169 7.285e-681 4.00006 5.937
    M47 5 9.120e-146 5.294e-586 4.00008 6.046
    M455 6 9.410e-01 5.358e-1222 4.00012 6.750
    M460 6 9.974e-293 5.304e-1173 4.00009 7.218
    S1 5 1.103e-189 3.785e-762 4.00018 5.062
    S2 5 2.950e-225 8.306e-905 4.00000 5.765
    S3 6 2.228e-784 9.919e-01 4.00000 9.781
    S4 5 2.981e-168 2.859e-848 4.00000 6.031


    Example 6.1. Consider a Hammerstein-type integral equation, which demonstrates the application of the theoretical results. The Hammerstein-type nonlinear integral equation is a special case of the Urysohn-type integral equation. The Hammerstein equation takes the form:

    $$\nu(\iota)=1+\frac{1}{3}\int_{0}^{1}\Theta(\iota,\omega)\,\nu(\omega)^{3}\,d\omega,\qquad(6.2)$$

    where $\nu\in C[0,1]$, $\iota,\omega\in[0,1]$, with the kernel $\Theta$ given by

    $$\Theta(\iota,\omega)=\begin{cases}(1-\iota)\,\omega, & \text{if } \omega\le\iota,\\ (1-\omega)\,\iota, & \text{if } \iota<\omega.\end{cases}\qquad(6.3)$$

    By using discretization techniques, Eq (6.2) is transformed into a nonlinear system that is easy to solve. Subsequently, this study uses Gauss–Legendre numerical integration to approximate the integral in Eq (6.2):

    $$\int_{0}^{1}\nu(\omega)^{3}\,d\omega\approx\sum_{i=1}^{7}\Psi_i\,\nu(\omega_i)^{3}.\qquad(6.4)$$

    In this study, $\omega_i$ and $\Psi_i$ are the nodes and weights of the Gauss–Legendre quadrature, respectively. Denoting the approximation of $\nu(\omega_i)$ by $\nu_i$, for $i=1,2,\dots,7$, Eq (6.2) yields the nonlinear system

    $$\nu_i=1+\frac{1}{3}\sum_{j=1}^{7}\xi_{ij}\,\nu_j^{3},\qquad(6.5)$$

    where

    $$\xi_{ij}=\begin{cases}\Psi_j\,\omega_j(1-\omega_i), & \text{if } j\le i,\\ \Psi_j\,\omega_i(1-\omega_j), & \text{if } i<j.\end{cases}\qquad(6.6)$$

    Rewriting the system in vector form gives

    $$\Phi(\nu)=\nu-\mathbf{1}-\frac{1}{3}K h_\nu,\qquad h_\nu=(\nu_1^{3},\nu_2^{3},\dots,\nu_7^{3})^{T},\qquad \Phi'(\nu)=I-K M(\nu),\qquad M(\nu)=\mathrm{diag}(\nu_1^{2},\nu_2^{2},\dots,\nu_7^{2}),\qquad(6.7)$$

    where $K=(\xi_{ij})$. According to (2.1), we use $\Phi$ to solve the corresponding nonlinear system; this involves computing the derivative of the nonlinear operator and applying it in the solution process. Taking $\nu^{(0)}=(1.5,1.5,1.5,1.5,1.5,1.5,1.5)^{T}$, we obtain the numerical results in Table 2.
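The discretization (6.5)–(6.7) is easy to reproduce. The sketch below builds the 7-point Gauss–Legendre data on $[0,1]$ and solves the system with plain Newton iteration as a method-independent check of the setup (Newton is our stand-in here, not the paper's method):

```python
import numpy as np

t, w = np.polynomial.legendre.leggauss(7)   # nodes/weights on [-1, 1]
omega = 0.5 * (t + 1.0)                     # nodes mapped to [0, 1]
psi = 0.5 * w                               # weights mapped to [0, 1]

xi = np.empty((7, 7))                       # kernel weights (6.6)
for i in range(7):
    for j in range(7):
        xi[i, j] = psi[j] * (omega[j] * (1.0 - omega[i]) if j <= i
                             else omega[i] * (1.0 - omega[j]))

Phi = lambda v: v - 1.0 - (1.0 / 3.0) * xi @ v**3      # (6.7)
dPhi = lambda v: np.eye(7) - xi @ np.diag(v**2)

v = np.full(7, 1.5)                          # nu^(0) = (1.5, ..., 1.5)^T
for _ in range(15):
    v -= np.linalg.solve(dPhi(v), Phi(v))
```

The kernel is small ($\sum_j\xi_{ij}\le 1/8$), so the iteration converges quickly to a solution with all components slightly above 1.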

    Example 6.2. Boundary-value problem:

    $$z''(\vartheta)=z(\vartheta)^{2},\qquad \vartheta\in[0,1],\qquad(6.8)$$
    $$z(0)=0,\qquad z(1)=1.\qquad(6.9)$$

    This study uses the finite difference method to numerically discretize the first and second derivatives in the discussed problem. Through this method, we convert continuous derivative operations into discrete algebraic expressions, making numerical calculations and solutions easier.

    $$z_k''=\frac{z_{k+1}-2z_k+z_{k-1}}{m^{2}},\qquad k=1,2,3,\dots,n-1,\qquad(6.10)$$

    and

    $$z_k'=\frac{z_{k+1}-z_{k-1}}{2m},\qquad k=1,2,3,\dots,n-1.\qquad(6.11)$$

    To split the interval $[0,1]$ into $n$ smaller intervals, we first define the grid points $0=\vartheta_0<\vartheta_1<\dots<\vartheta_{n-1}<\vartheta_n=1$. The partition is uniform, meaning that each subinterval has the same length $m=\frac{1}{n}$. On this uniform grid, we obtain the following nonlinear system:

    $$z_{k-1}-2z_k+z_{k+1}-m^{2}z_k^{2}=0,\qquad k=1,2,3,\dots,n-1.\qquad(6.12)$$

    In practical applications, appropriate numerical methods and computational strategies should be chosen according to the nature and requirements of the specific problem, so that the solution of the nonlinear system is both efficient and accurate. The solution $(0.01073, 0.02147, 0.03220,\dots, 0.9928)^{T}$ is obtained from the initial value $\vartheta^{(0)}=(1.5,1.5,\dots,1.5)^{T}$ for $n=90$. Table 3 displays the numerical results.
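A compact sketch of the discrete problem (6.12) with $n=90$; Newton iteration is used as a stand-in solver, and the variable names are ours:

```python
import numpy as np

n = 90
m = 1.0 / n                                    # uniform mesh width

def residual(z):
    zf = np.concatenate(([0.0], z, [1.0]))     # boundary values z(0)=0, z(1)=1
    return (zf[:-2] - 2.0 * zf[1:-1] + zf[2:]) / m**2 - z**2   # (6.12) / m^2

def jacobian(z):
    lap = (np.diag(np.full(n - 2, 1.0), -1) - 2.0 * np.eye(n - 1)
           + np.diag(np.full(n - 2, 1.0), 1)) / m**2
    return lap - np.diag(2.0 * z)

z = np.full(n - 1, 1.5)                        # initial guess (1.5, ..., 1.5)^T
for _ in range(20):
    z -= np.linalg.solve(jacobian(z), residual(z))
```

The first component of the computed solution is approximately 0.01073, matching the value reported in the text.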

    Example 6.3. Let us examine the following system of nonlinear equations (excerpted from Grau-Sánchez et al.), which is defined by the function:

    $$G_i(t)=\sum_{j=1,\,j\neq i}^{3}t_j-e^{-t_i},\qquad 1\le i\le3,\qquad(6.13)$$

    where we choose $n=3$ and use $\nu^{(0)}$ to represent the initial guess. The solution to this problem is $\omega\approx(0.5671433, 0.5671433, 0.5671433)^{T}$, and we select the initial estimate $(0.5,0.5,0.5)^{T}$. The numerical performance of the iterative method under different parameter values is shown in Table 4.

    Table 4.  Numerical results for Example 6.3.
    Method iter ν(k+1)ν(k) Γ(ν(k+1)) ACOC time
    M45 5 4.918e-192 1.575e-767 4.00000 0.078
    M41 5 6.281e-230 1.261e-919 4.000000 0.093
    M41 5 4.283e-294 4.396e-1177 4.00000 0.062
    M43 5 7.725e-240 3.816e-959 4.00000 0.062
    M47 5 3.167e-195 2.973e-780 4.00000 0.234
    M455 5 1.409e-191 1.275e-765 4.00000 0.109
    M460 5 1.189e-179 8.728e-718 4.00000 0.046
    S1 5 1.712e-252 2.351e-1200 4.00000 0.031
    S2 5 1.335e-215 2.953e-862 4.00000 0.046
    S3 5 1.013e-264 2.748e-1059 4.00000 0.078
    S4 5 5.935e-300 2.920e-1010 4.00000 0.045


    Example 6.4. Nonlinear PDE problem [18]:

    $$\eta_{xx}=\eta_{\varsigma}+\eta_{x}+\eta^{2}+f(x,\varsigma),\qquad x\in[0,1],\ \varsigma\ge0,\qquad \eta(0,\varsigma)=\eta(1,\varsigma)=0,\qquad(6.14)$$
    $$f(x,\varsigma)=e^{-\varsigma}\left(\pi\cos(\pi x)-(\pi^{2}-2)\sin(\pi x)\right),\qquad(6.15)$$

    and the initial condition is

    $$\eta(x,0)=\sin(\pi x).\qquad(6.16)$$

    In numerical analysis, this problem is discretized into a nonlinear system via finite differences. The partial differential equation (PDE) under consideration is a heat conduction problem. Specifically, the spatial domain $[0,1]$ and the temporal domain $[0,T]$ are partitioned into $N$ uniform subintervals, yielding step sizes $\varepsilon=\frac{1}{N}$ in space and $\theta=\frac{T}{N}$ in time. Let $\eta=\eta(x,\varsigma)$ denote the exact solution, a function of the spatial variable $x$ and the temporal variable $\varsigma$. Using the finite difference method, we employ the approximations $\eta_{i,j}\approx\eta(x_i,\varsigma_j)$, $\eta_x(x,\varsigma)\approx\frac{\eta(x+\varepsilon,\varsigma)-\eta(x-\varepsilon,\varsigma)}{2\varepsilon}$, $\eta_\varsigma(x,\varsigma)\approx\frac{\eta(x,\varsigma)-\eta(x,\varsigma-\theta)}{\theta}$, and $\eta_{xx}(x,\varsigma)\approx\frac{\eta(x+\varepsilon,\varsigma)-2\eta(x,\varsigma)+\eta(x-\varepsilon,\varsigma)}{\varepsilon^{2}}$ at the grid points. This yields the following nonlinear system:

    $$(2\theta+\theta\varepsilon)\,\eta_{i-1,j}+(-4\theta-2\varepsilon^{2})\,\eta_{i,j}+(2\theta-\theta\varepsilon)\,\eta_{i+1,j}-2\theta\varepsilon^{2}\eta_{i,j}^{2}+2\varepsilon^{2}\eta_{i,j-1}-2\theta\varepsilon^{2}f(x_i,\varsigma_j)=0,\qquad(6.17)$$

    for i=1,2,,N1 and j=1,2,,N, a series of nonlinear systems of size (N1)×(N1) are generated for each fixed j. By varying the values of N and T, the problem is solved using distinct numerical methods. The results are presented in Table 5, where iter denotes the number of iterations required for convergence, and Time represents the computational CPU time in seconds. The exact solution and the approximate solution of the problem are depicted in Figures 3 and 4, respectively, for the case where T=0.01 and N=10. Additionally, Figure 5 illustrates the absolute error for the problem.
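A minimal sketch of one time level of scheme (6.17), with $N=10$ and $T=0.01$ as in the text. Newton iteration is used as a stand-in solver, and our restored reading of $f$ is assumed, so the values are only indicative:

```python
import numpy as np

N = 10
eps, theta = 1.0 / N, 0.01 / N                # spatial and temporal step sizes
x = np.linspace(0.0, 1.0, N + 1)[1:-1]        # interior spatial nodes
f = lambda x, s: np.exp(-s) * (np.pi * np.cos(np.pi * x)
                               - (np.pi**2 - 2.0) * np.sin(np.pi * x))
eta_prev = np.sin(np.pi * x)                  # initial condition (6.16)

def residual(eta, s):
    full = np.concatenate(([0.0], eta, [0.0]))          # boundary conditions
    return ((2*theta + theta*eps) * full[:-2]
            + (-4*theta - 2*eps**2) * full[1:-1]
            + (2*theta - theta*eps) * full[2:]
            - 2*theta*eps**2 * eta**2
            + 2*eps**2 * eta_prev
            - 2*theta*eps**2 * f(x, s))                  # scheme (6.17)

def jacobian(eta):
    J = ((2*theta + theta*eps) * np.eye(N - 1, k=-1)
         + (-4*theta - 2*eps**2) * np.eye(N - 1)
         + (2*theta - theta*eps) * np.eye(N - 1, k=1))
    return J - np.diag(4*theta*eps**2 * eta)

eta = eta_prev.copy()
for _ in range(10):                            # solve the first time level, j = 1
    eta -= np.linalg.solve(jacobian(eta), residual(eta, theta))
```

The Jacobian is strictly diagonally dominant for this step-size choice, so the Newton iteration for each time level converges in very few steps, consistent with the iteration counts reported in Table 5.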

    Table 5.  Numerical results for Example 6.4.
    Method N iter Time
    S1 10 2 1.109
    S1 20 2 3.140
    S1 30 2 11.906
    S4 10 2 1.593
    S4 20 2 3.454
    S4 30 2 14.015

    Figure 3.  The exact solution of S1.
    Figure 4.  The approximate solution of S1.
    Figure 5.  The absolute error of S1.

    According to the observations in Tables 1–5, we can conclude that the convergence order of the iterative method (2.1) reaches or approaches fourth order for the different parameter values β. In particular, in the range 0 < β < 42/38, the iterative method shows high accuracy. To further identify the best-performing parameter of the iterative method (2.1), we compared the computation time of this method with that of two other fourth-order iterative methods. The experimental results show that for β = 9/8, that is, the iterative method S1, the computation time is shortest, the error is smallest, and the computational efficiency is best.
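The family behind method (2.1) descends from the classical fourth-order Jarratt scheme for systems. A minimal sketch of that classical scheme (without the paper's parametric weight function, whose exact form is not reproduced here) shows the structure the comparison methods share:

```python
import numpy as np

def jarratt_step(F, J, x):
    """One step of the classical fourth-order Jarratt scheme for
    systems: y = x - (2/3) F'(x)^{-1} F(x), then
    x_new = x - (1/2) [3F'(y) - F'(x)]^{-1} [3F'(y) + F'(x)] F'(x)^{-1} F(x)."""
    Fx, Jx = F(x), J(x)
    a = np.linalg.solve(Jx, Fx)          # Newton correction F'(x)^{-1} F(x)
    y = x - (2.0 / 3.0) * a
    Jy = J(y)
    M = 3.0 * Jy - Jx
    return x - 0.5 * np.linalg.solve(M, (3.0 * Jy + Jx) @ a)

# Hypothetical test system: x^2 + y^2 = 4, xy = 1
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])

x = np.array([2.0, 0.5])                 # initial guess near a root
for _ in range(6):
    x = jarratt_step(F, J, x)
```

Each step costs one function evaluation, two Jacobian evaluations, and two linear solves, which is the cost structure underlying the efficiency comparison above.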

    In this paper, a fourth-order Jarratt-type iterative method for solving nonlinear systems is proposed. The algorithm builds the iterative formula (2.1) around a parametric weight function, ensuring that the iteration achieves fourth-order convergence regardless of the parameter value. In the stability analysis, we computed the fixed points and critical points and used the critical points to analyze the dynamical planes. In the numerical experiments, we solved Hammerstein-type integral equations, boundary value problems, and partial differential equations. The results show that for parameters satisfying 0 ≤ β < 42/38, the iterative method exhibits excellent stability. In addition, a comparative experiment with two other fourth-order iterative methods shows that the method has clear advantages in computation time, cost, and efficiency when β = 9/8.

    Jia Yu: Writing - original draft, Writing - review and editing; Xiaofeng Wang: Writing - review and editing. All authors have read and agreed to the published version of the manuscript.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research was supported by the Educational Commission Foundation of Liaoning Province of China (No. LJ212410167008) and the Key Project of Bohai University (No. 0522xn078).

    The authors declare no conflicts of interest.



    [1] I. Bate, K. Senapati, S. George, M. Muniyasamy, G. Chandhini, Jarratt-type methods and their convergence analysis without using Taylor expansion, Appl. Math. Comput., 487 (2025), 129112. https://doi.org/10.1016/j.amc.2024.129112
    [2] A. Cordero, J. A. Reyes, J. R. Torregrosa, M. P. Vassileva, Stability analysis of a new fourth-order optimal iterative scheme for nonlinear equations, Axioms, 13 (2024), 1–24. https://doi.org/10.3390/axioms13010034
    [3] F. I. Chicharro, A. Cordero, N. Garrido, J. R. Torregrosa, Generalized high-order classes for solving nonlinear systems and their applications, Mathematics, 7 (2019), 1–14. https://doi.org/10.3390/math7121194
    [4] A. Cordero, A. Ledesma, J. G. Maimó, J. R. Torregrosa, Design and dynamical behavior of a fourth order family of iterative methods for solving nonlinear equations, AIMS Math., 9 (2024), 8564–8593. https://doi.org/10.3934/math.2024415
    [5] Y. T. Guo, Q. S. Wu, X. F. Wang, An extension of high-order Kou's method for solving nonlinear systems and its stability analysis, Electron. Res. Arch., 33 (2025), 1566–1588. https://doi.org/10.3934/era.2025074
    [6] R. A. Alharbey, M. Kansal, R. Behl, J. A. T. Machado, Efficient three-step class of eighth-order multiple root solvers and their dynamics, Symmetry, 11 (2019), 1–30. https://doi.org/10.3390/sym11070837
    [7] R. Thukral, M. S. Petković, A family of three-point methods of optimal order for solving nonlinear equations, J. Comput. Appl. Math., 233 (2010), 2278–2284. https://doi.org/10.1016/j.cam.2009.10.012
    [8] C. Khunpanuk, B. Panyanak, N. Pakkaranang, A new construction and convergence analysis of non-monotonic iterative methods for solving p-demicontractive fixed point problems and variational inequalities involving pseudomonotone mapping, Mathematics, 10 (2022), 1–29. https://doi.org/10.3390/math10040623
    [9] I. K. Argyros, An inverse-free Newton-Jarratt-type iterative method for solving equations under the gamma condition, J. Appl. Math. Comput., 28 (2008), 15–28. https://doi.org/10.1007/s12190-008-0065-0
    [10] X. F. Wang, N. Shang, Local convergence analysis of a novel derivative-free method with and without memory for solving nonlinear systems, Int. J. Comput. Math., 2025, 1–18. https://doi.org/10.1080/00207160.2025.2464701
    [11] D. Ruan, X. Wang, Y. Wang, Local convergence of seventh-order iterative method under weak conditions and its applications, Eng. Comput., 2025, in press.
    [12] X. F. Wang, J. Y. Xu, Conformable vector Traub's method for solving nonlinear systems, Numer. Algor., 97 (2024), 1563–1582. https://doi.org/10.1007/s11075-024-01762-7
    [13] X. F. Wang, W. S. Li, Fractal behavior of King's optimal eighth-order iterative method and its numerical application, Math. Commun., 29 (2024), 217–236.
    [14] D. D. Ruan, X. F. Wang, A high-order Chebyshev-type method for solving nonlinear equations: local convergence and applications, Electron. Res. Arch., 33 (2025), 1398–1413. https://doi.org/10.3934/era.2025065
    [15] A. Cordero, J. L. Hueso, E. Martínez, J. R. Torregrosa, Efficient high-order methods based on golden ratio for nonlinear systems, Appl. Math. Comput., 217 (2011), 4548–4556. https://doi.org/10.1016/j.amc.2010.11.006
    [16] A. Cordero, J. L. Hueso, E. Martínez, J. R. Torregrosa, A modified Newton-Jarratt's composition, Numer. Algor., 55 (2010), 88–99. https://doi.org/10.1007/s11075-009-9359-z
    [17] K. Maatouk, Third order derivative free SPH iterative method for solving nonlinear systems, Appl. Math. Comput., 270 (2015), 557–566. https://doi.org/10.1016/j.amc.2015.08.083
    [18] M. Z. Ullah, S. Serra-Capizzano, F. Ahmad, E. S. Al-Aidarous, Higher order multi-step iterative method for computing the numerical solution of systems of nonlinear equations: application to nonlinear PDEs and ODEs, Appl. Math. Comput., 269 (2015), 972–987. https://doi.org/10.1016/j.amc.2015.07.096
  • This article has been cited by:

    1. Shobha M. Erappa, Suma P. Bheemaiah, Santhosh George, Kanagaraj Karuppaiah, Ioannis K. Argyros, On the Convergence Order of Jarratt-Type Methods for Nonlinear Equations, 2025, 14, 2075-1680, 401, 10.3390/axioms14060401
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
