Research article

A multi-step Ulm-Chebyshev-like method for solving nonlinear operator equations

  • Received: 01 August 2024 Revised: 21 September 2024 Accepted: 26 September 2024 Published: 09 October 2024
  • MSC : 47H30, 65H10, 65J15

  • In this paper, based on the Ulm-Chebyshev iterative procedure, we present a multi-step Ulm-Chebyshev-like method to solve systems of nonlinear equations $F(x) = 0$:

    $\begin{cases} y_n = x_n - B_n F(x_n),\\ z_n = y_n - B_n F(y_n),\\ x_{n+1} = z_n - B_n F(z_n),\\ \bar{B}_n = 2B_n - B_n A_{n+1} B_n,\\ B_{n+1} = \bar{B}_n + \bar{B}_n(2I - A_{n+1}\bar{B}_n)(I - A_{n+1}\bar{B}_n), \end{cases} \qquad n = 0, 1, 2, \ldots,$

    where $A_{n+1}$ is an approximation of the derivative $F'(x_{n+1})$. This method contains no inverse operators in its expression and does not require computing Jacobian matrices or solving Jacobian equations. We prove that the multi-step Ulm-Chebyshev-like method converges locally to the solution with R-convergence rate 4 under appropriate conditions. Some applications are given and compared with other existing methods, highlighting the most important features of the method.

    Citation: Wei Ma, Ming Zhao, Jiaxin Li. A multi-step Ulm-Chebyshev-like method for solving nonlinear operator equations[J]. AIMS Mathematics, 2024, 9(10): 28623-28642. doi: 10.3934/math.20241389




    Let $X$ and $Y$ be Banach spaces, $D \subseteq X$ be an open subset, and $F: D \subseteq X \to Y$ be a nonlinear operator with continuous Fréchet derivative denoted by $F'$. We consider the problem of approximating a solution $x^*$ of a nonlinear equation

    $F(x) = 0,$ (1.1)

    which applies to the inverse eigenvalue problems [1,2,3,4], the generalized inverse eigenvalue problems [5], the inverse singular value problems [6,7,8], the Chandrasekhar integral equation [9], the neutral differential-algebraic equations [10,11], and so on. Without any doubt, Newton's method is the most-used iterative process to solve this problem. It is given by the algorithm:

    $x_{n+1} = x_n - F'(x_n)^{-1} F(x_n), \quad n \ge 0,$

    for a given $x_0$. This iterative process has quadratic R-order of convergence under some mild conditions [12,13,14]. For recent progress on Newton's method, one may refer to [15,16,17].
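    For concreteness, here is a minimal sketch of Newton's method for a finite-dimensional system, assuming callables F and JF for the map and its Jacobian (an illustration, not code from the paper):

        import numpy as np

        def newton(F, JF, x0, tol=1e-12, max_iter=50):
            """Newton's method: solve the Jacobian equation F'(x_n) s = -F(x_n) each step."""
            x = x0
            for _ in range(max_iter):
                s = np.linalg.solve(JF(x), -F(x))  # one linear solve per iteration
                x = x + s
                if np.linalg.norm(F(x)) <= tol:
                    break
            return x

    Each iteration requires a fresh Jacobian evaluation and a linear solve; the inverse-free schemes below are designed to avoid exactly these two costs.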

    Other methods, such as higher-order methods, also include in their expression the inverse of the operator $F'$. To avoid this problem, Newton-type methods

    $x_{n+1} = x_n - H_n F(x_n),$

    where $H_n$ is an approximation of $F'(x_n)^{-1}$, are considered. One of these methods was proposed by Moser in [18]. Given $x_0 \in D$ and $B_0 \in L(Y, X)$, the following sequences are defined:

    $\begin{cases} x_{n+1} = x_n - B_n F(x_n),\\ B_{n+1} = 2B_n - B_n F'(x_n) B_n, \end{cases} \qquad n = 0, 1, 2, \ldots.$ (1.2)

    The first equation is similar to Newton's method, but replaces the operator $F'(x_n)^{-1}$ with a linear operator $B_n$. The second equation is Newton's method applied to the equation

    $g_n(B) = 0,$

    where $g_n: L(Y, X) \to L(X, Y)$ is defined by

    $g_n(B) = B^{-1} - F'(x_n).$

    So $\{B_n\}$ gives us an approximation of $F'(x_n)^{-1}$. It can be shown that the rate of convergence of the above scheme is $(1 + \sqrt{5})/2$, provided the root of (1.1) is simple [18]. However, from a numerical perspective, this is unsatisfactory because the scheme uses the same amount of information per step as Newton's method, yet converges no faster than the secant method. For that reason, in [19], Ulm proposed the following iterative method to solve nonlinear equations. Given $x_0 \in D$ and $B_0 \in L(Y, X)$, Ulm defines

    $\begin{cases} x_{n+1} = x_n - B_n F(x_n),\\ B_{n+1} = 2B_n - B_n F'(x_{n+1}) B_n, \end{cases} \qquad n = 0, 1, 2, \ldots.$ (1.3)

    Notice that here $F'(x_{n+1})$ appears instead of the $F'(x_n)$ of (1.2). This is crucial for obtaining fast convergence. Under the classical assumption that the derivative $F'$ is Lipschitz continuous around the solution, Ulm showed that the method generates successive approximations that converge to a solution of (1.1) asymptotically as fast as Newton's method. For recent progress on Newton-Moser type methods [20,21], one may refer to Moser's method [22], Ulm's method [23,24,25,26], and the Ulm-like method [27,28].
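    The $B_n$-update of (1.3) is easy to express in code. A minimal sketch, under the same illustrative assumptions as the Newton sketch above:

        import numpy as np

        def ulm(F, JF, x0, B0, tol=1e-12, max_iter=100):
            """Ulm's method (1.3): no linear solves once B0 is given."""
            x, B = x0, B0
            for _ in range(max_iter):
                x = x - B @ F(x)
                B = 2.0 * B - B @ JF(x) @ B  # drives B toward F'(x_{n+1})^{-1}
                if np.linalg.norm(F(x)) <= tol:
                    break
            return x, B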

    Building on these antecedents, in order to extend the above ideas to higher-order (cubically convergent) iterative methods, Ezquerro and Hernández [29] considered Chebyshev's method and proposed the following iterative method to solve nonlinear equations. Given $x_0 \in D$ and $B_0 \in L(Y, X)$, the Ulm-Chebyshev method is defined by

    $\begin{cases} y_n = x_n - B_n F(x_n),\\ x_{n+1} = y_n - B_n F(y_n),\\ B_{n+1} = B_n + B_n(2I - F'(x_{n+1})B_n)(I - F'(x_{n+1})B_n), \end{cases} \qquad n = 0, 1, 2, \ldots,$ (1.4)

    which does not use any inverse operator in its application. Ezquerro and Hernández showed that the method generates successive approximations that converge to a solution of (1.1) with cubic convergence. Recently, some authors have employed the Ulm-Chebyshev method to solve inverse eigenvalue problems and inverse singular value problems [1,2,5,7]. They found that computing the derivative $F'(x_n)$ $(n = 0, 1, 2, \ldots)$ exactly at each iteration is costly, especially when the system is large.

    To reduce the cost of accurately calculating the derivative $F'(x_n)$ $(n = 0, 1, 2, \ldots)$ at each iteration, motivated by the Ulm and Ulm-Chebyshev methods, we propose a multi-step Ulm-Chebyshev-like method for solving the nonlinear operator equation

    $F(x) = 0.$

    Given $x_0 \in D$ and $B_0 \in L(Y, X)$, the multi-step Ulm-Chebyshev-like method is defined by

    $\begin{cases} y_n = x_n - B_n F(x_n),\\ z_n = y_n - B_n F(y_n),\\ x_{n+1} = z_n - B_n F(z_n),\\ \bar{B}_n = 2B_n - B_n A_{n+1} B_n,\\ B_{n+1} = \bar{B}_n + \bar{B}_n(2I - A_{n+1}\bar{B}_n)(I - A_{n+1}\bar{B}_n), \end{cases} \qquad n = 0, 1, 2, \ldots,$ (1.5)

    where $A_{n+1}$ is an approximation of the derivative $F'(x_{n+1})$. This method exhibits several attractive features. First, it is inverse free: we do not need to solve a linear equation at each iteration. Second, it is derivative free: we do not need to compute the Fréchet derivative at each iteration. Third, in addition to solving the nonlinear Eq (1.1), the method produces successive approximations $\{B_n\}$ to the value of $F'(x^*)^{-1}$, where $x^*$ is a solution of (1.1). This property is very helpful, especially when one investigates the sensitivity of the solution to small perturbations. Fourth, the method converges to the solution with R-convergence rate 4. A minimal sketch of the iteration is given below.
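    The following NumPy sketch implements (1.5), assuming a callable A that returns some admissible approximation $A_{n+1}$ of $F'(x_{n+1})$; the choice of A and the stopping rule here are illustrative assumptions, not prescriptions from the paper:

        import numpy as np

        def msucl(F, A, x0, B0, tol=1e-12, max_iter=50):
            """Multi-step Ulm-Chebyshev-like method (1.5): inverse and derivative free."""
            x, B = x0, B0
            I = np.eye(len(x0))
            for _ in range(max_iter):
                # Three substeps reusing the same approximate inverse B_n.
                y = x - B @ F(x)
                z = y - B @ F(y)
                x = z - B @ F(z)
                # Two corrections driving B_{n+1} toward A_{n+1}^{-1}.
                An = A(x)                      # approximation of F'(x_{n+1})
                Bbar = 2.0 * B - B @ An @ B
                E = I - An @ Bbar
                B = Bbar + Bbar @ (I + E) @ E  # = Bbar(2I - An*Bbar)(I - An*Bbar) term added
                if np.linalg.norm(F(x)) <= tol:
                    break
            return x, B

    Note that only matrix-vector and matrix-matrix products appear: no linear system is solved and no matrix is inverted inside the loop.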

    Furthermore, in Section 2 we analyze the local convergence of the new iterative method. Under certain assumptions, the radius of the convergence ball for the multi-step Ulm-Chebyshev-like method is estimated, and the R-convergence rate 4 of the method is proved. Section 3 is devoted to showing some of the most important features of the new iterative method by means of five examples. Compared with other existing methods, the proposed method has a higher convergence order and/or requires fewer operations.

    Let $B(x, r)$ stand for the open ball in $X$ with center $x$ and radius $r > 0$. Let $x^* \in D$ be a solution of the nonlinear Eq (1.1) such that $F'(x^*)$ is invertible and $F'$ satisfies the Lipschitz condition on $B(x^*, r)$ with Lipschitz constant $L$:

    $\|F'(x) - F'(y)\| \le L\|x - y\| \quad \text{for } x, y \in B(x^*, r).$ (2.1)

    Let $\{x_n\}$ be generated by the multi-step Ulm-Chebyshev-like method. Let $A_n$ be an approximation of the derivative $F'(x_n)$ such that

    $\|A_n - F'(x_n)\| \le \eta_n \|F(x_n)\|, \quad n = 0, 1, 2, \ldots,$ (2.2)

    where $\{\eta_n\}$ is a nonnegative-valued sequence satisfying $\sup_{n \ge 0} \eta_n \le \eta$, where $\eta$ is a nonnegative constant. Let

    $0 < r_L < \min\left\{1,\ r,\ \frac{1}{\left(\frac{L}{2}\eta + L + \eta\|F'(x^*)\|\right)\|F'(x^*)^{-1}\|}\right\},$ (2.3)

    $\begin{aligned} &\mu = \frac{\|F'(x^*)^{-1}\|}{1 - \left(\frac{L}{2}\eta + L + \eta\|F'(x^*)\|\right)\|F'(x^*)^{-1}\|\,r_L}, \quad \gamma_1 = \eta\left(\frac{L}{2} + \|F'(x^*)\|\right), \quad \gamma_2 = 1 + 2\mu\gamma_1,\\ &\gamma_3 = \gamma_2 + L\mu, \quad \gamma_4 = \gamma_2 + 2\mu L + 2\mu L\gamma_3, \quad \gamma_5 = \gamma_4 + 2\mu L + 2\mu L\gamma_3, \quad \gamma_6 = 1 + 4\mu\gamma_1 + 4\mu L,\\ &0 < \alpha \le \min\left\{1,\ \frac{1}{\gamma_3\gamma_4 + L\mu\gamma_3^2},\ \frac{1}{\gamma_5 + L\mu},\ \frac{1}{\gamma_6^6}\right\}, \quad 0 < \beta \le \min\{r_L,\ \alpha\}, \quad 0 < \xi \le \beta. \end{aligned}$ (2.4)

    The following lemma is crucial for the proof of the main theorem.

    Lemma 2.1. ([27]) If $x_n \in B(x^*, r_L)$, then the following assertions hold:

    $\|A_n - F'(x_n)\| \le \eta\left(\frac{L}{2} + \|F'(x^*)\|\right)\|x_n - x^*\|,$

    and $A_n$ is invertible with $\|A_n^{-1}\| \le \mu$.

    Note that in the multi-step Ulm-Chebyshev-like method, the sequence $\{B_n\}$ is generated by the algorithm except for $B_0$. Below, we prove that if $B_0$ approximates $A_0^{-1}$, then the sequence $\{x_n\}$ generated by the multi-step Ulm-Chebyshev-like method converges locally to $x^*$ with R-convergence rate 4. To this end, let $B_0$ satisfy

    $\|I - B_0 A_0\| \le \xi,$ (2.5)

    where $\xi$ is defined in (2.4).

    Theorem 2.1. Suppose that $F'(x^*)$ is invertible and that $F'$ satisfies the Lipschitz condition (2.1) on $B(x^*, r_L)$. Then there exist positive numbers $\beta$ and $\xi$ such that, for any $x_0 \in B(x^*, \beta)$ and any $B_0$ satisfying (2.5), the sequence $\{x_n\}$ generated by the multi-step Ulm-Chebyshev-like method with initial point $x_0$ converges to $x^*$. Moreover, the following estimates hold for each $n = 0, 1, \ldots$:

    $\|x_n - x^*\| \le \alpha\left(\frac{\beta}{\alpha}\right)^{4^n}$ (2.6)

    and

    $\|I - B_n A_n\| \le \alpha\left(\frac{\beta}{\alpha}\right)^{4^n},$ (2.7)

    where $\alpha$ and $\beta$ are defined in (2.4).

    Proof. We proceed by mathematical induction. Clearly, (2.6) is trivial for $n = 0$ by the assumption. By (2.4) and (2.5), we obtain

    $\alpha \le 1 \quad \text{and} \quad \|I - B_0 A_0\| \le \xi \le \beta.$

    That is, (2.7) holds for $n = 0$. Now we assume that (2.6) and (2.7) hold for $n = m$. Then one has

    $\|x_m - x^*\| \le \alpha\left(\frac{\beta}{\alpha}\right)^{4^m}$ (2.8)

    and

    $\|I - B_m A_m\| \le \alpha\left(\frac{\beta}{\alpha}\right)^{4^m}.$ (2.9)

    By (2.4), we get

    $\|x_m - x^*\| \le \alpha\left(\frac{\beta}{\alpha}\right)^{4^m} \le \beta \le r_L.$

    It follows from (2.4), (2.8), and Lemma 2.1 that

    $\|A_m - F'(x_m)\| \le \eta\left(\frac{L}{2} + \|F'(x^*)\|\right)\|x_m - x^*\| \le \eta\left(\frac{L}{2} + \|F'(x^*)\|\right)\alpha\left(\frac{\beta}{\alpha}\right)^{4^m} := \gamma_1\alpha\left(\frac{\beta}{\alpha}\right)^{4^m}$ (2.10)

    and

    $\|A_m^{-1}\| \le \mu.$

    Then

    $\|B_m\| \le \|B_m A_m\|\,\|A_m^{-1}\| \le (1 + \|I - B_m A_m\|)\,\|A_m^{-1}\| \le \mu\left[1 + \alpha\left(\frac{\beta}{\alpha}\right)^{4^m}\right] \le 2\mu$ (2.11)

    and

    $\|I - B_m F'(x_m)\| \le \|I - B_m A_m\| + \|B_m\|\,\|A_m - F'(x_m)\| \le \alpha\left(\frac{\beta}{\alpha}\right)^{4^m} + 2\mu\gamma_1\alpha\left(\frac{\beta}{\alpha}\right)^{4^m} \le (1 + 2\mu\gamma_1)\alpha\left(\frac{\beta}{\alpha}\right)^{4^m} := \gamma_2\alpha\left(\frac{\beta}{\alpha}\right)^{4^m}.$ (2.12)

    By (1.5), we have

    $y_m - x^* = x_m - x^* - B_m(F(x_m) - F(x^*)) = x_m - x^* - \int_0^1 B_m F'(x_m^\theta)(x_m - x^*)\,d\theta = \int_0^1 \left[I - B_m F'(x_m) + B_m\left(F'(x_m) - F'(x_m^\theta)\right)\right](x_m - x^*)\,d\theta,$

    where

    $x_m^\theta = x^* + \theta(x_m - x^*)$

    for $0 \le \theta \le 1$. Since

    $\|x_m - x^*\| \le r_L$

    and

    $\|x_m^\theta - x^*\| = \theta\|x_m - x^*\| \le \|x_m - x^*\| \le r_L,$

    it follows from (2.4), (2.8), (2.11), (2.12), and the Lipschitz condition that

    $\|y_m - x^*\| \le \int_0^1 \left(\|I - B_m F'(x_m)\| + L(1 - \theta)\|B_m\|\,\|x_m - x^*\|\right)\|x_m - x^*\|\,d\theta = \|I - B_m F'(x_m)\|\,\|x_m - x^*\| + \frac{L}{2}\|B_m\|\,\|x_m - x^*\|^2 \le \gamma_2\alpha\left(\frac{\beta}{\alpha}\right)^{4^m}\alpha\left(\frac{\beta}{\alpha}\right)^{4^m} + L\mu\left(\alpha\left(\frac{\beta}{\alpha}\right)^{4^m}\right)^2 = (\gamma_2 + L\mu)\alpha^2\left(\frac{\beta}{\alpha}\right)^{2 \cdot 4^m} := \gamma_3\alpha^2\left(\frac{\beta}{\alpha}\right)^{2 \cdot 4^m},$ (2.13)

    which together with (2.4), (2.8), and $\alpha \le 1$ gives

    $\|x_m - y_m\| \le \|x_m - x^*\| + \|y_m - x^*\| \le \alpha\left(\frac{\beta}{\alpha}\right)^{4^m} + \gamma_3\alpha^2\left(\frac{\beta}{\alpha}\right)^{2 \cdot 4^m} \le (1 + \gamma_3)\alpha\left(\frac{\beta}{\alpha}\right)^{4^m}.$ (2.14)

    It follows from (2.11), (2.12), and the Lipschitz condition that

    $\|I - B_m F'(y_m)\| \le \|I - B_m F'(x_m)\| + \|B_m\|\,\|F'(x_m) - F'(y_m)\| \le \gamma_2\alpha\left(\frac{\beta}{\alpha}\right)^{4^m} + 2\mu L(1 + \gamma_3)\alpha\left(\frac{\beta}{\alpha}\right)^{4^m} \le (\gamma_2 + 2\mu L + 2\mu L\gamma_3)\alpha\left(\frac{\beta}{\alpha}\right)^{4^m} := \gamma_4\alpha\left(\frac{\beta}{\alpha}\right)^{4^m}.$ (2.15)

    Similar to (2.13)–(2.15), and by (2.4) and $\alpha \le 1$, we have

    $\|z_m - x^*\| \le \|I - B_m F'(y_m)\|\,\|y_m - x^*\| + \frac{L}{2}\|B_m\|\,\|y_m - x^*\|^2 \le \gamma_4\alpha\left(\frac{\beta}{\alpha}\right)^{4^m}\gamma_3\alpha^2\left(\frac{\beta}{\alpha}\right)^{2 \cdot 4^m} + \frac{L}{2} \cdot 2\mu\gamma_3^2\alpha^4\left(\frac{\beta}{\alpha}\right)^{4 \cdot 4^m} \le (\gamma_3\gamma_4 + L\mu\gamma_3^2)\alpha^3\left(\frac{\beta}{\alpha}\right)^{3 \cdot 4^m} \le \alpha^2\left(\frac{\beta}{\alpha}\right)^{3 \cdot 4^m},$ (2.16)

    $\|y_m - z_m\| \le \|y_m - x^*\| + \|z_m - x^*\| \le \gamma_3\alpha^2\left(\frac{\beta}{\alpha}\right)^{2 \cdot 4^m} + \alpha^2\left(\frac{\beta}{\alpha}\right)^{3 \cdot 4^m} \le (1 + \gamma_3)\alpha^2\left(\frac{\beta}{\alpha}\right)^{2 \cdot 4^m}$ (2.17)

    and

    $\|I - B_m F'(z_m)\| \le \|I - B_m F'(y_m)\| + \|B_m\|\,\|F'(y_m) - F'(z_m)\| \le \gamma_4\alpha\left(\frac{\beta}{\alpha}\right)^{4^m} + 2\mu L(1 + \gamma_3)\alpha^2\left(\frac{\beta}{\alpha}\right)^{2 \cdot 4^m} \le (\gamma_4 + 2\mu L + 2\mu L\gamma_3)\alpha\left(\frac{\beta}{\alpha}\right)^{4^m} := \gamma_5\alpha\left(\frac{\beta}{\alpha}\right)^{4^m}.$ (2.18)

    By (2.4), (2.16), (2.18), and $\alpha \le 1$, we get

    $\|x_{m+1} - x^*\| \le \|I - B_m F'(z_m)\|\,\|z_m - x^*\| + \frac{L}{2}\|B_m\|\,\|z_m - x^*\|^2 \le \gamma_5\alpha\left(\frac{\beta}{\alpha}\right)^{4^m}\alpha^2\left(\frac{\beta}{\alpha}\right)^{3 \cdot 4^m} + \frac{L}{2} \cdot 2\mu\alpha^4\left(\frac{\beta}{\alpha}\right)^{6 \cdot 4^m} \le (\gamma_5 + L\mu)\alpha^2\left(\frac{\beta}{\alpha}\right)^{4^{m+1}} \le \alpha\left(\frac{\beta}{\alpha}\right)^{4^{m+1}}.$ (2.19)

    Consequently, (2.6) holds for $n = m + 1$, and by (2.4), (2.8), and (2.19), we get

    $\|x_{m+1} - x_m\| \le \|x_{m+1} - x^*\| + \|x_m - x^*\| \le \alpha\left(\frac{\beta}{\alpha}\right)^{4^{m+1}} + \alpha\left(\frac{\beta}{\alpha}\right)^{4^m} \le 2\alpha\left(\frac{\beta}{\alpha}\right)^{4^m}.$ (2.20)

    By (2.4), we obtain

    $\|x_{m+1} - x^*\| \le \alpha\left(\frac{\beta}{\alpha}\right)^{4^{m+1}} \le \beta \le r_L,$

    and it follows from (2.4), (2.8), and Lemma 2.1 that

    $\|A_{m+1} - F'(x_{m+1})\| \le \gamma_1\alpha\left(\frac{\beta}{\alpha}\right)^{4^{m+1}}.$ (2.21)

    Together with (2.1), (2.4), (2.10), and (2.20), we have

    $\|A_{m+1} - A_m\| \le \|A_{m+1} - F'(x_{m+1})\| + \|F'(x_{m+1}) - F'(x_m)\| + \|A_m - F'(x_m)\| \le \gamma_1\alpha\left(\frac{\beta}{\alpha}\right)^{4^{m+1}} + 2L\alpha\left(\frac{\beta}{\alpha}\right)^{4^m} + \gamma_1\alpha\left(\frac{\beta}{\alpha}\right)^{4^m} \le (2\gamma_1 + 2L)\alpha\left(\frac{\beta}{\alpha}\right)^{4^m},$ (2.22)

    which, together with (2.9) and (2.11), gives

    $\|I - B_m A_{m+1}\| \le \|I - B_m A_m\| + \|B_m\|\,\|A_{m+1} - A_m\| \le \alpha\left(\frac{\beta}{\alpha}\right)^{4^m} + 2\mu(2\gamma_1 + 2L)\alpha\left(\frac{\beta}{\alpha}\right)^{4^m} \le (1 + 4\mu\gamma_1 + 4\mu L)\alpha\left(\frac{\beta}{\alpha}\right)^{4^m} := \gamma_6\alpha\left(\frac{\beta}{\alpha}\right)^{4^m}.$ (2.23)

    From the fourth equation in (1.5), we obtain

    $I - \bar{B}_m A_{m+1} = (I - B_m A_{m+1})^2,$

    which together with (2.23) gives

    $\|I - \bar{B}_m A_{m+1}\| \le \|I - B_m A_{m+1}\|^2 \le \gamma_6^2\alpha^2\left(\frac{\beta}{\alpha}\right)^{2 \cdot 4^m}.$ (2.24)

    Notice that

    $B_{m+1} = \bar{B}_m + \bar{B}_m(2I - A_{m+1}\bar{B}_m)(I - A_{m+1}\bar{B}_m),$

    and we have

    $I - B_{m+1}A_{m+1} = I - \left(\bar{B}_m + \bar{B}_m(2I - A_{m+1}\bar{B}_m)(I - A_{m+1}\bar{B}_m)\right)A_{m+1} = (I - \bar{B}_m A_{m+1})^3.$

    It follows from (2.4), (2.24), and $\alpha \le 1$ that

    $\|I - B_{m+1}A_{m+1}\| \le \|I - \bar{B}_m A_{m+1}\|^3 \le \gamma_6^6\alpha^6\left(\frac{\beta}{\alpha}\right)^{6 \cdot 4^m} \le \gamma_6^6\alpha^2\left(\frac{\beta}{\alpha}\right)^{4^{m+1}} \le \alpha\left(\frac{\beta}{\alpha}\right)^{4^{m+1}}.$ (2.25)

    This confirms that (2.7) holds for $n = m + 1$, and the proof is complete.

    Remark 2.1. Under the conditions of Theorem 2.1, the sequence $\{x_k\}$ converges to the limit $x^*$ with R-convergence rate 4.

    In this section, we report the numerical performance of the multi-step Ulm-Chebyshev-like method for solving the nonlinear operator Eq (1.1). We compare the multi-step Ulm-Chebyshev-like method (Algorithm MSUCL) with the Newton-type method (Algorithm NT) in [16], the Chebyshev-like method (Algorithm CL) in [30], the Ulm-like method (Algorithm UL) in [27], and the Ulm-Chebyshev method (Algorithm UC) in [29]. All tests were carried out in MATLAB 7.10 running on a PC with an Intel Pentium IV 3.0 GHz CPU.

    Example 3.1. ([30]) We consider the following system of 3 nonlinear equations:

    $\begin{cases} \cos(x_2) - \sin(x_1) = 0,\\ x_3^{x_1} - \dfrac{1}{x_2} = 0,\\ \exp(x_1) - x_3^2 = 0. \end{cases}$

    The Jacobian is given by

    $J(x) = \begin{pmatrix} -\cos(x_1) & -\sin(x_2) & 0\\ x_3^{x_1}\ln(x_3) & \dfrac{1}{x_2^2} & x_1 x_3^{x_1 - 1}\\ \exp(x_1) & 0 & -2x_3 \end{pmatrix},$

    and

    $x^* = (0.90956949452004,\ 0.66122683227485,\ 1.5758341439070)^T,$

    correct to 14 decimal places in this case. We choose the starting vector

    $x_0 = (1,\ 0.5,\ 1.5)^T$

    and take $\eta_n = \frac{1}{10}$ for $n = 0, 1, \ldots$. For all algorithms, the stopping tolerance for the iterations is $10^{-12}$.
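    For reference, this test problem is easy to set up in code. The paper does not specify how $A_n$ is formed for this example; in the sketch below the hypothetical choice $A_n = J(x_n)$ is used, which trivially satisfies (2.2):

        import numpy as np

        def F(x):
            x1, x2, x3 = x
            return np.array([np.cos(x2) - np.sin(x1),
                             x3**x1 - 1.0 / x2,
                             np.exp(x1) - x3**2])

        def J(x):
            x1, x2, x3 = x
            return np.array([[-np.cos(x1), -np.sin(x2), 0.0],
                             [x3**x1 * np.log(x3), 1.0 / x2**2, x1 * x3**(x1 - 1.0)],
                             [np.exp(x1), 0.0, -2.0 * x3]])

        x0 = np.array([1.0, 0.5, 1.5])
        # Feed F, J, and x0 (with, e.g., B0 = np.linalg.inv(J(x0))) to the msucl sketch above.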

    From Table 1, we observe that Algorithm MSUCL converges to the solution with R-convergence rate 4, Algorithm UL converges quadratically, and Algorithms NT, CL, and UC converge cubically in the root sense.

    Table 1. Values of $\|x_k - x^*\|$ for Example 3.1.
    It.  Algorithm NT  Algorithm UL  Algorithm CL  Algorithm UC  Algorithm MSUCL
    0    2.00e-1       2.00e-1       2.00e-1       2.00e-1       2.00e-1
    1    4.26e-3       5.29e-2       7.54e-3       2.22e-3       1.18e-4
    2    9.56e-10      9.28e-5       2.14e-10      1.26e-10      8.99e-16
    3    3.25e-29      5.29e-10      5.27e-30      4.56e-30      —
    4    —             5.26e-21      —             —             —


    Example 3.2. ([27]) We next consider the two-point boundary value problem

    $\begin{cases} x'' + x^2 = 0,\\ x(0) = x(1) = 0. \end{cases}$ (3.1)

    We divide the interval $[0, 1]$ into $m + 1$ subintervals and get

    $h = \dfrac{1}{m + 1}.$

    Let $d_0, d_1, \ldots, d_{m+1}$ be the points of subdivision with

    $0 = d_0 < d_1 < \cdots < d_{m+1} = 1.$

    An approximation for the second derivative may be chosen as

    $x_i'' \approx \dfrac{x_{i-1} - 2x_i + x_{i+1}}{h^2}, \quad x_0 = x_{m+1} = 0, \quad x_i = x(d_i) \ \text{ for } i = 1, 2, \ldots, m.$ (3.2)

    Let the operator $\phi: \mathbb{R}^m \to \mathbb{R}^m$ be defined by

    $\phi(x) = (x_1^2, x_2^2, \ldots, x_m^2)^T \ \text{ for } x = (x_1, x_2, \ldots, x_m)^T \in \mathbb{R}^m.$

    To get an approximation to the solution of (3.1), we need to solve the following nonlinear equation:

    $F(x) := Mx + h^2\phi(x) = 0, \quad x \in \mathbb{R}^m,$

    where

    $M = \begin{pmatrix} -2 & 1 & & &\\ 1 & -2 & 1 & &\\ & \ddots & \ddots & \ddots &\\ & & 1 & -2 & 1\\ & & & 1 & -2 \end{pmatrix}_{m \times m}.$

    Obviously, $x^* = 0$ is a solution of (3.2) and

    $F'(x) = M + 2h^2\,\mathrm{diag}(x_1, x_2, \ldots, x_m).$

    Hence

    $F'(x^*) = M.$

    Furthermore, it is easy to verify that

    $\|F'(x) - F'(y)\| \le 2h^2\|x - y\| \ \text{ for } x, y \in \mathbb{R}^m,$

    where $\|\cdot\|$ denotes the Frobenius norm (F-norm). For different choices of $m$ and $x_0$, the convergence performance of the algorithm is illustrated in the following tables. Here we consider the following three problem sizes:

    (a) $m = 10$ and $x_0 = \sigma(1, 1, \ldots, 1)^T$;

    (b) $m = 100$ and $x_0 = \sigma(1, 1, \ldots, 1)^T$;

    (c) $m = 1000$ and $x_0 = \sigma(1, 1, \ldots, 1)^T$, where $\sigma = 0.2$ or $0.02$.

    For all algorithms, the stopping tolerance for the iterations is $10^{-12}$. A sketch of this discretized setup is given below.
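    A minimal NumPy sketch of the discretized problem; the dense matrix M is for illustration only, and a sparse tridiagonal representation would be preferable for $m = 1000$:

        import numpy as np

        def make_bvp(m, sigma):
            """Discretized two-point BVP: F(x) = Mx + h^2 * phi(x), M = tridiag(1, -2, 1)."""
            h = 1.0 / (m + 1)
            M = (np.diag(-2.0 * np.ones(m))
                 + np.diag(np.ones(m - 1), 1)
                 + np.diag(np.ones(m - 1), -1))
            F = lambda x: M @ x + h**2 * x**2          # phi(x) acts componentwise
            J = lambda x: M + 2.0 * h**2 * np.diag(x)  # F'(x)
            x0 = sigma * np.ones(m)
            return F, J, x0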

    Tables 2–4 show that the new method has a higher convergence order, and from Table 5 we see that the CPU time of Algorithm MSUCL is less than that of the other existing methods.

    Table 2. Values of $\|x_k - x^*\|$ for different $\eta_n$ in case (a) for Example 3.2.
    σ     It.  UL (η_n=1/20)  UL (η_n=1/10)  MSUCL (η_n=1/20)  MSUCL (η_n=1/10)  UC        CL        NT
    0.2   0    6.32e-1        6.32e-1        6.32e-1           6.32e-1           6.32e-1   6.32e-1   6.32e-1
          1    1.26e-2        1.26e-2        5.42e-4           4.44e-4           4.43e-4   4.26e-4   4.26e-4
          2    2.96e-5        3.1e-5         5.46e-16          5.45e-16          6.15e-12  7.11e-12  3.98e-12
          3    2.67e-10       2.68e-10       —                 —                 2.33e-35  3.99e-35  4.32e-35
          4    3.00e-20       3.21e-20       —                 —                 —         —         —
    0.02  0    6.32e-1        6.32e-1        6.32e-1           6.32e-1           6.32e-1   6.32e-1   6.32e-1
          1    1.21e-4        1.21e-4        5.13e-9           5.14e-9           5.68e-7   5.69e-7   5.67e-7
          2    2.58e-9        2.62e-9        3.42e-42          3.41e-42          3.47e-22  3.48e-22  3.45e-22
          3    1.96e-18       2.24e-18       —                 —                 —         —         —

    Table 3. Values of $\|x_k - x^*\|$ for different $\eta_n$ in case (b) for Example 3.2.
    σ     It.  UL (η_n=1/20)  UL (η_n=1/10)  MSUCL (η_n=1/20)  MSUCL (η_n=1/10)  UC        CL        NT
    0.2   0    2.00e+0        2.00e+0        2.00e+0           2.00e+0           2.00e+0   2.00e+0   2.00e+0
          1    3.82e-2        3.82e-2        9.63e-3           9.59e-3           1.68e-3   1.68e-3   1.68e-3
          2    8.87e-5        8.92e-5        1.77e-13          2.11e-13          3.67e-11  3.66e-11  3.68e-11
          3    7.74e-10       7.81e-10       —                 —                 6.57e-35  6.58e-35  6.59e-35
          4    8.38e-20       7.32e-20       —                 —                 —         —         —
    0.02  0    2.00e-1        2.00e-1        2.00e-1           2.00e-1           2.00e-1   2.00e-1   2.00e-1
          1    4.23e-4        3.68e-4        5.36e-8           5.37e-8           5.09e-6   5.08e-6   5.10e-6
          2    8.32e-9        7.74e-9        4.34e-29          4.33e-29          9.99e-19  9.98e-19  9.99e-19
          3    4.45e-18       5.75e-18       —                 —                 —         —         —

    Table 4. Values of $\|x_k - x^*\|$ for different $\eta_n$ in case (c) for Example 3.2.
    σ     It.  UL (η_n=1/20)  UL (η_n=1/10)  MSUCL (η_n=1/20)  MSUCL (η_n=1/10)  UC        CL        NT
    0.2   0    6.32e+0        6.32e+0        6.32e+0           6.32e+0           6.32e+0   6.32e+0   6.32e+0
          1    1.21e-1        1.20e-1        9.87e-3           9.88e-3           5.13e-3   5.12e-3   5.14e-3
          2    2.96e-4        2.79e-4        1.13e-13          1.25e-13          5.55e-11  5.57e-11  5.53e-11
          3    2.98e-10       2.45e-10       —                 —                 1.03e-34  1.05e-34  1.01e-34
          4    2.17e-20       2.13e-20       —                 —                 —         —         —
    0.02  0    6.32e-1        6.32e-1        6.32e-1           6.32e-1           6.32e-1   6.32e-1   6.32e-1
          1    1.21e-4        1.21e-4        4.71e-9           4.72e-9           4.72e-7   4.71e-7   4.73e-7
          2    2.28e-9        2.43e-9        4.44e-33          4.45e-33          4.18e-20  4.19e-20  4.17e-20
          3    1.79e-17       1.81e-17       —                 —                 —         —         —

    Table 5.  Averaged CPU time in seconds of all algorithms for the ten tests for Example 3.2.
    n 50 100 250 500 750 1000
    Algorithm UL 0.55 2.03 7.12 17.01 40.86 118.68
    Algorithm UC 0.48 1.81 6.12 16.68 32.93 112.31
    Algorithm NT 0.42 1.79 6.02 15.38 30.03 101.11
    Algorithm CL 0.41 1.76 5.96 14.76 29.19 98.56
    Algorithm MSUCL 0.26 1.01 3.99 11.23 20.44 72.21


    Remark 3.1. Various indices can be employed to compare the efficiency of iterative methods. One of the most prevalent efficiency indices is introduced as follows [31,32]:

    $CR_2 = \dfrac{OC}{CPU + IT + FE + JE},$

    where OC, IT, FE, JE, and CPU are the order of convergence, number of iterations, number of total function evaluations, total number of Jacobian evaluations, and CPU time, respectively.
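    The index is straightforward to evaluate; in the tiny helper below the argument values are placeholders, not figures from the paper:

        def cr2(oc, cpu, it, fe, je):
            """Efficiency index CR2 = OC / (CPU + IT + FE + JE); larger is better."""
            return oc / (cpu + it + fe + je)

        print(cr2(oc=4.0, cpu=0.26, it=3, fe=9, je=3))  # placeholder inputs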

    We can see from Table 6 that Algorithm MSUCL is better than the other methods for solving all test problems with respect to the criterion $CR_2$, indicating its superior performance.

    Table 6.  Values of CR2 for all algorithms to solve all test problems.
    Method Example 3.1 Case (a) in Example 3.2 Case (b) in Example 3.2 Case (c) in Example 3.2
    Algorithm UL 0.0221 0.0176 0.0112 0.0098
    Algorithm UC 0.0482 0.0158 0.0249 0.0199
    Algorithm NT 0.0398 0.0268 0.0221 0.0243
    Algorithm CL 0.0491 0.0289 0.0211 0.0252
    Algorithm MSUCL 0.0567 0.0342 0.0298 0.0337


    The Chandrasekhar integral equation [9], which arises in radiative transfer theory, is a nonlinear integral equation that yields a full nonlinear system of equations when discretized. The Chandrasekhar integral equation is given by

    $F(P, c) = 0, \quad P: [0, 1] \to \mathbb{R},$

    with parameter $c$ and the operator $F$ as

    $F(P, c)(u) = P(u)\left(1 - \dfrac{c}{2}\int_0^1 \dfrac{u\,P(v)}{u + v}\,dv\right) - 1.$ (3.3)

    If we discretize the integral Eq (3.3) using the midpoint integration rule with $n$ grid points,

    $\int_0^1 f(t)\,dt \approx \dfrac{1}{n}\sum_{j=1}^n f(t_j), \quad t_j = (j - 0.5)h, \quad h = \dfrac{1}{n}, \quad 1 \le j \le n,$

    we obtain the resulting system of nonlinear equations:

    $F_i(P, c) = u_i\left(1 - \dfrac{c}{2n}\sum_{j=1}^n \dfrac{t_i u_j}{t_i + t_j}\right) - 1, \quad 1 \le i \le n,$ (3.4)

    where $u_i$ denotes the approximation of $P(t_i)$. When starting with the vector $(1, 1, \ldots, 1)^T$, the system (3.4) has a solution for all $c \in (0, 1)$. The values of $c$ were equally spaced with $\Delta c = 0.01$ in the interval $(0, 1)$, and we choose $n = 100$; note that in this case the Jacobian is a full matrix. We take $\eta_n = \frac{1}{10}$ for $n = 0, 1, \ldots$. For all algorithms, the stopping tolerance for the iterations is $10^{-12}$.
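    A sketch of the discretized residual (3.4), assuming the vector u stores the grid values $P(t_i)$:

        import numpy as np

        def chandrasekhar_F(c, n):
            """Midpoint-rule discretization (3.4) of the Chandrasekhar equation."""
            t = (np.arange(1, n + 1) - 0.5) / n         # t_j = (j - 0.5)/n
            K = t[:, None] / (t[:, None] + t[None, :])  # kernel t_i / (t_i + t_j)
            return lambda u: u * (1.0 - (c / (2.0 * n)) * (K @ u)) - 1.0

        F = chandrasekhar_F(c=0.9, n=100)
        u0 = np.ones(100)  # starting vector (1, ..., 1)^T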

    Let IterTotal denote the total number of iterations over all values of $c$ considered, and Iter its mean iteration number. From Table 7, we find that the new method has the fewest total and mean iterations, hence the lowest computational cost, and is therefore the most efficient of the methods in terms of CPU time.

    Table 7. Key results for the Chandrasekhar H-equation.
    Statistical data  Algorithm NT  Algorithm UL  Algorithm CL  Algorithm UC  Algorithm MSUCL
    IterTotal         346           472           339           326           245
    Iter              3.47          4.72          3.39          3.25          2.44
    CPU time (s)      5248          5894          4603          4421          3199


    We consider the following inverse eigenvalue problem (IEP): given $n + 1$ real symmetric $n \times n$ matrices $\{A_i\}_{i=0}^n$ and $n$ real numbers

    $\lambda_1^* \le \lambda_2^* \le \cdots \le \lambda_n^*,$

    find a vector

    $c^* = (c_1^*, c_2^*, \ldots, c_n^*)^T \in \mathbb{R}^n$

    such that

    $\lambda_i(A(c^*)) = \lambda_i^*, \quad i = 1, \ldots, n,$

    where

    $A(c) := A_0 + \sum_{i=1}^n c_i A_i$

    and $\{\lambda_j(A(c))\}_{j=1}^n$ are the eigenvalues of $A(c)$ with

    $\lambda_1(A(c)) \le \lambda_2(A(c)) \le \cdots \le \lambda_n(A(c)).$

    The above IEP can be represented mathematically through a set of nonlinear equations:

    $f(c) := \left(\lambda_1(A(c)) - \lambda_1^*,\ \lambda_2(A(c)) - \lambda_2^*,\ \ldots,\ \lambda_n(A(c)) - \lambda_n^*\right)^T = 0.$ (3.5)
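    A sketch of the residual (3.5); numpy.linalg.eigvalsh returns the eigenvalues of a symmetric matrix in ascending order, which matches the ordering assumed above:

        import numpy as np

        def iep_residual(c, A0, As, lam_star):
            """f(c) = (lambda_1(A(c)) - lambda_1*, ..., lambda_n(A(c)) - lambda_n*)^T."""
            A = A0 + sum(ci * Ai for ci, Ai in zip(c, As))
            return np.linalg.eigvalsh(A) - lam_star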

    To further illustrate the effectiveness of the new algorithm, we present a practical engineering application in vibrations [2,33,34]. We consider the vibration of a taut string with n beads. Figure 1 shows such a model for the case n=4.

    Figure 1.  A string with n=4 beads.

    Here, we assume that the $n$ beads are placed along the string, where the ends of the string are clamped. The mass of the $j$th bead is denoted by $m_j$. The horizontal distance between masses $m_j$ and $m_{j+1}$ (and between the bead at each end and the adjacent clamped support) is set to be a constant $L$. The horizontal tension is set to be a constant $T$. Then the equation of motion is governed by

    $m_j y_j''(t) = T\,\dfrac{y_{j+1} - y_j}{L} - T\,\dfrac{y_j - y_{j-1}}{L}, \quad j = 1, \ldots, n,$ (3.6)

    where

    $y_0 = y_{n+1} = 0.$

    That is, the ends of the string are fixed. The matrix form of (3.6) is given by

    $y''(t) = -CJy(t),$ (3.7)

    where

    $y(t) = (y_1(t), y_2(t), \ldots, y_n(t))^T, \quad C = \mathrm{diag}(c_1, c_2, \ldots, c_n)$

    with

    $c_j = \dfrac{T}{m_j L},$

    and $J$ is the discrete Laplacian matrix

    $J = \begin{bmatrix} 2 & -1 & & &\\ -1 & 2 & -1 & &\\ & \ddots & \ddots & \ddots &\\ & & -1 & 2 & -1\\ & & & -1 & 2 \end{bmatrix} \in S^n.$

    The general solution of (3.7) is given in terms of the eigenvalue problem

    $CJy = \lambda y,$

    where $\lambda$ is the square of a natural frequency of the vibration system and the nonzero vector $y$ accounts for the interplay between the masses. The inverse problem for the beaded string is to compute the masses $\{m_j\}_{j=1}^n$ so that the resulting system has a prescribed set of natural frequencies. It is easy to check that the eigenvalues of $J$ are given by

    $\lambda_j(J) = 4\sin^2\left(\dfrac{j\pi}{2(n+1)}\right), \quad j = 1, 2, \ldots, n.$

    Thus $J$ is symmetric and positive definite, and $CJ$ is similar to $L^T C L$, where $L$ is the Cholesky factor of

    $J = LL^T.$

    Then the inverse problem is converted into the form of the IEP, where

    $A_0 = 0 \quad \text{and} \quad A_j = L^T E_j L$

    with

    $E_j = \mathrm{diag}(e_j)$

    for $j = 1, 2, \ldots, n$, where $e_j$ denotes the $j$th column of the identity matrix. The beaded string data in Example 3.3 come from the website http://www.caam.rice.edu/beads.
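    A sketch of this conversion to IEP basis matrices, using NumPy's lower-triangular Cholesky factor so that $J = LL^T$:

        import numpy as np

        def beaded_string_basis(n):
            """Build A_0 = 0 and A_j = L^T E_j L from J = tridiag(-1, 2, -1) = L L^T."""
            J = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
            L = np.linalg.cholesky(J)  # lower triangular, J = L @ L.T
            As = [L.T @ np.diag(np.eye(n)[j]) @ L for j in range(n)]  # E_j = diag(e_j)
            return np.zeros((n, n)), As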

    Example 3.3. ([2]) This is an inverse problem for the beaded string with $n = 6$ beads, where

    $(m_1, m_2, m_3, m_4, m_5, m_6) = (0.017804,\ 0.030783,\ 0.017804,\ 0.017804,\ 0.030783,\ 0.017804)$ (kg),
    $(n + 1)L = 1.12395$ (meter), $\quad T = 166.0370$ (Newton),
    $\lambda^* = (9113.978,\ 30746.32,\ 83621.69,\ 133310.0,\ 148694.4,\ 193537.0)^T,$
    $c^* = (58081.57,\ 33592.71,\ 58081.57,\ 58081.57,\ 33592.71,\ 58081.57)^T.$

    We report our numerical results for the starting point

    $c_0 = (58081,\ 33592,\ 58081,\ 58081,\ 33592,\ 58081)^T.$

    We solve Example 3.3 using all algorithms with the stopping tolerance

    $\|c_k - c^*\| \le 10^{-12}$

    and $\eta_n = \frac{1}{10}$ for $n = 0, 1, \ldots$. The numerical results are listed in Tables 8 and 9. We observe from Table 8 that the proposed method has a higher convergence order and/or requires fewer operations than the other existing methods. Table 9 displays the computed masses for the beaded string. As expected, the desired masses are recovered.

    Table 8. Values of $\|c_k - c^*\|$ for Example 3.3.
    It.  Algorithm NT  Algorithm UL  Algorithm CL  Algorithm UC  Algorithm MSUCL
    0    9.53e-1       9.53e-1       9.53e-1       9.53e-1       9.53e-1
    1    3.45e-3       3.41e-2       3.26e-3       4.25e-3       2.11e-4
    2    4.25e-8       5.24e-3       2.14e-8       8.24e-8       3.66e-17
    3    2.26e-22      4.29e-5       4.36e-23      9.99e-23      —
    4    —             4.11e-9       —             —             —
    5    —             9.87e-17      —             —             —

    Table 9.  Recovered masses for Example 3.3.
    m1 m2 m3 m4 m5 m6
    true 0.017804 0.030783 0.017804 0.017804 0.030783 0.017804
    recovered 0.017804 0.030783 0.017804 0.017804 0.030783 0.017804


    Example 3.4. ([1]) This is an inverse problem with $n = 6$. Define

    $A_0 = O, \quad A_1 = \dfrac{1}{m_1}e_1e_1^T, \quad A_k = \left(\dfrac{1}{m_1}e_1 - \dfrac{1}{m_k}e_k\right)\left(\dfrac{1}{m_1}e_1 - \dfrac{1}{m_k}e_k\right)^T, \quad k = 2, 3, \ldots, 6,$

    where

    $m_1 = 2, \quad m_2 = m_3 = m_4 = m_5 = m_6 = 0.2.$

    Now

    $\lambda^* = (310.2490,\ 249.2218,\ 28.08413,\ 113.3087,\ 218.7351,\ 487.9554)^T.$

    Then

    $c^* = (83.47955,\ 53.82911,\ 89.13261,\ 40.82639,\ 47.78696,\ 21.50871)^T.$

    We report our numerical results for different starting points:

    ● (a) $c_0 = (77.95824,\ 62.08697,\ 96.54128,\ 40.10535,\ 44.33137,\ 20.79310)^T$;

    ● (b) $c_0 = (76.86213,\ 63.46336,\ 95.28928,\ 41.39452,\ 42.24157,\ 17.37889)^T$;

    ● (c) $c_0 = (78.58345,\ 65.97678,\ 97.83621,\ 43.47844,\ 49.26789,\ 23.67335)^T$;

    ● (d) $c_0 = (85.47863,\ 67.28566,\ 80.28746,\ 35.38552,\ 45.45096,\ 23.47528)^T$.

    Here we take

    $B_0 = J(c_0)^{-1},$

    where $J(c_0)$ denotes the Jacobian of $f$ at $c_0$. For all algorithms, the stopping tolerance is set to be

    $\|c_k - c^*\| \le 10^{-12}.$

    Table 10 displays the error $\|c_k - c^*\|$ for the above four initial points $c_0$, where "It." denotes the number of outer iterations and "*" denotes that the corresponding algorithm fails to converge.

    Table 10. Values of $\|c_k - c^*\|$ and It. for Example 3.4.
    ini.  k    Algorithm NT  Algorithm CL  Algorithm UL (η_n=1/10)  Algorithm UC (η_n=1/10)  Algorithm MSUCL
    (a)   0    1.29e+1       1.29e+1       1.29e+1                  1.29e+1                  1.29e+1
          1    1.86e+0       1.86e+0       5.23e+0                  4.56e+0                  6.52e-2
          2    1.86e+0       1.85e+0       2.33e-2                  6.32e-2                  5.29e-7
          3    1.86e+0       1.86e+0       3.21e-3                  9.85e-5                  8.88e-25
          4    1.86e+0       1.86e+0       8.12e-5                  5.55e-11                 —
          5    1.86e+0       1.86e+0       1.98e-8                  6.29e-23                 —
          6    1.86e+0       1.86e+0       6.59e-11                 —                        —
          7    1.86e+0       1.86e+0       4.22e-18                 —                        —
          It.  *             *             7                        5                        3
    (b)   0    1.49e+1       1.49e+1       1.49e+1                  1.49e+1                  1.49e+1
          1    1.86e+0       1.86e+0       4.26e+0                  7.45e+0                  1.25e-2
          2    1.86e+0       1.85e+0       9.84e-2                  8.29e-2                  9.99e-7
          3    1.85e+0       1.86e+0       4.25e-3                  5.98e-5                  3.27e-25
          4    1.86e+0       1.86e+0       9.86e-5                  6.15e-11                 —
          5    1.86e+0       1.86e+0       3.26e-8                  9.19e-23                 —
          6    1.85e+0       1.86e+0       1.29e-11                 —                        —
          7    1.86e+0       1.85e+0       5.43e-18                 —                        —
          It.  *             *             7                        5                        3
    (c)   0    1.62e+1       1.62e+1       1.62e+1                  1.62e+1                  1.62e+1
          1    1.86e+0       1.86e+0       5.23e+0                  2.11e+0                  1.98e-2
          2    1.86e+0       1.85e+0       2.33e-2                  5.37e-2                  2.48e-7
          3    1.85e+0       1.86e+0       3.21e-3                  1.11e-5                  4.16e-25
          4    1.86e+0       1.85e+0       8.12e-5                  2.51e-11                 —
          5    1.86e+0       1.86e+0       1.98e-8                  3.26e-23                 —
          6    1.86e+0       1.86e+0       6.59e-11                 —                        —
          7    1.86e+0       1.86e+0       4.22e-18                 —                        —
          It.  *             *             7                        5                        3
    (d)   0    1.74e+1       1.74e+1       1.74e+1                  1.74e+1                  1.74e+1
          1    1.86e+0       1.86e+0       4.25e+0                  3.22e+0                  3.21e-2
          2    1.86e+0       1.85e+0       5.12e-2                  4.59e-2                  5.29e-7
          3    1.86e+0       1.85e+0       1.23e-3                  4.88e-5                  9.99e-25
          4    1.86e+0       1.86e+0       5.56e-5                  9.87e-13                 —
          5    1.85e+0       1.86e+0       8.45e-8                  —                        —
          6    1.86e+0       1.86e+0       3.29e-13                 —                        —
          7    1.86e+0       1.86e+0       —                        —                        —
          It.  *             *             6                        4                        3


    We see from Table 10 that for these choices of the initial points, Algorithm UL, Algorithm UC, and Algorithm MSUCL converge while Algorithm NT and Algorithm CL do not, and Algorithm MSUCL needs fewer iterations than Algorithm UL and Algorithm UC.

    In this paper, we have proposed a multi-step Ulm-Chebyshev-like method for solving nonlinear operator equations, which contains no inverse operators in its expression and does not require computing Jacobian matrices or solving Jacobian equations. We proved that the multi-step Ulm-Chebyshev-like method converges locally to the solution with R-convergence rate 4 under appropriate conditions. As applications, we demonstrated how this result can be used to solve the Chandrasekhar integral equation and inverse eigenvalue problems. The proposed method has a higher convergence order and/or requires fewer operations than the other existing methods, indicating its superior performance. The stability analysis of the new method is a topic for future work.

    Wei Ma: algorithms, software, numerical examples, writing–original draft, writing–review and editing; Ming Zhao: algorithms, software, numerical examples, writing–original draft, writing–review and editing; Jiaxin Li: algorithms, software, numerical examples, writing–original draft, writing–review and editing. All authors contributed equally to this article. All authors have read and approved the final version of the manuscript for publication.

    This work was supported by the Henan Province College Student Innovation and Entrepreneurship Training Program Project (No. 202410481003).

    The authors declare no conflicts of interest.



    [1] W. Ma, Two-step Ulm-Chebyshev-like Cayley transform method for inverse eigenvalue problems, Int. J. Comput. Math., 99 (2022), 391–406. https://doi.org/10.1080/00207160.2021.1913728 doi: 10.1080/00207160.2021.1913728
    [2] W. Ma, Z. Li, Y. Zhang, A two-step Ulm-Chebyshev-like Cayley transform method for inverse eigenvalue problems with multiple eigenvalues, AIMS Math., 9 (2024), 22986–23011. https://doi.org/10.3934/math.20241117 doi: 10.3934/math.20241117
    [3] C. T. Wen, X. S. Chen, H. W. Sun, A two-step inexact Newton-Chebyshev-like method for inverse eigenvalue problems, Linear Algebra Appl., 585 (2020), 241–262. https://doi.org/10.1016/j.laa.2019.10.004 doi: 10.1016/j.laa.2019.10.004
    [4] Y. Wang, W. P. Shen, An extended two-step method for inverse eigenvalue problems with multiple eigenvalues, Numer. Math. Theory Methods Appl., 16 (2023), 968–992. https://doi.org/10.4208/nmtma.OA-2023-0002 doi: 10.4208/nmtma.OA-2023-0002
    [5] Y. S. Luo, W. P. Shen, An Ulm-like algorithm for generalized inverse eigenvalue problems, Numer. Algorithms, 2024. https://doi.org/10.1007/s11075-024-01845-5
    [6] W. Ma, Z. J. Bai, A regularized directional derivative-based Newton method for inverse singular value problems, Inverse Probl., 28 (2012), 125001. https://doi.org/10.1088/0266-5611/28/12/125001 doi: 10.1088/0266-5611/28/12/125001
    [7] W. Ma, Two-step Ulm-Chebyshev-like method for inverse singular value problems, Numer. Linear Algebra Appl., 29 (2022), e2440. https://doi.org/10.1002/nla.2440 doi: 10.1002/nla.2440
    [8] W. Ma, X. S. Chen, Two-step inexact Newton-type method for inverse singular value problems, Numer. Algorithms, 84 (2020), 847–870. https://doi.org/10.1007/s11075-019-00783-x doi: 10.1007/s11075-019-00783-x
    [9] C. T. Kelley, Solution of the Chandrasekhar H-equation by Newton's method, J. Math. Phys., 21 (1980), 1625–1628. https://doi.org/10.1063/1.524647 doi: 10.1063/1.524647
    [10] X. Yan, X. Qian, H. Zhang, S. Song, Numerical approximation to nonlinear delay-differential-algebraic equations with proportional delay using block boundary value methods, J. Comput. Appl. Math., 404 (2022), 113867. https://doi.org/10.1016/j.cam.2021.113867 doi: 10.1016/j.cam.2021.113867
    [11] S. Long, Y. Zhang, S. Zhong, New results on the stability and stabilization for singular neutral systems with time delay, Appl. Math. Comput., 473 (2024), 128643. https://doi.org/10.1016/j.amc.2024.128643 doi: 10.1016/j.amc.2024.128643
    [12] B. Morini, Convergence behaviour of inexact Newton methods, Math. Comp., 68 (1999), 1605–1613. https://doi.org/10.1090/S0025-5718-99-01135-7 doi: 10.1090/S0025-5718-99-01135-7
    [13] J. A. Ezquerro, M. A. Hernández, Generalized differentiability conditions for Newton's method, IMA J. Numer. Anal., 22 (2002), 187–205. https://doi.org/10.1093/imanum/22.2.187 doi: 10.1093/imanum/22.2.187
    [14] C. Chun, Iterative methods improving Newton's method by the decomposition method, Comput. Math. Appl., 50 (2005), 1559–1568. https://doi.org/10.1016/j.camwa.2005.08.022 doi: 10.1016/j.camwa.2005.08.022
    [15] M. Frontini, E. Sormani, Some variants of Newton's method with third-order convergence, Appl. Math. Comput., 140 (2003), 419–426. https://doi.org/10.1016/S0096-3003(02)00238-2 doi: 10.1016/S0096-3003(02)00238-2
    [16] H. H. H. Homeier, On Newton-type methods with cubic convergence, J. Comput. Appl. Math., 176 (2005), 425–432. https://doi.org/10.1016/j.cam.2004.07.027 doi: 10.1016/j.cam.2004.07.027
    [17] M. T. Darvishi, A. Barati, A third-order Newton-type method to solve systems of nonlinear equations, Appl. Math. Comput., 187 (2007), 630–635. https://doi.org/10.1016/j.amc.2006.08.080 doi: 10.1016/j.amc.2006.08.080
    [18] J. Moser, Stable and random motions in dynamical systems with special emphasis on celestial mechanics, Hermann Weyl Lectures, Annals of Mathematics Studies, Princeton University Press, 1973.
    [19] S. Ulm, On iterative methods with successive approximation of the inverse operator, Izv. Akad. Nauk Est. SSR., 16 (1967), 403–411.
    [20] O. H. Hald, On a Newton-Moser type method, Numer. Math., 23 (1975), 411–426. https://doi.org/10.1007/BF01437039 doi: 10.1007/BF01437039
    [21] H. Petzeltova, Remark on a Newton-Moser type method, Comment. Math. Univ. Carolin., 21 (1980), 719–725.
    [22] J. M. Gutiérrez, M. A. Hernández, N. Romero, A note on a modification of Moser's method, J. Complexity, 24 (2008), 185–197. https://doi.org/10.1016/j.jco.2007.04.003 doi: 10.1016/j.jco.2007.04.003
    [23] A. Galperin, Z. Waksman, Ulm's method under regular smoothness, Numer. Funct. Anal. Optim., 19 (1998), 285–307.
    [24] J. A. Ezquerro, M. A. Hernández, The Ulm method under mild differentiability conditions, Numer. Math., 109 (2008), 193–207. https://doi.org/10.1007/s00211-008-0144-z doi: 10.1007/s00211-008-0144-z
    [25] I. K. Argyros, On Ulm's method using divided differences of order one, Numer. Algorithms, 52 (2009), 295–320. https://doi.org/10.1007/s11075-009-9274-3 doi: 10.1007/s11075-009-9274-3
    [26] I. K. Argyros, On Ulm's method for Fréchet differentiable operators, J. Appl. Math. Comput., 31 (2009), 97–111. https://doi.org/10.1007/s12190-008-0194-5 doi: 10.1007/s12190-008-0194-5
    [27] W. P. Shen, T. T. Wei, L. H. Peng, An Ulm-like method for solving nonlinear operator equations, J. Nonlinear Convex Anal., 16 (2015), 1439–1447.
    [28] W. P. Shen, T. T. Wei, S. Guu, Convergence of the Ulm-like method under the Hölder condition, J. Nonlinear Convex Anal., 17 (2016), 701–710.
    [29] J. A. Ezquerro, M. A. Hernández, An Ulm-type method with R-order of convergence three, Nonlinear Anal., 13 (2012), 14–26. https://doi.org/10.1016/j.nonrwa.2011.07.039 doi: 10.1016/j.nonrwa.2011.07.039
    [30] D. K. R. Babajee, M. Z. Dauhooa, M. T. Darvishi, A. Karami, A. Barati, Analysis of two Chebyshev-like third order methods free from second derivatives for solving systems of nonlinear equations, J. Comput. Appl. Math., 233 (2010), 2002–2012. https://doi.org/10.1016/j.cam.2009.09.035 doi: 10.1016/j.cam.2009.09.035
    [31] R. H. Al-Obaidi, M. T. Darvishi, A comparative study on qualification criteria of nonlinear solvers with introducing some new ones, J. Math., 2022 (2022), 4327913. https://doi.org/10.1155/2022/4327913 doi: 10.1155/2022/4327913
    [32] R. Erfanifar, M. Hajarian, A new multi-step method for solving nonlinear systems with high efficiency indices, Numer. Algorithms, 97 (2024), 959–984. https://doi.org/10.1007/s11075-023-01735-2 doi: 10.1007/s11075-023-01735-2
    [33] M. T. Chu, Inverse eigenvalue problems, SIAM Rev., 40 (1998), 1–39. https://doi.org/10.1137/S0036144596303984 doi: 10.1137/S0036144596303984
    [34] M. T. Chu, G. H. Golub, Structured inverse eigenvalue problems, Acta Numer., 11 (2002), 1–71. https://doi.org/10.1017/S0962492902000016 doi: 10.1017/S0962492902000016
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)