1. Introduction
Nonlinear equations arise in many research fields [1,2,3,4,5], including engineering design, physics, and computational science. However, as data scale and problem complexity grow, solving nonlinear equations becomes increasingly challenging. Therefore, studying effective numerical methods for nonlinear equations is of great theoretical and practical significance.
We consider solving the nonlinear equations
$$F(x)=0,\qquad(1.1)$$
where $F(x):\mathbb{R}^n\to\mathbb{R}^m$ is continuously differentiable and the solution set of (1.1), denoted by $X^*$, is nonempty. There are many numerical methods [6,7,8,9,10,11,12] to solve nonlinear equations. Among them, the Levenberg–Marquardt (LM) method [13,14] has attracted much attention: it introduces the LM regularizer into the Gauss–Newton method, which keeps the algorithm well defined when the Jacobian is singular or close to singular. It computes the LM step $\tilde d_k$ as the solution of
$$(J_k^TJ_k+\lambda_kI)\tilde d_k=-J_k^TF_k,\qquad(1.2)$$
where $F_k=F(x_k)$, $J_k=F'(x_k)$ is the Jacobian of $F(x)$ at $x_k$, $\lambda_k>0$ is an LM parameter updated at each iteration, and $I\in\mathbb{R}^{n\times n}$ is the identity matrix. Throughout the paper, $\|\cdot\|$ denotes the Euclidean norm.
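For concreteness, a minimal NumPy sketch of the step computation in (1.2) (a sketch only; a production implementation might instead use a QR factorization of the augmented matrix for better conditioning):

```python
import numpy as np

def lm_step(F, J, lam):
    """Solve (J^T J + lam * I) d = -J^T F for the LM step d.

    F   : residual vector F(x_k), shape (m,)
    J   : Jacobian J(x_k), shape (m, n)
    lam : positive LM parameter lambda_k
    """
    n = J.shape[1]
    # For lam > 0 the coefficient matrix is symmetric positive definite,
    # so the system is solvable even when J is (nearly) rank deficient.
    return np.linalg.solve(J.T @ J + lam * np.eye(n), -J.T @ F)
```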
The choice of the LM parameter is essential for the LM method. Yamashita and Fukushima [15] proved that the LM method has a quadratic convergence rate under the local error bound condition when $\lambda_k=\|F_k\|^2$. Fan and Yuan [16] proposed $\lambda_k=\|F_k\|$, which overcomes the shortcoming that the LM step is too small when the iterate $x_k$ is far away from the solution. Subsequently, Fan [17] chose $\lambda_k=\mu_k\|F_k\|$, in which $\mu_k$ is updated by a trust region technique. Amini [18] proposed the LM parameter $\lambda_k=\mu_k\frac{\|F_k\|}{1+\|F_k\|}$ and proved convergence under the local error bound condition. On the other hand, Ma and Jiang [19] chose the LM parameter as $\theta\|F_k\|+(1-\theta)\|J_k^TF_k\|$ with $\theta\in[0,1]$ and obtained a quadratic convergence rate under the local error bound condition. Fan and Pan [20] proposed a further LM parameter and preserved the quadratic convergence. Hence, the LM parameter is an important component of algorithm research and deserves further study.
To improve the convergence rate and efficiency of the algorithm, Fan [21] proposed the modified LM algorithm, which combines the LM step $\tilde d_k$ in (1.2) with the approximate step
$$(J_k^TJ_k+\lambda_kI)\hat d_k=-J_k^TF(y_k),\qquad(1.4)$$
where $y_k=x_k+\tilde d_k$ and $\lambda_k=\mu_k\|F_k\|^\delta$ with $\delta\in[1,2]$. Using $J_k$ instead of $J(y_k)$ effectively saves Jacobian evaluations. Under the local error bound condition, the modified LM method achieves cubic convergence. Fan and Zeng [22] introduced a new correction step with $\lambda_k=\mu_k\|F_k\|^\delta$, $\delta\in(0,2]$, and the convergence rate was $\min\{2,1+2\delta\}$ under the same conditions. With both schemes, the trial step of each iteration became $s_k=\tilde d_k+\hat d_k$ with unit step size.
and the step size was a unit. Then, Fan[23] proposed the accelerated modified LM method, which introduced a line search along ˆdk of (1.4). The step size was the solution of
By a simple derivation,
If Jkˆdk was close to 0, ˜αk would be too large. An upper bound ˆα>1 for α in (1.5) was set and the step size was chosen as αk=min(˜αk,ˆα). Moreover, the trust region ratio was introduced by
which was used to decide whether to accept the trial step and to update the parameter $\mu_k$. However, the choice of $\hat\alpha$ and its influence on the convergence of the algorithm were not discussed. This inspires us to consider an adaptively updated upper bound for the step size in each iteration, which enables the algorithm to preserve cubic convergence without increasing the cost of Jacobian evaluations. Note that different choices of $\lambda_k$ also lead to different LM methods. We will propose a new LM parameter and construct a new two-step LM method with adaptive step size.
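A small sketch of this safeguarded step-size rule (assuming the least squares minimizer form (1.6); the clamp to $[1,\hat\alpha]$ matches the analysis in Section 2, and the zero-denominator fallback anticipates Remark 2.1):

```python
import numpy as np

def capped_step_size(Fy, J, d_hat, alpha_max):
    """Step size along the approximate step d_hat, clamped to [1, alpha_max].

    Fy        : residual at the intermediate point y_k = x_k + d_tilde
    J         : Jacobian J_k (reused, so no extra Jacobian evaluation)
    d_hat     : approximate step
    alpha_max : upper bound on the step size
    """
    Jd = J @ d_hat
    denom = Jd @ Jd
    if denom == 0.0:             # J_k d_hat = 0: fall back to a unit step
        return 1.0
    alpha = -(Fy @ Jd) / denom   # unconstrained maximizer of (1.5), cf. (1.6)
    return min(max(alpha, 1.0), alpha_max)
```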
When proving the convergence rate, some problems do not satisfy the local error bound condition but do satisfy the Hölderian error bound condition. Zhu et al. [24], Wang et al. [25], Zeng et al. [26], and Chen et al. [27] studied the local convergence rate of the LM method under the Hölderian local error bound condition with different LM parameters, respectively. To expand the scope and practicality of the algorithm, we devote our research to establishing global and local convergence under the Hölderian conditions.
The aim of our research is to propose an effective accelerated adaptive two-step LM algorithm based on a modified criterion for solving nonlinear equations. The key innovations of this paper are as follows: First, we use the convex combination of $\frac{\|F_k\|}{1+\|F_k\|}$ and $\frac{\|J_k^TF_k\|}{1+\|J_k^TF_k\|}$ as a new LM parameter to update the trial step. Second, considering that different approximate steps may need different upper bounds, we introduce a new modified criterion to update the upper bound of the approximate step size, rather than keeping it constant. Third, the convergence of the new method is proved under the Hölderian local error bound condition and the Hölderian continuity of the Jacobian.
The paper is organized as follows. In the next section, a new two-step LM algorithm is described and its global convergence under the Hölderian continuity of the Jacobian is presented. In Section 3, we derive the convergence rate of the new algorithm under the Hölderian local error bound condition and the Hölderian continuity of the Jacobian. In Section 4, numerical experiments show that the new algorithm reduces the numbers of function and Jacobian evaluations. We conclude the paper in Section 5.
2. Algorithm and global convergence
In this section, we propose a novel two-step LM method with a new parameter $\lambda_k$. The upper bound of the approximate step size is adjusted by the modified Metropolis criterion. The global convergence of the new method is proved under the Hölderian continuity of the Jacobian, which is weaker than Lipschitz continuity.
Since the LM step $\tilde d_k$ in (1.2) and the approximate step $\hat d_k$ in (1.4) rely on the choice of $\lambda_k$, we construct a new LM parameter
$$\lambda_k=\mu_k\left[\theta\frac{\|F_k\|}{1+\|F_k\|}+(1-\theta)\frac{\|J_k^TF_k\|}{1+\|J_k^TF_k\|}\right],\quad\theta\in[0,1].\qquad(2.1)$$
When $x_k$ is far from the optimal solution, $\|F_k\|$ and $\|J_k^TF_k\|$ are large enough to make $\frac{\|F_k\|}{1+\|F_k\|}$ and $\frac{\|J_k^TF_k\|}{1+\|J_k^TF_k\|}$ close to 1; at this time, $\lambda_k$ is close to $\mu_k$. Conversely, when $x_k$ approaches the optimal solution, $\theta\frac{\|F_k\|}{1+\|F_k\|}$ and $(1-\theta)\frac{\|J_k^TF_k\|}{1+\|J_k^TF_k\|}$ degenerate into $\theta\|F_k\|$ and $(1-\theta)\|J_k^TF_k\|$, which indicates that $\lambda_k$ is close to the LM parameter $\theta\|F_k\|+(1-\theta)\|J_k^TF_k\|$ of Ma and Jiang [19]. The new LM parameter in (2.1) adapts flexibly to the iteration process and enhances the performance of the LM method.
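In code, the parameter (2.1) is a one-liner (a sketch; `mu` is the scalar $\mu_k$ maintained by the trust region update, and $\theta=0.6$ is the value favored by the experiments in Section 4):

```python
import numpy as np

def lm_parameter(F, J, mu, theta=0.6):
    """New LM parameter (2.1): a convex combination of two saturated norms."""
    nF = np.linalg.norm(F)
    nJF = np.linalg.norm(J.T @ F)
    return mu * (theta * nF / (1.0 + nF) + (1.0 - theta) * nJF / (1.0 + nJF))
```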
The trial step of the new method is
$$s_k=\tilde d_k+\alpha_k\hat d_k,$$
where $\alpha_k$ is the step size along $\hat d_k$. Unlike the reference [23], we propose a new upper bound $\hat\alpha_k$ for the step size in (1.5). Similar to the Metropolis criterion suggested by [28], we give a new modified Metropolis criterion
$$\bar\alpha_k=\begin{cases}1, & |r_{k-1}-1|\le\tau,\\ e^{-|r_{k-1}-1|/T_k}, & |r_{k-1}-1|>\tau,\end{cases}\qquad(2.2)$$
where $0<\tau<1$ is a sufficiently small constant and $T_k$ is a temperature that decreases to 0 as $k\to\infty$ according to the cooling schedule. If $|r_{k-1}-1|\le\tau$, then $r_{k-1}$ is close enough to 1 and we set $\bar\alpha_k=1$. Otherwise, $|r_{k-1}-1|>\tau$ and we set $\bar\alpha_k=e^{-|r_{k-1}-1|/T_k}$, which can be regarded as a probability and also decreases to 0 as $k\to\infty$; this is similar to simulated annealing. We define the upper bound of the step size as $\hat\alpha_k=1+\bar\alpha_k$. In each iteration, $\hat\alpha_k$ is self-adaptively updated by (2.2). Now, we set the step size along $\hat d_k$ as
$$\alpha_k=\min(\tilde\alpha_k,\hat\alpha_k),\qquad(2.3)$$
where $\tilde\alpha_k$ is given by (1.6). Moreover, since $\phi(\alpha)$ is monotonically increasing on $[1,\tilde\alpha_k]$ and $\alpha_k\in[1,\tilde\alpha_k]$, it is easy to see that $\phi(\alpha_k)\ge\phi(1)$. This implies
$$\|F(y_k)+\alpha_kJ_k\hat d_k\|\le\|F(y_k)+J_k\hat d_k\|.\qquad(2.4)$$
Based on the above description, we present the accelerated adaptive two-step Levenberg–Marquardt (AATLM) algorithm.
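The following sketch assembles one AATLM iteration from the pieces above. It is a reading aid rather than the formal algorithm listing: the acceptance threshold `q0`, the $\mu_k$ update constants, the two-term predicted reduction in the ratio, and the cooling schedule are illustrative assumptions consistent with the Steps 1–4 referenced in the remarks and proofs below.

```python
import numpy as np

def aatlm_iteration(F, Jac, x, mu, r_prev, T, theta=0.6, tau=0.1,
                    m0=1e-8, q0=1e-4):
    """One iteration of the AATLM scheme (illustrative constants).

    F      : callable returning the residual vector
    Jac    : callable returning the Jacobian (evaluated once per iteration)
    r_prev : trust region ratio r_{k-1} of the previous iteration
    T      : current temperature T_k of the cooling schedule
    """
    Fx, J = F(x), Jac(x)
    lam = lm_parameter(Fx, J, mu, theta)            # new parameter (2.1)
    A = J.T @ J + lam * np.eye(len(x))
    d_tilde = np.linalg.solve(A, -J.T @ Fx)         # LM step (1.2)
    y = x + d_tilde
    Fy = F(y)                                       # no new Jacobian here
    d_hat = np.linalg.solve(A, -J.T @ Fy)           # approximate step (1.4)
    gap = abs(r_prev - 1.0)                         # Metropolis bound (2.2)
    alpha_bar = 1.0 if gap <= tau else np.exp(-gap / T)
    alpha = capped_step_size(Fy, J, d_hat, 1.0 + alpha_bar)   # (2.3)
    s = d_tilde + alpha * d_hat                     # trial step
    Fs = F(x + s)
    ared = Fx @ Fx - Fs @ Fs                        # actual reduction
    pred = (Fx @ Fx - np.linalg.norm(Fx + J @ d_tilde) ** 2
            + Fy @ Fy - np.linalg.norm(Fy + alpha * (J @ d_hat)) ** 2)
    r = ared / pred                                 # trust region ratio
    if r >= q0:                                     # accept the trial step
        x = x + s
    # Illustrative mu update: shrink when the model predicts well, else grow.
    mu = max(m0, mu / 4.0) if r > 0.75 else (4.0 * mu if r < 0.25 else mu)
    return x, mu, r
```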
Remark 2.1. In Step 2, $\tilde\alpha_k$ is computed by (1.6), which was proposed in [23] under the assumption $J_k\hat d_k\ne0$. In [23], when $J_k\hat d_k$ was close to 0, $\hat\alpha$ was set as the upper bound of $\tilde\alpha_k$; however, the case $J_k\hat d_k=0$ was not discussed. Note that if $\hat d_k\ne0$, then $J_k\hat d_k\ne0$. In fact, if $J_k\hat d_k=0$ holds, from the definition of $\hat d_k$ we have
Since $\hat d_k$ is the solution of
it is easy to obtain
At this time, the left-hand side of the above equation is 0, but the right-hand side is larger than 0, which is a contradiction. Therefore, if $\hat d_k=0$, we set $s_k=\tilde d_k$, and the algorithm degenerates into a general LM algorithm.
To prove the global convergence of the algorithm, we give the following assumption.
Assumption 2.1. (a) The Jacobian $J(x)$ is Hölderian continuous of order $\nu\in(0,1]$, i.e., there exists a positive constant $\kappa_{hj}$ such that
$$\|J(y)-J(x)\|\le\kappa_{hj}\|y-x\|^{\nu}.\qquad(2.8)$$
(b) The Jacobian $J(x)$ is bounded above, i.e., there exists a positive constant $\kappa_{bj}$ such that
$$\|J(x)\|\le\kappa_{bj}.\qquad(2.9)$$
By using (2.8), we have
$$\|F(y)-F(x)-J(x)(y-x)\|\le\frac{\kappa_{hj}}{1+\nu}\|y-x\|^{1+\nu}.\qquad(2.10)$$
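This bound is the standard consequence of Hölder continuity, obtained from the integral form of the mean value theorem:

```latex
\begin{aligned}
\|F(y)-F(x)-J(x)(y-x)\|
  &= \Big\|\int_0^1 \big[J(x+t(y-x))-J(x)\big](y-x)\,dt\Big\| \\
  &\le \int_0^1 \kappa_{hj}\,t^{\nu}\|y-x\|^{\nu}\,\|y-x\|\,dt
   = \frac{\kappa_{hj}}{1+\nu}\,\|y-x\|^{1+\nu}.
\end{aligned}
```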
Lemma 2.1. Under the conditions of Assumption 2.1, the sequence $\{x_k\}$ generated by the AATLM algorithm satisfies:
for all k.
Proof. Since $\tilde d_k$ is the solution of the trust region subproblem
$$\min_{d\in\mathbb{R}^n}\ \|F_k+J_kd\|^2\quad\text{s.t.}\ \|d\|\le\|\tilde d_k\|,$$
for any $\beta\in[0,1]$, it follows:
Then,
If $\hat d_k=0$, (2.12) implies that the conclusion of Lemma 2.1 holds. Otherwise, $\hat d_k$ is the solution of (2.7), and it holds that
According to (2.4), we have
The conclusion follows from adding (2.11) and (2.12). □
Now, we give the global convergence of the AATLM algorithm.
Theorem 2.1. Under the conditions of Assumption 2.1, the sequence $\{x_k\}$ generated by the AATLM algorithm satisfies
$$\lim_{k\to\infty}\|J_k^TF_k\|=0.\qquad(2.13)$$
Proof. We prove by contradiction. Suppose (2.13) is not true. Then there exist a positive constant $\delta$ and infinitely many $k$ such that
$$\|J_k^TF_k\|\ge\delta.\qquad(2.14)$$
Let the index sets $S_1$ and $S_2$ be
where $S_1$ is an infinite set. Consider the following two cases.
Case 1: $S_2$ is finite. We have that
is also finite. Let $\tilde k$ be the largest index of $S_3$, which means that $x_{k+1}=x_k$ holds for all $k\in\{k>\tilde k\,|\,k\in S_1\}$. Define the indicator set
We notice that $\|J_{k+1}^TF_{k+1}\|\ge\delta$ and $x_{k+2}=x_{k+1}$ for all $k\in S_4$; otherwise, if $x_{k+2}\ne x_{k+1}$, then $k+1\in S_3$, which contradicts $\tilde k$ being the largest index of $S_3$. It is easy to get $k+1\in S_4$. By induction, $\|J_k^TF_k\|\ge\delta$ and $x_{k+1}=x_k$ hold for all $k>\tilde k$.
According to Step 3 in the AATLM algorithm, $r_k<q_0$ means that $x_{k+1}=x_k$ holds for all $k>\tilde k$, and from (2.1), (2.5), and (2.6), we obtain:
which implies that
From (2.9), (2.10), (2.7), (2.15), and the definition of $\hat d_k$, we find
for all sufficiently large $k$, where $\bar c$ is a positive constant. So, we conclude
On the other hand, it is clear from (2.10) that
and
From the above formulas and Lemma 2.1, (2.10), (2.14), and (2.17), we have
which means that $r_k\to1$. According to the updating rule of $\mu_k$, there exists a positive constant $M>m_0$ such that $\mu_k<M$ holds for all sufficiently large $k$, which contradicts (2.15). Hence, the assumption (2.14) cannot hold in this case.
Case 2: $S_2$ is infinite. From Lemma 2.1, (2.10), and the fact that $s_k$ is accepted by the AATLM algorithm, we have
and $x_{k+1}-x_k=0$ if $k\notin S_2$, which implies that
and from the definition of $\tilde d_k$, we obtain:
Similarly to (2.16) and (2.17), there exists a constant $\tilde c>0$ such that the following holds for all sufficiently large $k\in S_2$:
It follows from (2.19) that
Moreover, combining with Assumption 2.1, we get
Since (2.14) holds for infinitely many $k$, there exists a large $\hat k$ such that $\|J_{\hat k}^TF_{\hat k}\|\ge\delta$, and
By induction, we find that $\|J_k^TF_k\|\ge\frac{\delta}{2}$ holds for all $k\ge\hat k$, and then we can derive from (2.19)–(2.22) that
and thus,
Similarly to the analysis of (2.18), we have
Therefore, there exists a positive constant $M>m_0$ such that $\mu_k<M$ holds for sufficiently large $k$, which contradicts (2.14). Above all, we get the conclusion immediately. □
Theorem 2.1 indicates that there is $x^*\in X^*$ such that the sequence $\{x_k\}$ generated by the AATLM algorithm converges to $x^*$. For sufficiently large $k$, if $x_k$ lies in a neighborhood of $x^*$, then $x_{k+1}$ and $y_k$ also lie in that neighborhood.
3. Convergence rate of the AATLM algorithm
In this section, we give the properties of the trial step and the bounds of the LM parameter. In order to establish the convergence rate of the AATLM algorithm under the Hölderian local error bound and the Hölderian continuity of the Jacobian, we use the following assumption.
Assumption 3.1. (a) $F(x)$ provides a Hölderian local error bound of order $\gamma\in(0,1]$ on some neighborhood of $x^*\in X^*$, i.e., there exist constants $c>0$ and $0<b<1$ such that
$$c\,\mathrm{dist}(x,X^*)\le\|F(x)\|^{\gamma},\quad\forall x\in N(x^*,b),\qquad(3.1)$$
and when $\gamma=1$, $F(x)$ provides the local error bound.
(b) The Jacobian $J(x)$ is Hölderian continuous of order $\nu\in(0,1]$, i.e., there exists a constant $\kappa_{hj}>0$ such that
$$\|J(y)-J(x)\|\le\kappa_{hj}\|y-x\|^{\nu},\quad\forall x,y\in N(x^*,b).\qquad(3.2)$$
From (3.2), we immediately have
$$\|F(y)-F(x)-J(x)(y-x)\|\le\frac{\kappa_{hj}}{1+\nu}\|y-x\|^{1+\nu},\quad\forall x,y\in N(x^*,b),\qquad(3.3)$$
and there is a constant $\kappa_{bf}>0$ such that
$$\|F(y)-F(x)\|\le\kappa_{bf}\|y-x\|,\quad\forall x,y\in N(x^*,b).\qquad(3.4)$$
Moreover, we denote by $\bar x_k$ the closest point to $x_k$ in $X^*$, i.e., $\mathrm{dist}(x_k,X^*)=\|\bar x_k-x_k\|$.
Combining the results given by Behling and Iusem [29], we assume that $\mathrm{rank}(J(\bar x))=r$ for all $\bar x\in N(x^*,b)\cap X^*$. Suppose the singular value decomposition (SVD) of $J(\bar x_k)$ is
$$J(\bar x_k)=\bar U\begin{pmatrix}\bar\Sigma_1&\\&0\end{pmatrix}\bar V^T=\bar U_1\bar\Sigma_1\bar V_1^T,$$
where $\bar\Sigma_1=\mathrm{diag}(\bar\sigma_1,\ldots,\bar\sigma_r)$ with $\bar\sigma_1\ge\bar\sigma_2\ge\cdots\ge\bar\sigma_r>0$. Correspondingly, the SVD of $J_k$ is
$$J_k=U\begin{pmatrix}\Sigma_1&&\\&\Sigma_2&\\&&0\end{pmatrix}V^T=U_1\Sigma_1V_1^T+U_2\Sigma_2V_2^T,$$
where $\Sigma_1=\mathrm{diag}(\sigma_1,\ldots,\sigma_r)$, $\sigma_1\ge\cdots\ge\sigma_r>0$, and $\Sigma_2=\mathrm{diag}(\sigma_{r+1},\ldots,\sigma_{r+q})$, $\sigma_r\ge\sigma_{r+1}\ge\cdots\ge\sigma_{r+q}>0$. Following the theory of matrix perturbation [30] and the Hölderian continuity of $J_k$, we know
$$\|\mathrm{diag}(\Sigma_1-\bar\Sigma_1,\,\Sigma_2,\,0)\|\le\|J_k-J(\bar x_k)\|\le\kappa_{hj}\|\bar x_k-x_k\|^{\nu},$$
which yields
$$\|\Sigma_2\|\le\kappa_{hj}\|\bar x_k-x_k\|^{\nu}.$$
Lemma 3.1. Under the conditions of Assumption 3.1, if $x_k,y_k\in N(x^*,\frac b4)$ and
$$\nu>\max\Big\{2\Big(\frac1\gamma-1\Big),\frac1{2\gamma}\Big\},$$
there exists a constant $c_1>0$ such that
Proof. Since $x_k\in N(x^*,\frac b4)=\{x\,|\,\|x-x^*\|\le\frac b4\}$, it follows from the definition of $\bar x_k$ that
$$\|\bar x_k-x^*\|\le\|\bar x_k-x_k\|+\|x_k-x^*\|\le2\|x_k-x^*\|\le\frac b2,$$
which means $\bar x_k\in N(x^*,\frac b2)$.
From the definition of $\lambda_k$, we write $\lambda_k^1=\mu_k\theta\frac{\|F_k\|}{1+\|F_k\|}$ and $\lambda_k^2=\mu_k(1-\theta)\frac{\|J_k^TF_k\|}{1+\|J_k^TF_k\|}$. Then, together with (3.1) and $\mu_k>m_0$, we have
As we know, $\|F_k\|^2=F_k^TF_k=F_k^T[F(\bar x_k)+J_k(\bar x_k-x_k)]+F_k^TH_k$, in which $H_k=F_k-F(\bar x_k)-J_k(\bar x_k-x_k)$. So, we have $F_k^TJ_k(\bar x_k-x_k)=\|F_k\|^2-F_k^TH_k$. From Assumption 3.1 and $\nu>2(\frac1\gamma-1)$, it is clear that
holds for some $\hat c>0$. In the same way, we obtain:
Thus, we find that the LM parameter $\lambda_k$ satisfies:
where $\acute c>0$.
In addition, the equivalent problem of (2.11) is
which has the optimal solution $\tilde d_k$. Combining with (3.7), we have that
holds for some $c_{1,1}>0$, which means that
holds for some $c_{1,2}>0$.
By the definition of $\hat d_k$ and (3.3), we obtain
By using the SVD of $J_k$, we have
Since the sequence $\{x_k\}$ converges to the nonempty solution set $X^*$, we have $\kappa_{hj}\|\bar x_k-x_k\|^{\nu}\le\frac{\bar\sigma_r}{2}$ for all sufficiently large $k$; from the lower bound of $\lambda_k$, we get
and
where $\acute c>0$ is a constant. Then, combining (3.9), (3.10), the lower bound of $\lambda_k$, and the range of $\nu$, we can deduce
where $\check c>0$ is a constant, and
From the assumption $\nu>\max\{2(\frac1\gamma-1),\frac1{2\gamma}\}$ and the condition $\nu,\gamma\in(0,1]$, we know $\nu>\frac1\gamma-1$ and $\gamma\in(\frac23,1]$; it is then easy to see that $\nu\in(\frac12,1]$. As the exponent $\gamma$ increases, smaller values of the exponent $\nu$ are allowed. We obtain:
which implies
From the definition of $s_k$, it is easy to see that
The proof is complete. □
Lemma 3.2. Under the conditions of Assumption 3.1, if $x_k,y_k\in N(x^*,\frac b4)$ and
$$\nu>\max\Big\{2\Big(\frac1\gamma-1\Big),\frac1{2\gamma}\Big\},$$
there exists a constant $M>m_0$ such that
$$\mu_k\le M\qquad(3.13)$$
holds for all large $k$.
Proof. We consider the following two cases.
Case 1: If $\|\bar x_k-x_k\|\le\|\tilde d_k\|$, it follows from (3.1), (3.3), and $\nu>2(\frac1\gamma-1)$ that
where $c_{2,1}>0$ is a constant.
Case 2: If $\|\bar x_k-x_k\|>\|\tilde d_k\|$, it follows from the second and third inequalities of (3.14) that
Using the same analysis as in (3.14) and (3.15), we deduce
where $c_{2,2}>0$ is a constant, in the case $\|\bar y_k-y_k\|\le\|\hat d_k\|$, and
holds for $\|\bar y_k-y_k\|>\|\hat d_k\|$.
Hence, it follows from (3.14)–(3.17) and the definition of $\mathrm{Pred}_k$ that
where
From $(J_k^TF_k)^T\tilde d_k<0$, we can derive that $\|F(y_k)\|<\|F_k\|$. Combining (3.1) and (3.3) with (3.12) yields
Since $\nu>\max\{2(\frac1\gamma-1),\frac1{2\gamma}\}$, it is clear that $r_k\to1$. Therefore, from Step 4 of the AATLM algorithm we conclude that (3.13) is valid, and Lemma 3.2 is proved. □
Lemma 3.3. Under the conditions of Assumption 3.1, if $x_k,y_k\in N(x^*,\frac b4)$ and $\nu>2(\frac1\gamma-1)$, we have
where $\acute c>0$ is a constant.
Proof. It follows from (3.7) that
By using Lemma 3.2, (3.2), (3.4), and the definition of $\lambda_k$, we conclude
which means that $\lambda_k$ is bounded. Above all, we have the conclusion immediately. □
We use the SVD to derive the convergence rate of the AATLM algorithm. By the SVD of $J_k$, we get
Lemma 3.4. Under the conditions of Assumption 3.1, if $x_k,y_k\in N(x^*,\frac b4)$, we have
(a) $\|U_1U_1^TF_k\|\le\kappa_{bf}\|\bar x_k-x_k\|$;
(b) $\|U_2U_2^TF_k\|\le\big(\frac{\kappa_{hj}}{1+\nu}+\kappa_{hj}\big)\|\bar x_k-x_k\|^{1+\nu}$;
(c) $\|U_3U_3^TF_k\|\le\frac{\kappa_{hj}}{1+\nu}\|\bar x_k-x_k\|^{1+\nu}$.
Proof. We obtain (a) directly from (3.4). Let $\bar s_k=-J_k^+F_k$, where $J_k^+$ is the pseudo-inverse of $J_k$ and $\bar s_k$ is the least squares solution of $\min_s\|F_k+J_ks\|$. Then, we obtain (c) from (3.3):
Let $\tilde J_k=U_1\Sigma_1V_1^T$ and $\tilde s_k=-\tilde J_k^+F_k$, where $\tilde J_k^+$ is the pseudo-inverse of $\tilde J_k$ and $\tilde s_k$ is the least squares solution of $\min_s\|F_k+\tilde J_ks\|$. Together with (3.4) and (3.5), this implies
so we obtain (b) from the orthogonality of $U_2$ and $U_3$. The proof is complete. □
Lemma 3.5. Under the conditions of Assumption 3.1, if $x_k,y_k\in N(x^*,\frac b4)$ and $\nu>\max\{2(\frac1\gamma-1),\frac1{2\gamma}\}$, we have
(a) $\|U_1U_1^TF(y_k)\|\le O(\|\bar x_k-x_k\|^{1+\nu})$;
(b) $\|U_2U_2^TF(y_k)\|\le O(\|\bar x_k-x_k\|^{\nu+\gamma(1+\nu)})$;
(c) $\|U_3U_3^TF(y_k)\|\le O(\|\bar x_k-x_k\|^{\nu+\gamma(1+\nu)})$.
Proof. From (3.21), Lemmas 3.3 and 3.4, and the range of ν, we have
and from (3.3), (3.8), and (3.23), we have
Thus, it is clear that
which indicates that the following condition of the Hölderian local error bound,
is obtained.
Then, we let $\bar p_k=-J_k^+F(y_k)$, so that $\bar p_k$ is the least squares solution of $\min_p\|F(y_k)+J_kp\|$. From (3.2), (3.3), (3.9), (3.24), and the range of $\nu$, we have
Let $\tilde J_k=U_1\Sigma_1V_1^T$ and $\tilde p_k=-\tilde J_k^+F(y_k)$, where $\tilde p_k$ is the least squares solution of $\min_p\|F(y_k)+\tilde J_kp\|$. It follows from (3.2), (3.3), (3.6), (3.8), (3.24), and the range of $\nu$ that
and then, together with the orthogonality of $U_2$ and $U_3$, we obtain (b), and Lemma 3.5 is proved. □
Theorem 3.1. Under the conditions of Assumption 3.1, if $x_k,y_k\in N(x^*,\frac b4)$, $\nu>2(\frac1\gamma-1)$, and $\nu>\frac1{2\gamma}$, then the sequence $\{x_k\}$ generated by the AATLM algorithm converges to the solution set of (1.1) with order $\nu\gamma+\gamma^2(1+\nu)$.
Proof. From (3.5), (3.20), Lemma 3.5, and the upper bound of $\|\lambda_k^{-1}\Sigma_2\|$, we have
It follows from (3.18), (3.22), and Lemma 3.5 that
Hence, combining with (3.8), (3.25), (3.26), and Assumption 3.1, we know
where $\xi=\min\{\nu+\gamma(1+\nu),\ 1+2\nu,\ 3\nu+\gamma(1+\nu)-\frac1\gamma,\ (1+\nu)^2,\ (1+\nu)(2\nu+\gamma(1+\nu)-\frac1\gamma)\}$. Considering $\gamma\in(\frac23,1]$ and $\nu\in(\frac12,1]$, we have
and
By $\nu>\frac1{2\gamma}$ and $\gamma\in(\frac23,1]$, we derive
and
These mean that $\xi=\nu+\gamma(1+\nu)$ and that $\{x_k\}$ converges to some solution of (1.1) with rate $\nu\gamma+\gamma^2(1+\nu)$.
Moreover, together with $\|\bar x_k-x_k\|\le\|\bar x_{k+1}-x_k\|\le\|\bar x_{k+1}-x_{k+1}\|+\|s_k\|$ and (3.27), we have
for all sufficiently large k. It is clear from Lemma 3.1 that
By the above explanation, together with the condition $\nu>\max\{2(\frac1\gamma-1),\frac1{2\gamma}\}$, we conclude that Theorem 3.1 is valid. The proof is complete. □
In addition, different values of $\nu$ and $\gamma$ yield different convergence rates.
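For illustration, a few admissible pairs $(\nu,\gamma)$, i.e., pairs satisfying $\nu>\max\{2(\frac1\gamma-1),\frac1{2\gamma}\}$, and the resulting order $\nu\gamma+\gamma^2(1+\nu)$ from Theorem 3.1:

```latex
\begin{array}{cc|c}
\nu & \gamma & \nu\gamma+\gamma^{2}(1+\nu) \\ \hline
1    & 1   & 3 \ \text{(cubic)} \\
0.8  & 1   & 2.6 \\
1    & 0.9 & 2.52 \\
0.75 & 0.9 & 2.0925
\end{array}
```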
4. Numerical experiments
This section reports the numerical results of the AATLM algorithm. All experiments were performed on a PC with an Intel i7-13700UH CPU and 32.00 GB RAM, using MATLAB R2022a (64-bit).
We compare the AATLM algorithm with the LM1 algorithm in [26], the MLM algorithm in [21], and the AMLM algorithm in [23]. Their parameters are chosen as follows:
The termination condition is $\|J_k^TF_k\|\le10^{-6}$ or $k\ge1000$. In the listed numerical results, "NF", "NJ", "NT = NF + NJ × n", "NK", and "Time" represent the numbers of function evaluations, Jacobian evaluations, total evaluations, iterations, and the CPU time, respectively. Examples 4.1 and 4.2 are two singular problems from [26]. These problems do not satisfy the local error bound condition but satisfy the Hölderian local error bound condition; the Jacobians $J(x)$ of these problems are not Lipschitz continuous but are Hölderian continuous.
Example 4.1. [26] Consider the following Function 1:
The initial point is $x_0=(3,1,0,1)^T$, and the optimal solution is $x^*=(0,0,0,0)^T$. The results are listed in Table 1.
Example 4.2. [26] Consider the following Function 2:
The initial point is $x_0=(3,-1,0,1)^T$, and the optimal solution is $x^*=(0,0,0,0)^T$. The results are listed in Table 2.
Tables 1 and 2 show that the numbers of iterations and of function and Jacobian evaluations of the AATLM algorithm are lower than those of the LM1, MLM, and AMLM algorithms. Since the dimension of these problems is small, there is almost no difference in CPU time.
Similar to [21,23], we also consider the following singular problem [31]:
$$\hat F(x)=F(x)-J(x^*)A(A^TA)^{-1}A^T(x-x^*),\qquad(4.1)$$
where $F(x)$ is a nonsingular test function given by Moré, Garbow, and Hillstrom in [32], $x^*$ is the root of $F(x)$, and $A\in\mathbb{R}^{n\times k}$ has full column rank with $1\le k\le n$. Then
$$\hat J(x^*)=J(x^*)\big(I-A(A^TA)^{-1}A^T\big)$$
has rank $n-k$. In this paper, we take
$$A=(1,1,\ldots,1)^T\in\mathbb{R}^{n\times1},$$
which means that the rank of $\hat J(x^*)$ is $n-1$.
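A sketch of this construction (assuming, as in the related literature, that $A$ is the all-ones column, which gives $\mathrm{rank}(\hat J(x^*))=n-1$):

```python
import numpy as np

def make_singular_problem(F, Jac, x_star, A):
    """Build Fhat(x) = F(x) - J(x*) A (A^T A)^{-1} A^T (x - x*), as in (4.1).

    A : (n, k) matrix of full column rank; A = np.ones((n, 1)) gives
        rank(Jhat(x*)) = n - 1.
    """
    Jstar = Jac(x_star)
    P = A @ np.linalg.solve(A.T @ A, A.T)   # projector A (A^T A)^{-1} A^T
    Fhat = lambda x: F(x) - Jstar @ (P @ (x - x_star))
    Jhat = lambda x: Jac(x) - Jstar @ P
    return Fhat, Jhat
```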
Example 4.3. [32] Consider the extended Rosenbrock function
$$F_{2i-1}(x)=10(x_{2i}-x_{2i-1}^2),\quad F_{2i}(x)=1-x_{2i-1},\quad i=1,\ldots,n/2.$$
The initial point is $x_0=(-1.2,1,-1.2,1,\ldots)^T$, and the optimal solution is $x^*=(1,1,\ldots,1)^T$. The results are listed in Table 3.
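The residual in this standard Moré–Garbow–Hillstrom form, as a short sketch ($n$ must be even):

```python
import numpy as np

def extended_rosenbrock(x):
    """F_{2i-1} = 10 (x_{2i} - x_{2i-1}^2),  F_{2i} = 1 - x_{2i-1}."""
    F = np.empty(len(x))
    F[0::2] = 10.0 * (x[1::2] - x[0::2] ** 2)
    F[1::2] = 1.0 - x[0::2]
    return F

x0 = np.tile([-1.2, 1.0], 4)   # n = 8; the root is the all-ones vector
```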
Example 4.4. [32] Consider the extended Powell singular function
$$F_{4i-3}(x)=x_{4i-3}+10x_{4i-2},\quad F_{4i-2}(x)=\sqrt5\,(x_{4i-1}-x_{4i}),\quad F_{4i-1}(x)=(x_{4i-2}-2x_{4i-1})^2,\quad F_{4i}(x)=\sqrt{10}\,(x_{4i-3}-x_{4i})^2,\quad i=1,\ldots,n/4.$$
The initial point is $x_0=(3,-1,0,1,\ldots)^T$, and the optimal solution is $x^*=(0,0,\ldots,0)^T$. The results are listed in Table 4.
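Likewise, a sketch of the Powell singular residual ($n$ must be a multiple of 4):

```python
import numpy as np

def extended_powell_singular(x):
    """F_{4i-3} = x_{4i-3} + 10 x_{4i-2},  F_{4i-2} = sqrt(5)(x_{4i-1} - x_{4i}),
    F_{4i-1} = (x_{4i-2} - 2 x_{4i-1})^2,  F_{4i} = sqrt(10)(x_{4i-3} - x_{4i})^2."""
    F = np.empty(len(x))
    F[0::4] = x[0::4] + 10.0 * x[1::4]
    F[1::4] = np.sqrt(5.0) * (x[2::4] - x[3::4])
    F[2::4] = (x[1::4] - 2.0 * x[2::4]) ** 2
    F[3::4] = np.sqrt(10.0) * (x[0::4] - x[3::4]) ** 2
    return F

x0 = np.tile([3.0, -1.0, 0.0, 1.0], 2)   # n = 8; the root is the zero vector
```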
Tables 3 and 4 show that the AATLM algorithm outperforms the LM1, MLM, and AMLM algorithms in the numbers of iterations and of function and Jacobian evaluations. For most problems, the AATLM algorithm also requires less CPU time than the other algorithms.
We carried out 100 experiments; all test functions are listed in Table 5. Problems 1 and 2 are Examples 4.1 and 4.2 from [26], Problems 3 and 4 are Examples 4.3 and 4.4, Problems 3–12 are from [32] and have the same form as (4.1), and Problems 13–16 are transformed from the CUTEr library in [33]. All of the test problems satisfy the assumptions required in this paper.
Following Dolan's [34] evaluation criteria, we show the performance profiles for the numbers of function evaluations, Jacobian evaluations, iterations, and the CPU time in Figure 1. The parameter τ represents the performance ratio. When τ is close to 1 and Ψ remains constant, the number of Jacobian evaluations or iterations of the current algorithm is closer to the minimum than for the other algorithms. When τ is fixed and Ψ is close to 1, the current algorithm can solve more of the problems.
Figure 1 shows that the AATLM algorithm performs better than the other algorithms in the numbers of Jacobian evaluations and iterations. From Figure 1(a), the AATLM algorithm outperforms the MLM and AMLM algorithms in the number of function evaluations. Since the LM1 algorithm evaluates $F_k$ only once per iteration, it has a higher curve in Figure 1(a) when τ∈[1,2.38]. In Figure 1(b), the AATLM algorithm solves more test problems with fewer Jacobian evaluations. When τ∈(1.49,5], the LM1 algorithm performs better than the MLM and AMLM algorithms. According to Figure 1(c), AATLM solves 86% of the problems with the least number of iterations, while LM1, MLM, and AMLM solve 12%, 32%, and 34% of the problems, respectively, which means that the AATLM algorithm solves more problems with fewer iterations. In Figure 1(d), the LM1, MLM, AMLM, and AATLM algorithms solve 60%, 54%, 42%, and 68% of the problems with the least CPU time, respectively. In summary, the results indicate that the AATLM algorithm is a promising method for solving nonlinear equations.
In addition, we consider the influence of different $\theta$ on the AATLM algorithm. Figure 2 shows the performance profiles for the numbers of Jacobian evaluations and iterations of the AATLM algorithm with $\theta$ chosen from the set $\{0.2,0.4,0.6,0.8\}$. We find that the curve for $\theta=0.6$ is higher than the others, which means that the new algorithm with $\theta=0.6$ solves more problems with fewer Jacobian evaluations and iterations.
5. Conclusions
In this paper, we constructed a new LM parameter in the form of a convex combination to obtain the LM step and the approximate step. A new modified Metropolis criterion was introduced to update the upper bound of the approximate step size, yielding an accelerated adaptive two-step LM algorithm. The global and local convergence of the new algorithm were studied under the Hölderian local error bound condition and the Hölderian continuity of the Jacobian, which are more general than the local error bound condition and the Lipschitz continuity of the Jacobian. The numerical results showed the efficiency of the AATLM algorithm. In the course of this research, we noticed that different LM parameters could be considered at different stages of the algorithm. In future work, we will explore a new LM parameter and introduce a nonmonotone technique into the two-step LM algorithm for solving nonlinear equations.
Author contributions
Dingyu Zhu: conceptualization, writing–original draft, software, methodology, writing–review and editing; Yueting Yang: writing–original draft, supervision, funding acquisition, methodology, project administration, writing–review and editing; Mingyuan Cao: writing–original draft, supervision, funding acquisition, methodology, project administration, writing–review and editing. All authors have read and approved the final version of the manuscript for publication.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
The authors would like to thank the Science and Technology Development Planning Grant Program of Jilin Province (YDZJ202101ZYTS167, 20200403193SF) and the Graduate Innovation Project of Beihua University (2023005, 2023035).
Conflict of interest
The authors declare no conflicts of interest.