Research article

A derivative-free RMIL conjugate gradient method for constrained nonlinear systems of monotone equations

  • Received: 16 March 2025 Revised: 06 May 2025 Accepted: 12 May 2025 Published: 21 May 2025
  • MSC : 90C56, 90C30

• In this paper, we introduce a derivative-free RMIL conjugate gradient method designed for nonlinear systems of monotone equations with convex constraints. The proposed method refines the RMIL method by integrating it with projection techniques and a derivative-free line search strategy. Under certain reasonable assumptions, we prove the global convergence of the proposed method. Numerical experiments demonstrate that the method is highly effective. Moreover, we employ the method to solve sparse signal restoration problems.

    Citation: Xiaowei Fang. A derivative-free RMIL conjugate gradient method for constrained nonlinear systems of monotone equations[J]. AIMS Mathematics, 2025, 10(5): 11656-11675. doi: 10.3934/math.2025528




1. Introduction

    This paper aims at finding solutions to the following constrained nonlinear monotone equations:

$$F(x) = 0, \quad x \in \Omega, \tag{1.1}$$

in which $F : \mathbb{R}^n \to \mathbb{R}^n$ is monotone and continuously differentiable, and $\Omega \subseteq \mathbb{R}^n$ is a nonempty closed convex set. The monotonicity of $F$ means that

$$(F(x) - F(y))^T (x - y) \ge 0, \quad \forall x, y \in \mathbb{R}^n. \tag{1.2}$$
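For instance, the mapping $F(x) = 2x - \sin(x)$ (used later as Test Problem 3) satisfies (1.2), since each component function has derivative $2 - \cos(t) \ge 1 > 0$. A quick numerical spot-check of the definition (our own illustration, not from the paper):

```matlab
% Spot-check of the monotonicity property (1.2) for F(x) = 2x - sin(x).
F = @(x) 2*x - sin(x);
x = randn(5,1); y = randn(5,1);
disp((F(x) - F(y))' * (x - y) >= 0)   % always prints 1 (true) for this F
```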

Nonlinear monotone equations emerge from many different problems and are widely applied across disciplines; examples include automatic control systems [1], compressive sensing [2,3,4], and optimal power flow [5].

There are numerous iterative algorithms for solving (1.1), for instance the Newton algorithm [6], the quasi-Newton algorithm [7], the Gauss-Newton algorithm [8], and the BFGS algorithm [9]. These algorithms exhibit good convergence properties when started from suitable initial guesses. Nevertheless, they are unsuitable for large-scale nonlinear equations, since the Jacobian matrix or an approximation of it must be computed at every iteration.

Recently, some optimization methods built on the projection and proximal point method presented by Solodov and Svaiter [10] have been extended to solve (1.1). These include conjugate gradient methods, originally applied to large-scale unconstrained optimization problems because of their low memory requirements and simple structure. For example, Li and Li [11] described a class of derivative-free approaches for systems of monotone equations. Cheng [12] formulated a PRP-type method using a line search strategy, and Awwal et al. [13] proposed a DFP-type derivative-free method. Ahookhosh et al. [14] developed two new derivative-free approaches for systems of nonlinear monotone equations. Building on the RMIL conjugate gradient method of Rivaie et al. [15], Fang [16,17] put forward a set of adjusted derivative-free gradient-type approaches for solving nonlinear equations. Based on a hybrid idea and a new adaptive line search strategy, Liu et al. [18] proposed a three-term algorithm for nonlinear monotone equations under convex constraints. Using the spectral secant equation, Zhang et al. [19] developed a three-term projection method for nonlinear monotone equations and applied it to a problem in signal processing. Abdullahi et al. [20,21] described a modified self-adaptive algorithm and a three-term projection algorithm for systems of monotone nonlinear equations and employed them in signal and image recovery problems. Aji et al. [22] proposed a novel spectral algorithm with an inertial technique for solving systems of monotone equations and their applications.

Motivated by the aforementioned works, we give a new conjugate gradient projection algorithm for the nonlinear monotone equations (1.1). The structure of this paper is as follows. Section 2 proposes the new method. In Section 3, we conduct the convergence analysis of the proposed method. Section 4 showcases the numerical results obtained from solving monotone nonlinear equations. Subsequently, in Section 5, we apply the proposed method to the signal recovery problem. Finally, Section 6 presents the conclusions. Throughout, $\|\cdot\|$ stands for the Euclidean norm and $\|\cdot\|_1$ for the $\ell_1$ norm.

2. The proposed method

    In this section, we first review the conjugate gradient method, which is employed to solve unconstrained optimization problems of the form

$$\min f(x), \tag{2.1}$$

in which $f : \mathbb{R}^n \to \mathbb{R}$ is a smooth function, and $g_k$ denotes the gradient $\nabla f(x_k)$. The conjugate gradient method has the following iterative formula:

$$x_{k+1} = x_k + \alpha_k d_k, \tag{2.2}$$

in which the step length $\alpha_k$ is given by some line search technique, and the search direction $d_k$ is derived from

$$d_k = \begin{cases} -g_k, & \text{if } k = 0, \\ -g_k + \beta_k d_{k-1}, & \text{if } k \ge 1. \end{cases} \tag{2.3}$$

In recent times, Rivaie et al. [15] proposed the RMIL conjugate gradient method (named after its developers Rivaie, Mustafa, Ismail, and Leong), in which $\beta_k$ is computed by

$$\beta_k = \frac{g_k^T (g_k - g_{k-1})}{\|d_{k-1}\|^2}. \tag{2.4}$$

    Results from numerical experiments reveal that the method is highly efficient.

Motivated by the results of the RMIL method, we develop a derivative-free approach for nonlinear systems of monotone equations. For simplicity, we substitute $g_k$ in Eqs (2.3) and (2.4) with $F_k$, where $F_k = F(x_k)$. Moreover, we change the coefficient of $F_k$ from $-1$ to $-\theta_k$ for $k \ge 1$. This ensures that the search direction generated by the algorithm satisfies the sufficient descent property, and it effectively improves the computational efficiency of the algorithm. Consequently, the search direction $d_k$ is given by

$$d_k = \begin{cases} -F_k, & \text{if } k = 0, \\ -\theta_k F_k + \beta_k d_{k-1}, & \text{if } k \ge 1, \end{cases} \tag{2.5}$$

where $\theta_k = \dfrac{\beta_k F_k^T d_{k-1}}{\|F_k\|^2} + 1$, $\beta_k = \dfrac{F_k^T y_{k-1}}{\|d_{k-1}\|^2}$, $F_k = F(x_k)$, and $y_{k-1} = F_k - F_{k-1}$. From the definition of $d_k$, it is not difficult to see that for $k \ge 1$ we have

$$F_k^T d_k = F_k^T\left[-\left(\frac{\beta_k F_k^T d_{k-1}}{\|F_k\|^2} + 1\right)F_k + \beta_k d_{k-1}\right] = -\frac{\beta_k F_k^T d_{k-1}}{\|F_k\|^2}\|F_k\|^2 - \|F_k\|^2 + \beta_k F_k^T d_{k-1} = -\|F_k\|^2. \tag{2.6}$$

When $k = 0$, we get $F_0^T d_0 = -\|F_0\|^2$, so the sufficient descent identity $F_k^T d_k = -\|F_k\|^2$ holds for all $k \ge 0$.
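As a concrete illustration, the update (2.5) costs only a few vector operations per iteration. The following MATLAB sketch (our own; variable and function names are illustrative) computes $d_k$ and can be used to verify (2.6) numerically:

```matlab
% Sketch of the direction update (2.5); Fk = F(x_k), Fkm1 = F(x_{k-1}),
% dkm1 = d_{k-1}. For every k the identity Fk'*d = -norm(Fk)^2 of (2.6)
% holds up to round-off.
function d = rmil_direction(Fk, Fkm1, dkm1, k)
    if k == 0
        d = -Fk;                                        % d_0 = -F_0
    else
        y     = Fk - Fkm1;                              % y_{k-1}
        beta  = (Fk' * y) / norm(dkm1)^2;               % beta_k
        theta = beta * (Fk' * dkm1) / norm(Fk)^2 + 1;   % theta_k
        d     = -theta * Fk + beta * dkm1;              % d_k in (2.5)
    end
end
```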

Next we state the projection operator $P_\Omega[\cdot]$, which is defined as follows:

$$P_\Omega[x] = \arg\min\{\|x - y\| \mid y \in \Omega\}, \quad \forall x \in \mathbb{R}^n. \tag{2.7}$$

    This operation refers to projecting the vector x onto the closed convex set Ω. By doing so, it is guaranteed that the subsequent iterative point produced by our algorithm remains within the domain Ω. The projection operator is known to possess the non-expansive property, namely

$$\|P_\Omega[x] - P_\Omega[y]\| \le \|x - y\|, \quad \forall x, y \in \mathbb{R}^n. \tag{2.8}$$
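For the feasible sets used in the experiments of Section 4, $P_\Omega$ has a simple closed form; for example (our own note, with the standard formulas):

```matlab
% Closed-form projections for two common feasible sets.
proj_orthant = @(x) max(x, 0);                    % Omega = R^n_+
proj_box     = @(x, lo, hi) min(max(x, lo), hi);  % Omega = {x : lo <= x <= hi}
```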

    From here on, our attention is directed towards the method for solving the monotone equations (1.1). We adopt the projection procedure presented in [10] and obtain a point

$$z_k = x_k + \alpha_k d_k, \tag{2.9}$$

    with the result that

$$F(z_k)^T (x_k - z_k) > 0, \tag{2.10}$$

where $\alpha_k$ is obtained using a suitable line search technique.

Meanwhile, for any $x^*$ satisfying $F(x^*) = 0$, owing to the monotonicity of $F$, we have

$$F(z_k)^T (x^* - z_k) = -(F(x^*) - F(z_k))^T (x^* - z_k) \le 0. \tag{2.11}$$

Combining this with the expression (2.10), we conclude that the hyperplane

$$H_k = \{x \in \mathbb{R}^n \mid F(z_k)^T (x - z_k) = 0\} \tag{2.12}$$

strictly separates $x_k$ from the solutions of the equations (1.1). Consequently, Solodov and Svaiter [10] calculate the next iterate $x_{k+1}$ by projecting $x_k$ onto the hyperplane $H_k$; that is, $x_{k+1}$ is defined as

$$x_{k+1} = x_k - \frac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2} F(z_k). \tag{2.13}$$

    Now, we give the steps of our algorithm.

Algorithm 2.1.

Step 0: Choose $\sigma > 0$, $s > 0$, $\epsilon > 0$, $0 < \rho < 1$, $0 < \gamma < 2$, and the starting point $x_0 \in \mathbb{R}^n$. Set $k = 0$.

Step 1: If $\|F(x_k)\| < \epsilon$, stop. Otherwise, move to Step 2.

Step 2: Define the search direction $d_k$ through (2.5).

Step 3: Determine the step length $\alpha_k = \max\{s\rho^i \mid i = 0, 1, 2, \dots\}$ satisfying

$$-F(x_k + \alpha_k d_k)^T d_k \ge \sigma \alpha_k \|F(x_k + \alpha_k d_k)\| \|d_k\|^2, \tag{2.14}$$

then set $z_k = x_k + \alpha_k d_k$ and move to Step 4.

Step 4: If $z_k \in \Omega$ and $\|F(z_k)\| < \epsilon$, set $x_{k+1} = z_k$ and stop. Otherwise, compute

$$x_{k+1} = P_\Omega\left[x_k - \gamma \frac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2} F(z_k)\right] \tag{2.15}$$

and set $k := k + 1$, then move to Step 1.
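The steps above translate directly into code. The sketch below is our reading of Algorithm 2.1, specialized to $\Omega = \mathbb{R}^n_+$ so that $P_\Omega[\cdot] = \max\{\cdot, 0\}$; the parameter values follow Section 4, and rmil_direction is the helper sketched after (2.6).

```matlab
% Sketch of Algorithm 2.1 for Omega = R^n_+. F is a function handle.
function [x, k] = algorithm21(F, x0, maxit)
    sigma = 1e-4; s = 1; eps0 = 1e-5; rho = 0.55; gamma = 1.2;  % Section 4 values
    proj = @(v) max(v, 0);                    % P_Omega for Omega = R^n_+
    x = x0; Fx = F(x); Fold = []; dold = [];
    for k = 0:maxit
        if norm(Fx) < eps0, return; end       % Step 1
        d = rmil_direction(Fx, Fold, dold, k);            % Step 2: (2.5)
        alpha = s;                            % Step 3: backtracking on (2.14)
        while true
            z  = x + alpha * d;
            Fz = F(z);
            if -Fz' * d >= sigma * alpha * norm(Fz) * norm(d)^2, break; end
            alpha = rho * alpha;
        end
        if all(z >= 0) && norm(Fz) < eps0     % Step 4: z_k in Omega and accurate
            x = z; return
        end
        lambda = Fz' * (x - z) / norm(Fz)^2;  % relaxed projection step (2.15)
        x = proj(x - gamma * lambda * Fz);
        Fold = Fx; dold = d; Fx = F(x);       % shift history for the next direction
    end
end
```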

3. Convergence analysis

    In this section, to establish the global convergence of Algorithm 2.1, the following assumptions are needed.

    Assumption 3.1. (1) The set of solutions for problem (1.1) is nonempty.

(2) The mapping $F(\cdot)$ is Lipschitz continuous on $\mathbb{R}^n$, that is,

$$\|F(x) - F(y)\| \le L\|x - y\|, \quad \forall x, y \in \mathbb{R}^n, \tag{3.1}$$

    in which L represents a positive constant.

    From Assumption 3.1, it can be inferred that

$$\|F(x)\| \le \kappa, \quad \forall x \in \mathbb{R}^n, \tag{3.2}$$

    in which κ is defined as a positive constant.

Lemma 3.1. Assume that Assumption 3.1 holds and that the sequences $\{x_k\}, \{\alpha_k\}, \{d_k\}$ are generated by Algorithm 2.1. Then, for any $x^* \in \Omega$ satisfying $F(x^*) = 0$ and any $\gamma \in (0, 2)$, we have

$$\|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 - \gamma(2 - \gamma)\sigma^2 \|\alpha_k d_k\|^4. \tag{3.3}$$

    Furthermore, it holds that

$$\lim_{k \to +\infty} \|\alpha_k d_k\| = 0. \tag{3.4}$$

Proof. Suppose $F(x^*) = 0$ and $x^* \in \Omega$; by the monotonicity of $F$, we obtain

$$F(z_k)^T (z_k - x^*) \ge 0. \tag{3.5}$$

    Combining with (3.5), we get

$$F(z_k)^T (x_k - x^*) = F(z_k)^T (x_k - z_k) + F(z_k)^T (z_k - x^*) \ge F(z_k)^T (x_k - z_k). \tag{3.6}$$

    Thus, from (2.8), (2.10), (2.14), (2.15), and (3.6), we have

$$\begin{aligned}
\|x_{k+1} - x^*\|^2 - \|x_k - x^*\|^2 &= \left\|P_\Omega\!\left[x_k - \gamma \frac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2} F(z_k)\right] - x^*\right\|^2 - \|x_k - x^*\|^2 \\
&\le \left\|x_k - \gamma \frac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2} F(z_k) - x^*\right\|^2 - \|x_k - x^*\|^2 \\
&= -2\gamma \frac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2} F(z_k)^T (x_k - x^*) + \gamma^2 \frac{(F(z_k)^T (x_k - z_k))^2}{\|F(z_k)\|^2} \\
&\le -2\gamma \frac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2} F(z_k)^T (x_k - z_k) + \gamma^2 \frac{(F(z_k)^T (x_k - z_k))^2}{\|F(z_k)\|^2} \\
&= -\gamma(2 - \gamma) \frac{(F(z_k)^T (x_k - z_k))^2}{\|F(z_k)\|^2} \\
&\le -\gamma(2 - \gamma)\sigma^2 \|\alpha_k d_k\|^4, \tag{3.7}
\end{aligned}$$

    where the last inequality follows from (2.14), since $F(z_k)^T (x_k - z_k) = -\alpha_k F(z_k)^T d_k \ge \sigma \alpha_k^2 \|F(z_k)\| \|d_k\|^2$.

    Then, we can deduce that

$$\gamma(2 - \gamma)\sigma^2 \sum_{i=0}^{k} \|\alpha_i d_i\|^4 \le \|x_0 - x^*\|^2 - \|x_{k+1} - x^*\|^2 \le \|x_0 - x^*\|^2, \tag{3.8}$$

which, together with $\gamma \in (0, 2)$, implies that

$$\lim_{k \to +\infty} \|\alpha_k d_k\| = 0. \tag{3.9}$$

Lemma 3.2. Assume that Assumption 3.1 holds and that the sequences $\{x_k\}, \{d_k\}$ are generated by Algorithm 2.1. Then we have

$$\|d_k\| \le \kappa(1 + 2\gamma L s), \tag{3.10}$$

in which $\kappa > 0$, $s > 0$, $L > 0$, and $0 < \gamma < 2$.

Proof. From (2.8) and Step 4 of Algorithm 2.1, we obtain

$$\begin{aligned}
\|x_k - x_{k-1}\| &= \left\|P_\Omega\!\left[x_{k-1} - \gamma \frac{F(z_{k-1})^T (x_{k-1} - z_{k-1})}{\|F(z_{k-1})\|^2} F(z_{k-1})\right] - x_{k-1}\right\| \\
&\le \left\|x_{k-1} - \gamma \frac{F(z_{k-1})^T (x_{k-1} - z_{k-1})}{\|F(z_{k-1})\|^2} F(z_{k-1}) - x_{k-1}\right\| \\
&= \gamma \frac{|F(z_{k-1})^T (x_{k-1} - z_{k-1})|}{\|F(z_{k-1})\|^2} \|F(z_{k-1})\| \le \gamma \|x_{k-1} - z_{k-1}\|. \tag{3.11}
\end{aligned}$$

For $k \ge 1$, using (2.5), (3.1), (3.2), (3.11), and Step 3 of Algorithm 2.1, we have

$$\begin{aligned}
\|d_k\| &= \left\|-\left(\frac{\beta_k F_k^T d_{k-1}}{\|F_k\|^2} + 1\right)F_k + \beta_k d_{k-1}\right\| \le \|F_k\| + 2|\beta_k| \|d_{k-1}\| \\
&\le \|F_k\| + 2\|F_k\| \|d_{k-1}\| \frac{\|y_{k-1}\|}{\|d_{k-1}\|^2} = \|F_k\| + 2\|F_k\| \frac{\|F_k - F_{k-1}\|}{\|d_{k-1}\|} \\
&\le \|F_k\| + 2L\|F_k\| \frac{\|x_k - x_{k-1}\|}{\|d_{k-1}\|} \le \|F_k\| + 2\gamma L \|F_k\| \frac{\|x_{k-1} - z_{k-1}\|}{\|d_{k-1}\|} \\
&= \|F_k\| + 2\gamma L \alpha_{k-1} \|F_k\| \frac{\|d_{k-1}\|}{\|d_{k-1}\|} = \|F_k\|(1 + 2\gamma L \alpha_{k-1}) \le \kappa(1 + 2\gamma L s). \tag{3.12}
\end{aligned}$$

    From (2.5) and (3.2), we get

$$\|d_0\| = \|F_0\| \le \kappa. \tag{3.13}$$

Combining (3.12) with (3.13) yields (3.10).

Lemma 3.3. Assume that Assumption 3.1 holds and that the sequences $\{x_k\}$, $\{z_k\}$, and $\{\alpha_k\}$ are generated by Algorithm 2.1. Then we have

$$\alpha_k \ge \min\left\{s, \;\frac{\|F_k\|^2}{\rho^{-1}\kappa^2 (1 + 2\gamma L s)^2 \left(L + \sigma\kappa + \sigma\kappa\rho^{-1} L s (1 + 2\gamma L s)\right)}\right\}. \tag{3.14}$$

Proof. In the case where $\alpha_k \ne s$, by the line search rule of Algorithm 2.1, the trial step $\rho^{-1}\alpha_k$ fails to satisfy (2.14), namely

$$-F(x_k + \rho^{-1}\alpha_k d_k)^T d_k < \sigma \rho^{-1}\alpha_k \|F(x_k + \rho^{-1}\alpha_k d_k)\| \|d_k\|^2. \tag{3.15}$$

    Combining with (3.1), (3.2) and (3.10), we obtain

$$\|F(x_k + \rho^{-1}\alpha_k d_k)\| = \|F(x_k + \rho^{-1}\alpha_k d_k) - F(x_k) + F(x_k)\| \le L\rho^{-1}\alpha_k \|d_k\| + \kappa \le \kappa\left(1 + \rho^{-1} L s (1 + 2\gamma L s)\right). \tag{3.16}$$

It follows from (2.6), (3.1), (3.10), (3.15), and (3.16) that

$$\begin{aligned}
\|F_k\|^2 = -F_k^T d_k &= [F(x_k + \rho^{-1}\alpha_k d_k) - F(x_k)]^T d_k - [F(x_k + \rho^{-1}\alpha_k d_k)]^T d_k \\
&\le L\rho^{-1}\alpha_k \|d_k\|^2 + \sigma\rho^{-1}\alpha_k \|F(x_k + \rho^{-1}\alpha_k d_k)\| \|d_k\|^2 \\
&\le \rho^{-1}\alpha_k \|d_k\|^2 \left(L + \sigma(L\rho^{-1}\alpha_k \|d_k\| + \kappa)\right) \\
&\le \rho^{-1}\alpha_k \kappa^2 (1 + 2\gamma L s)^2 \left(L + \sigma\kappa + \sigma\kappa\rho^{-1} L s (1 + 2\gamma L s)\right). \tag{3.17}
\end{aligned}$$

    The above inequality shows that

$$\alpha_k \ge \frac{\|F_k\|^2}{\rho^{-1}\kappa^2 (1 + 2\gamma L s)^2 \left(L + \sigma\kappa + \sigma\kappa\rho^{-1} L s (1 + 2\gamma L s)\right)}. \tag{3.18}$$

    This implies (3.14).

Theorem 3.4. Assume that Assumption 3.1 holds and that the sequences $\{x_k\}$ and $\{F_k\}$ are generated by Algorithm 2.1. Then we obtain

$$\liminf_{k \to \infty} \|F_k\| = 0. \tag{3.19}$$

Proof. Assume that (3.19) is false. Then there exists a positive constant $\xi$ such that, for all $k \ge 0$,

$$\|F_k\| \ge \xi. \tag{3.20}$$

    Based on (2.6) and (3.20), we obtain

$$\|d_k\| = \frac{\|F_k\| \|d_k\|}{\|F_k\|} \ge \frac{|F_k^T d_k|}{\|F_k\|} = \frac{\|F_k\|^2}{\|F_k\|} = \|F_k\| \ge \xi. \tag{3.21}$$

    Combining with Lemma 3.3, we obtain

$$\alpha_k \|d_k\| \ge \min\left\{s\xi, \;\frac{\xi^3}{\rho^{-1}\kappa^2 (1 + 2\gamma L s)^2 \left(L + \sigma\kappa + \sigma\kappa\rho^{-1} L s (1 + 2\gamma L s)\right)}\right\} > 0, \tag{3.22}$$

which contradicts conclusion (3.4) of Lemma 3.1. Hence (3.19) holds.

4. Numerical experiments

    In this section, we conduct a series of numerical experiments to assess the performance of Algorithm 2.1. We compare it with the three-term projection method of Zhang et al. [19], the three-term conjugate gradient method of Gao et al. [23], and the PRP-like method of Abubakar et al. [24]. All tests are run in Matlab R2018a on a personal computer equipped with 32 GB of RAM and an 11th Gen Intel(R) Core(TM) i7-11700K processor.

We test the algorithms on ten problems that have been widely used in the literature, for dimensions 50000 and 200000. All test problems are initialized from the following eight starting points, derived from [17,19,23]: $x_1 = (10, 10, \dots, 10)^T$, $x_2 = (-10, -10, \dots, -10)^T$, $x_3 = (1, 1, \dots, 1)^T$, $x_4 = (0.1, 0.1, \dots, 0.1)^T$, $x_5 = (\frac{1}{2}, \frac{1}{2^2}, \dots, \frac{1}{2^n})^T$, $x_6 = (1, \frac{1}{2}, \dots, \frac{1}{n})^T$, $x_7 = (\frac{1}{n}, \frac{2}{n}, \dots, 1)^T$, $x_8 = (\frac{n-1}{n}, \frac{n-2}{n}, \dots, 0)^T$. For every algorithm and each test problem, the program terminates when any of the following three conditions is met: (1) $\|F(x_k)\| \le 10^{-6}$; (2) $\|F(z_k)\| \le 10^{-6}$; (3) the algorithm fails to converge within 1000 iterations. The parameters of the methods introduced by Zhang, Gao, and Abubakar are set as described in their respective articles. For Algorithm 2.1, we set $\sigma = 10^{-4}$, $s = 1$, $\epsilon = 10^{-5}$, $\rho = 0.55$, $\gamma = 1.2$. We now list the test problems, where the mapping $F$ is defined by $F(x) = (f_1(x), f_2(x), \dots, f_n(x))^T$.

Test Problem 1. This problem, sourced from [23], is

    $$f_i(x) = x_i - \sin(|x_i| - 1), \quad i = 1, 2, \dots, n, \qquad \Omega = \Big\{x \in \mathbb{R}^n \;\Big|\; \sum_{i=1}^n x_i \le n, \; x_i \ge -1, \; i = 1, 2, \dots, n\Big\}.$$

Test Problem 2. This problem, sourced from [23], is

    $$f_i(x) = e^{x_i} - 1, \quad i = 1, 2, \dots, n, \qquad \Omega = \mathbb{R}^n_+.$$

Test Problem 3. This problem, sourced from [23], is

    $$f_i(x) = 2x_i - \sin(x_i), \quad i = 1, 2, \dots, n, \qquad \Omega = \mathbb{R}^n_+.$$

Test Problem 4. This problem, sourced from [19], is

    $$f_i(x) = x_i - 3x_i\left(\frac{\sin(x_i)}{3} - 0.66\right) + 2, \quad i = 1, 2, \dots, n, \qquad \Omega = \{x \in \mathbb{R}^n \mid x_i \ge -5, \; i = 1, 2, \dots, n\}.$$

Test Problem 5. This problem, sourced from [19], is

    $$f_i(x) = (e^{x_i})^2 + 3\sin(x_i)\cos(x_i) - 1, \quad i = 1, 2, \dots, n, \qquad \Omega = \{x \in \mathbb{R}^n \mid x_i \ge -5, \; i = 1, 2, \dots, n\}.$$

Test Problem 6. This problem, sourced from [19], is

    $$f_1(x) = 4x_1(x_1^2 + x_2^2) - 4, \\
    f_i(x) = 4x_i(x_{i-1}^2 + x_i^2) + 4x_i(x_i^2 + x_{i+1}^2) - 4, \quad i = 2, 3, \dots, n-1, \\
    f_n(x) = 4x_n(x_{n-1}^2 + x_n^2), \qquad \Omega = \mathbb{R}^n_+.$$

Test Problem 7. This problem, sourced from [12], is

    $$f_1(x) = 2.5x_1 + x_2 - 1, \\
    f_i(x) = x_{i-1} + 2.5x_i + x_{i+1} - 1, \quad i = 2, 3, \dots, n-1, \\
    f_n(x) = x_{n-1} + 2.5x_n - 1, \qquad \Omega = \mathbb{R}^n_+.$$

Test Problem 8. This problem, sourced from [25], is

    $$f_1(x) = \cos(x_1) - 9 + 3x_1 + 8\exp(x_2), \\
    f_i(x) = \cos(x_i) - 9 + 3x_i + 8\exp(x_{i-1}), \quad i = 2, 3, \dots, n, \qquad \Omega = \mathbb{R}^n_+.$$

Test Problem 9. This problem, sourced from [25], is

    $$f_1(x) = 1 - (x_1 + 1)^2 - \exp(x_1), \\
    f_i(x) = 1 - (x_i + 1)^2 + \cos(x_i + 1) - 2\exp(x_i), \quad i = 2, 3, \dots, n-1, \\
    f_n(x) = 1 - (x_n + 1)^2 - \exp(x_n), \qquad \Omega = \mathbb{R}^n_+.$$

Test Problem 10. This problem, sourced from [25], is

    $$f_i(x) = 2x_i - \sin|x_i|, \quad i = 1, 2, \dots, n, \qquad \Omega = \mathbb{R}^n_+.$$
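To make the setup concrete, two of the problems above can be encoded for the solver sketched in Section 2 as follows (our own encoding, not the author's script):

```matlab
n  = 50000;
F2 = @(x) exp(x) - 1;              % Test Problem 2, Omega = R^n_+
F3 = @(x) 2*x - sin(x);            % Test Problem 3, Omega = R^n_+
x1 = 10 * ones(n,1);               % starting point x_1
x5 = (1/2).^(1:n)';                % starting point x_5
[sol, iters] = algorithm21(F2, x1, 1000);   % run the sketch on TP2 from x_1
```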

In Tables 1, 2, and 3, we present the detailed outcomes in the format "Niter/Nfun/Time", where "Niter" represents the number of iterations, "Nfun" denotes the number of function value calculations, and "Time" stands for the CPU time in seconds. "Dim" indicates the dimension of the test problems, and "Sta" denotes the starting point. A comprehensive analysis of Tables 1, 2, and 3 clearly shows that, on the given problems, Algorithm 2.1 registered the fewest values of Niter and Nfun in a large number of instances.

    Table 1.  The results of four algorithms for Test Problems 1, 2, 3, and 4.
    Problem Dim Sta Zhang Gao Abubakar Algorithm 2.1
    Niter/Nfun/Time Niter/Nfun/Time Niter/Nfun/Time Niter/Nfun/Time
    TP1 50000 x1 18 / 53 / 0.043 12 / 34 / 0.031 10 / 39 / 0.025 18 / 53 / 0.047
    x2 18 / 54 / 0.038 11 / 33 / 0.021 10 / 39 / 0.020 18 / 54 / 0.040
    x3 17 / 52 / 0.035 10 / 30 / 0.024 9 / 38 / 0.022 17 / 52 / 0.033
    x4 14 / 42 / 0.030 11 / 33 / 0.025 10 / 40 / 0.024 14 / 42 / 0.031
    x5 17 / 51 / 0.034 11 / 33 / 0.021 36 / 179 / 0.099 17 / 51 / 0.035
    x6 17 / 51 / 0.036 11 / 33 / 0.021 39 / 194 / 0.105 18 / 54 / 0.033
    x7 21 / 58 / 0.043 14 / 39 / 0.030 48 / 228 / 0.129 22 / 61 / 0.042
    x8 21 / 58 / 0.041 14 / 39 / 0.027 48 / 228 / 0.140 22 / 61 / 0.045
    200000 x1 19 / 56 / 0.191 13 / 37 / 0.137 11 / 43 / 0.123 19 / 56 / 0.171
    x2 19 / 57 / 0.195 12 / 36 / 0.143 11 / 43 / 0.122 19 / 57 / 0.193
    x3 18 / 55 / 0.189 11 / 33 / 0.117 10 / 42 / 0.123 18 / 55 / 0.166
    x4 15 / 45 / 0.161 12 / 36 / 0.136 10 / 40 / 0.126 15 / 45 / 0.142
    x5 17 / 51 / 0.186 11 / 33 / 0.111 39 / 192 / 0.516 18 / 54 / 0.203
    x6 18 / 54 / 0.180 11 / 33 / 0.114 39 / 194 / 0.527 18 / 54 / 0.173
    x7 21 / 58 / 0.199 14 / 39 / 0.141 50 / 238 / 0.670 21 / 60 / 0.192
    x8 21 / 58 / 0.222 14 / 39 / 0.142 50 / 238 / 0.663 21 / 60 / 0.197
    TP2 50000 x1 1 / 15 / 0.009 15 / 59 / 0.046 1 / 37 / 0.017 1 / 15 / 0.007
    x2 1 / 2 / 0.002 1 / 2 / 0.002 1 / 2 / 0.002 1 / 2 / 0.002
    x3 1 / 2 / 0.002 1 / 2 / 0.002 1 / 2 / 0.001 1 / 2 / 0.001
    x4 16 / 33 / 0.021 11 / 23 / 0.018 1 / 3 / 0.001 16 / 33 / 0.020
    x5 13 / 27 / 0.012 10 / 22 / 0.010 31 / 64 / 0.028 13 / 27 / 0.013
    x6 15 / 31 / 0.021 11 / 25 / 0.015 33 / 71 / 0.041 14 / 29 / 0.016
    x7 19 / 39 / 0.026 14 / 31 / 0.020 55 / 116 / 0.082 19 / 39 / 0.022
    x8 19 / 39 / 0.030 14 / 31 / 0.021 55 / 116 / 0.075 19 / 39 / 0.024
    200000 x1 1 / 15 / 0.045 16 / 61 / 0.220 1 / 37 / 0.085 1 / 15 / 0.045
    x2 1 / 2 / 0.009 1 / 2 / 0.007 1 / 2 / 0.009 1 / 2 / 0.007
    x3 1 / 2 / 0.007 1 / 2 / 0.006 1 / 2 / 0.006 1 / 2 / 0.006
    x4 17 / 35 / 0.131 12 / 25 / 0.113 1 / 3 / 0.007 17 / 34 / 0.124
    x5 13 / 27 / 0.095 10 / 22 / 0.069 31 / 64 / 0.176 13 / 27 / 0.075
    x6 15 / 31 / 0.112 11 / 25 / 0.096 33 / 71 / 0.238 14 / 29 / 0.109
    x7 20 / 41 / 0.167 15 / 33 / 0.121 57 / 120 / 0.421 20 / 41 / 0.150
    x8 20 / 41 / 0.166 15 / 33 / 0.120 57 / 120 / 0.413 20 / 41 / 0.144
    TP3 50000 x1 2 / 6 / 0.005 14 / 31 / 0.022 1 / 6 / 0.007 2 / 6 / 0.007
    x2 1 / 4 / 0.005 1 / 3 / 0.003 7 / 18 / 0.012 1 / 4 / 0.004
    x3 1 / 3 / 0.002 1 / 3 / 0.002 7 / 15 / 0.010 1 / 3 / 0.002
    x4 16 / 33 / 0.022 12 / 25 / 0.017 6 / 13 / 0.007 16 / 32 / 0.017
    x5 13 / 27 / 0.015 9 / 19 / 0.009 20 / 41 / 0.018 13 / 27 / 0.011
    x6 14 / 29 / 0.016 10 / 21 / 0.011 37 / 75 / 0.035 14 / 29 / 0.013
    x7 18 / 37 / 0.024 13 / 27 / 0.015 47 / 95 / 0.059 18 / 37 / 0.025
    x8 18 / 37 / 0.024 13 / 27 / 0.014 47 / 95 / 0.059 18 / 37 / 0.021
    200000 x1 2 / 6 / 0.030 15 / 33 / 0.148 1 / 6 / 0.029 2 / 6 / 0.031
    x2 1 / 4 / 0.019 1 / 3 / 0.013 7 / 18 / 0.065 1 / 4 / 0.019
    x3 1 / 3 / 0.008 1 / 3 / 0.010 7 / 15 / 0.047 1 / 3 / 0.010
    x4 17 / 35 / 0.131 12 / 25 / 0.099 6 / 13 / 0.036 15 / 30 / 0.091
    x5 13 / 27 / 0.080 9 / 19 / 0.057 20 / 41 / 0.119 13 / 27 / 0.068
    x6 14 / 29 / 0.101 10 / 21 / 0.068 37 / 75 / 0.221 14 / 29 / 0.088
    x7 18 / 37 / 0.140 14 / 29 / 0.111 49 / 99 / 0.300 18 / 37 / 0.109
    x8 18 / 37 / 0.126 14 / 29 / 0.115 49 / 99 / 0.330 18 / 37 / 0.117
    TP4 50000 x1 14 / 56 / 0.044 27 / 131 / 0.107 6 / 48 / 0.032 14 / 56 / 0.046
    x2 14 / 56 / 0.045 27 / 132 / 0.082 6 / 44 / 0.031 14 / 56 / 0.041
    x3 13 / 53 / 0.038 27 / 134 / 0.093 5 / 41 / 0.021 13 / 53 / 0.035
    x4 14 / 57 / 0.040 21 / 103 / 0.069 5 / 40 / 0.023 14 / 57 / 0.034
    x5 23 / 93 / 0.062 29 / 144 / 0.100 45 / 360 / 0.197 19 / 77 / 0.050
    x6 23 / 93 / 0.055 29 / 144 / 0.105 52 / 416 / 0.233 20 / 81 / 0.049
    x7 30 / 120 / 0.090 30 / 148 / 0.093 70 / 556 / 0.317 23 / 92 / 0.056
    x8 30 / 120 / 0.077 30 / 148 / 0.099 70 / 556 / 0.336 23 / 92 / 0.067
    200000 x1 15 / 60 / 0.198 28 / 136 / 0.472 6 / 48 / 0.156 15 / 60 / 0.196
    x2 15 / 60 / 0.195 28 / 137 / 0.406 6 / 44 / 0.132 15 / 60 / 0.199
    x3 13 / 53 / 0.164 28 / 139 / 0.416 5 / 41 / 0.098 13 / 53 / 0.150
    x4 14 / 57 / 0.183 22 / 108 / 0.314 5 / 40 / 0.100 14 / 57 / 0.156
    x5 23 / 93 / 0.279 30 / 149 / 0.463 45 / 360 / 0.838 20 / 81 / 0.218
    x6 23 / 93 / 0.264 30 / 149 / 0.473 53 / 422 / 0.986 20 / 81 / 0.220
    x7 31 / 124 / 0.368 31 / 153 / 0.506 72 / 572 / 1.386 23 / 92 / 0.262
    x8 31 / 124 / 0.374 31 / 153 / 0.469 72 / 572 / 1.395 23 / 92 / 0.257

    Table 2.  The results of four algorithms for Test Problems 5, 6, 7, and 8.
    Problem Dim Sta Zhang Gao Abubakar Algorithm 2.1
    Niter/Nfun/Time Niter/Nfun/Time Niter/Nfun/Time Niter/Nfun/Time
    TP5 50000 x1 9 / 45 / 0.197 27 / 157 / 0.758 9 / 106 / 0.478 9 / 56 / 0.215
    x2 9 / 24 / 0.040 21 / 85 / 0.111 9 / 47 / 0.066 9 / 24 / 0.040
    x3 5 / 20 / 0.022 14 / 67 / 0.079 6 / 51 / 0.045 5 / 20 / 0.025
    x4 4 / 17 / 0.021 13 / 64 / 0.067 5 / 46 / 0.045 4 / 17 / 0.020
    x5 33 / 142 / 0.098 12 / 59 / 0.041 48 / 419 / 0.261 11 / 45 / 0.028
    x6 34 / 147 / 0.135 14 / 72 / 0.068 48 / 436 / 0.362 13 / 52 / 0.058
    x7 38 / 167 / 0.190 17 / 87 / 0.095 fail / fail / fail 15 / 62 / 0.073
    x8 37 / 164 / 0.168 17 / 87 / 0.106 fail / fail / fail 15 / 62 / 0.063
    200000 x1 9 / 45 / 0.802 29 / 165 / 3.370 9 / 106 / 1.911 9 / 56 / 0.875
    x2 9 / 24 / 0.179 21 / 85 / 0.456 9 / 47 / 0.256 9 / 24 / 0.174
    x3 5 / 20 / 0.091 14 / 67 / 0.313 6 / 51 / 0.176 5 / 20 / 0.089
    x4 4 / 17 / 0.070 13 / 64 / 0.279 5 / 46 / 0.151 4 / 17 / 0.067
    x5 33 / 142 / 0.462 12 / 59 / 0.203 48 / 419 / 1.064 11 / 45 / 0.131
    x6 32 / 139 / 0.521 14 / 72 / 0.265 48 / 436 / 1.342 13 / 52 / 0.196
    x7 34 / 150 / 0.656 17 / 87 / 0.397 fail / fail / fail 15 / 62 / 0.266
    x8 43 / 174 / 0.859 17 / 87 / 0.451 fail / fail / fail 15 / 62 / 0.250
    TP6 50000 x1 41 / 269 / 0.102 32 / 287 / 0.103 69 / 1169 / 0.378 37 / 239 / 0.085
    x2 140 / 1211 / 0.420 320 / 4034 / 1.358 fail / fail / fail 144 / 1415 / 0.479
    x3 45 / 283 / 0.104 61 / 471 / 0.165 54 / 809 / 0.258 31 / 188 / 0.069
    x4 51 / 313 / 0.117 388 / 2730 / 1.005 177 / 2499 / 0.813 30 / 180 / 0.063
    x5 34 / 208 / 0.075 27 / 208 / 0.076 50 / 746 / 0.245 31 / 188 / 0.069
    x6 44 / 268 / 0.097 27 / 210 / 0.075 55 / 820 / 0.264 31 / 189 / 0.067
    x7 49 / 308 / 0.113 495 / 3482 / 1.295 68 / 1070 / 0.344 39 / 244 / 0.086
    x8 53 / 336 / 0.122 31 / 244 / 0.088 75 / 1193 / 0.374 34 / 216 / 0.077
    200000 x1 40 / 265 / 0.572 47 / 448 / 0.916 67 / 1123 / 2.037 36 / 235 / 0.477
    x2 180 / 1562 / 3.159 fail / fail / fail 50 / 903 / 1.714 221 / 2338 / 4.424
    x3 42 / 262 / 0.588 59 / 453 / 0.967 53 / 794 / 1.399 32 / 194 / 0.403
    x4 47 / 292 / 0.635 404 / 2835 / 6.352 99 / 1401 / 2.509 32 / 193 / 0.397
    x5 53 / 328 / 0.715 30 / 232 / 0.504 52 / 779 / 1.373 31 / 189 / 0.398
    x6 42 / 263 / 0.581 28 / 218 / 0.469 55 / 824 / 1.472 35 / 213 / 0.448
    x7 66 / 414 / 0.907 399 / 2816 / 6.304 75 / 1202 / 2.128 38 / 239 / 0.483
    x8 59 / 378 / 0.828 32 / 252 / 0.543 76 / 1211 / 2.237 37 / 234 / 0.508
    TP7 50000 x1 62 / 217 / 0.078 112 / 487 / 0.172 46 / 372 / 0.114 66 / 239 / 0.085
    x2 76 / 257 / 0.092 101 / 432 / 0.150 59 / 481 / 0.141 71 / 255 / 0.091
    x3 69 / 237 / 0.089 139 / 563 / 0.202 47 / 383 / 0.115 64 / 231 / 0.081
    x4 69 / 233 / 0.087 136 / 548 / 0.203 46 / 369 / 0.119 61 / 219 / 0.086
    x5 64 / 217 / 0.082 100 / 432 / 0.156 44 / 356 / 0.103 60 / 216 / 0.073
    x6 62 / 212 / 0.078 94 / 394 / 0.143 46 / 375 / 0.111 58 / 209 / 0.074
    x7 63 / 216 / 0.078 93 / 406 / 0.148 47 / 384 / 0.116 62 / 224 / 0.081
    x8 63 / 216 / 0.078 93 / 406 / 0.144 47 / 382 / 0.111 62 / 224 / 0.081
    200000 x1 61 / 212 / 0.520 114 / 501 / 1.400 42 / 344 / 0.617 69 / 248 / 0.565
    x2 76 / 262 / 0.668 101 / 444 / 1.087 60 / 490 / 0.908 70 / 252 / 0.557
    x3 70 / 243 / 0.618 102 / 445 / 1.098 53 / 427 / 0.835 65 / 234 / 0.559
    x4 71 / 241 / 0.672 97 / 421 / 1.077 46 / 370 / 0.692 65 / 233 / 0.511
    x5 68 / 233 / 0.619 92 / 399 / 0.966 46 / 368 / 0.664 62 / 223 / 0.486
    x6 56 / 194 / 0.508 127 / 513 / 1.289 44 / 355 / 0.644 59 / 213 / 0.472
    x7 63 / 219 / 0.551 95 / 417 / 1.014 45 / 363 / 0.666 66 / 237 / 0.559
    x8 63 / 219 / 0.530 95 / 417 / 1.010 45 / 363 / 0.669 66 / 237 / 0.538
    TP8 50000 x1 21 / 140 / 0.149 28 / 215 / 0.232 1 / 46 / 0.077 21 / 140 / 0.149
    x2 1 / 5 / 0.008 1 / 4 / 0.007 2 / 21 / 0.019 1 / 5 / 0.008
    x3 1 / 6 / 0.006 1 / 5 / 0.006 2 / 24 / 0.023 1 / 6 / 0.007
    x4 18 / 109 / 0.091 25 / 173 / 0.163 1 / 13 / 0.011 18 / 109 / 0.097
    x5 373 / 2929 / 1.437 26 / 90 / 0.053 fail / fail / fail 12 / 58 / 0.033
    x6 146 / 1338 / 1.183 8 / 36 / 0.035 fail / fail / fail 8 / 34 / 0.033
    x7 284 / 1640 / 1.546 48 / 222 / 0.150 65 / 936 / 0.759 56 / 299 / 0.273
    x8 300 / 1714 / 1.616 32 / 156 / 0.112 32 / 984 / 3.637 45 / 243 / 0.224
    200000 x1 22 / 146 / 0.601 29 / 222 / 1.012 1 / 46 / 0.308 22 / 146 / 0.596
    x2 1 / 5 / 0.033 1 / 4 / 0.028 2 / 21 / 0.089 1 / 5 / 0.032
    x3 1 / 6 / 0.037 1 / 5 / 0.026 2 / 24 / 0.099 1 / 6 / 0.029
    x4 19 / 115 / 0.385 25 / 173 / 0.650 1 / 13 / 0.047 19 / 115 / 0.373
    x5 373 / 2929 / 6.855 26 / 90 / 0.313 fail / fail / fail 12 / 58 / 0.166
    x6 fail / fail / fail 8 / 36 / 0.152 fail / fail / fail 8 / 34 / 0.155
    x7 318 / 1853 / 8.556 67 / 298 / 1.056 65 / 1010 / 4.233 62 / 334 / 1.378
    x8 305 / 2005 / 9.320 43 / 188 / 0.723 34 / 584 / 5.046 49 / 261 / 1.107

    Table 3.  The results of four algorithms for Test Problems 9 and 10.
    Problem Dim Sta Zhang Gao Abubakar Algorithm 2.1
    Niter/Nfun/Time Niter/Nfun/Time Niter/Nfun/Time Niter/Nfun/Time
    TP9 50000 x1 1 / 2 / 0.004 1 / 2 / 0.004 1 / 2 / 0.004 1 / 2 / 0.004
    x2 1 / 2 / 0.004 1 / 2 / 0.005 1 / 2 / 0.004 1 / 2 / 0.005
    x3 1 / 2 / 0.008 1 / 2 / 0.007 1 / 2 / 0.006 1 / 2 / 0.007
    x4 3 / 4 / 0.016 4 / 8 / 0.027 4 / 5 / 0.019 3 / 4 / 0.013
    x5 3 / 4 / 0.005 3 / 4 / 0.005 3 / 4 / 0.005 3 / 4 / 0.004
    x6 4 / 5 / 0.015 3 / 4 / 0.011 3 / 4 / 0.014 4 / 5 / 0.015
    x7 2 / 3 / 0.012 2 / 3 / 0.009 2 / 3 / 0.006 2 / 3 / 0.011
    x8 2 / 3 / 0.014 2 / 3 / 0.006 2 / 3 / 0.005 2 / 3 / 0.013
    200000 x1 1 / 2 / 0.017 1 / 2 / 0.017 1 / 2 / 0.017 1 / 2 / 0.017
    x2 1 / 2 / 0.019 1 / 2 / 0.018 1 / 2 / 0.017 1 / 2 / 0.018
    x3 1 / 2 / 0.025 1 / 2 / 0.024 1 / 2 / 0.024 1 / 2 / 0.028
    x4 3 / 4 / 0.072 4 / 8 / 0.087 4 / 5 / 0.069 3 / 4 / 0.039
    x5 3 / 4 / 0.022 3 / 4 / 0.023 3 / 4 / 0.023 3 / 4 / 0.021
    x6 4 / 5 / 0.062 3 / 4 / 0.053 3 / 4 / 0.049 4 / 5 / 0.063
    x7 2 / 3 / 0.053 2 / 3 / 0.028 2 / 3 / 0.029 2 / 3 / 0.044
    x8 2 / 3 / 0.050 2 / 3 / 0.034 2 / 3 / 0.028 2 / 3 / 0.051
    TP10 50000 x1 2 / 6 / 0.006 14 / 31 / 0.022 1 / 6 / 0.005 2 / 6 / 0.007
    x2 1 / 4 / 0.004 1 / 3 / 0.002 2 / 8 / 0.007 1 / 4 / 0.003
    x3 16 / 34 / 0.020 1 / 4 / 0.003 7 / 19 / 0.009 16 / 34 / 0.022
    x4 16 / 33 / 0.017 12 / 25 / 0.016 6 / 13 / 0.008 16 / 32 / 0.022
    x5 13 / 27 / 0.013 9 / 19 / 0.010 20 / 42 / 0.019 13 / 27 / 0.010
    x6 14 / 29 / 0.016 10 / 21 / 0.010 24 / 49 / 0.023 14 / 29 / 0.012
    x7 18 / 37 / 0.024 13 / 27 / 0.014 29 / 60 / 0.031 18 / 37 / 0.019
    x8 18 / 37 / 0.020 13 / 27 / 0.015 29 / 60 / 0.036 18 / 37 / 0.020
    200000 x1 2 / 6 / 0.024 15 / 33 / 0.129 1 / 6 / 0.017 2 / 6 / 0.025
    x2 1 / 4 / 0.013 1 / 3 / 0.011 2 / 8 / 0.030 1 / 4 / 0.015
    x3 17 / 36 / 0.127 1 / 4 / 0.013 7 / 19 / 0.061 13 / 27 / 0.097
    x4 17 / 35 / 0.128 12 / 25 / 0.098 6 / 13 / 0.038 15 / 30 / 0.093
    x5 13 / 27 / 0.087 9 / 19 / 0.067 20 / 42 / 0.117 13 / 27 / 0.074
    x6 14 / 29 / 0.103 10 / 21 / 0.074 24 / 49 / 0.170 14 / 29 / 0.093
    x7 18 / 37 / 0.141 14 / 29 / 0.103 31 / 64 / 0.191 18 / 37 / 0.112
    x8 18 / 37 / 0.130 14 / 29 / 0.107 31 / 64 / 0.210 18 / 37 / 0.118


To comprehensively compare all algorithms, we use the performance profiles proposed by Dolan and Moré [26]. The profiles for the number of iterations are shown in Figure 1, those for the number of function evaluations in Figure 2, and those for the CPU time in Figure 3. It is readily observable that Algorithm 2.1 outperforms the algorithms of Zhang, Gao, and Abubakar in terms of efficiency.

    Figure 1.  Performance profiles for the number of iterations.
    Figure 2.  Performance profiles for the number of function evaluations.
    Figure 3.  Performance profiles for the CPU time.
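For reference, a Dolan–Moré profile is computed as follows: for each problem $p$ and solver $s$, one forms the ratio $r_{p,s} = t_{p,s} / \min_s t_{p,s}$ of the solver's cost to the best cost on that problem, then plots $\rho_s(\tau)$, the fraction of problems with $r_{p,s} \le \tau$. A minimal sketch (ours, not the script used to generate the figures):

```matlab
% Minimal Dolan-More performance profile. T(p,s) is the cost (iterations,
% function evaluations, or CPU time) of solver s on problem p; failures = Inf.
function [tau, rho] = perf_profile(T)
    R   = T ./ min(T, [], 2);                 % ratios r_{p,s}
    tau = sort(unique(R(isfinite(R))))';      % breakpoints of the profile
    rho = zeros(numel(tau), size(T, 2));
    for j = 1:numel(tau)
        rho(j, :) = mean(R <= tau(j), 1);     % fraction solved within ratio tau
    end
    % plot(tau, rho): higher curves indicate more efficient and robust solvers.
end
```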

5. Application to sparse signal restoration

    In this section, we apply Algorithm 2.1 to sparse signal restoration problems and compare its performance with the methods of Zhang [19], Gao [23], and Abubakar [24]. We begin by recalling the compressed sensing model, which is used to recover sparse solutions or signals from underdetermined linear systems $Ax = b$, where $b \in \mathbb{R}^m$ is the observed data and $A \in \mathbb{R}^{m \times n}$ $(m \ll n)$ is a linear mapping. Specifically, sparse signal restoration can be achieved by solving the following $\ell_1$-norm regularized optimization problem:

$$\min_{x \in \mathbb{R}^n} \frac{1}{2}\|Ax - b\|_2^2 + \tau \|x\|_1, \tag{5.1}$$

where $\tau > 0$ is a constant that balances data fidelity and sparsity. The vector $x \in \mathbb{R}^n$ can be decomposed as $x = u - v$ with $u \ge 0$, $v \ge 0$, where $u, v \in \mathbb{R}^n$ and $u_i = \max\{x_i, 0\}$, $v_i = \max\{-x_i, 0\}$ for $i = 1, 2, \dots, n$. Therefore, we can replace problem (5.1) with the following programming formulation:

$$\min_{u, v \ge 0} \frac{1}{2}\|A(u - v) - b\|_2^2 + \tau e_n^T u + \tau e_n^T v, \tag{5.2}$$

in which $e_n = (1, 1, \dots, 1)^T \in \mathbb{R}^n$.

    Further, we obtain its standard form

$$\min_{z \ge 0} \frac{1}{2}z^T H z + c^T z, \tag{5.3}$$

    in which

$$z = \begin{bmatrix} u \\ v \end{bmatrix}, \qquad c = \begin{bmatrix} \tau e_n - A^T b \\ \tau e_n + A^T b \end{bmatrix}, \qquad H = \begin{bmatrix} A^T A & -A^T A \\ -A^T A & A^T A \end{bmatrix}. \tag{5.4}$$

Evidently, the matrix $H$ is positive semidefinite, so (5.3) is a convex quadratic program. By the first-order optimality conditions for (5.3), $z$ is a minimizer if and only if it satisfies the following equations:

$$F(z) = \min\{z, Hz + c\} = 0, \quad z \ge 0, \tag{5.5}$$

in which the minimum is taken componentwise, and $F(z)$ is monotone and continuous. Hence, our proposed method can be applied to solve (5.5).
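Since $H$ is a dense $2n \times 2n$ matrix, forming it explicitly is wasteful; $F(z)$ in (5.5) can instead be evaluated with one product by $A$ and one by $A^T$. A matrix-free sketch (our own construction, not the author's code):

```matlab
% Matrix-free evaluation of F(z) = min(z, H*z + c) from (5.4)-(5.5).
% A: m-by-n sensing matrix, b: observations, tau: regularization weight.
function Fz = F_cs(z, A, b, tau)
    n   = size(A, 2);
    u   = z(1:n); v = z(n+1:end);                     % z = [u; v]
    Atb = A' * b;
    c   = [tau*ones(n,1) - Atb; tau*ones(n,1) + Atb]; % c from (5.4)
    w   = A' * (A * (u - v));                         % A'A(u-v), so H*z = [w; -w]
    Fz  = min(z, [w; -w] + c);                        % componentwise minimum
end
```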

In these experiments, the primary objective is to reconstruct a one-dimensional sparse signal of length $n$ from $m$ $(m \ll n)$ observations. The quality of the signal restoration is evaluated by the mean squared error (MSE), defined as

$$\mathrm{MSE} = \frac{1}{n}\|\tilde{x} - x^*\|^2, \tag{5.6}$$

in which $\tilde{x}$ represents the recovered signal and $x^*$ the original signal. Let $f(x) = \frac{1}{2}\|Ax - b\|_2^2 + \tau\|x\|_1$ serve as the merit function. The iteration starts from the initial point $x_0 = A^T b$ and terminates when

$$\frac{|f(x_k) - f(x_{k-1})|}{|f(x_{k-1})|} < 10^{-5}. \tag{5.7}$$

In our experiments, $r$ denotes the number of nonzero entries of $x^*$. For specified values of $m$, $n$, and $r$, we generate the test data in MATLAB as follows:

```matlab
xs = zeros(n,1); q = randperm(n); xs(q(1:r)) = randn(r,1);  % sparse signal x*
A  = orth(randn(m,n)')';         % sensing matrix with orthonormal rows
b  = A*xs + 0.001*randn(m,1);    % noisy observations
x0 = A'*b;                       % initial point
```
(5.8)
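Combining this data generation with the matrix-free operator and the solver sketched earlier, one run of the experiment might look as follows. The choice $\tau = 0.01\|A^T b\|_\infty$ is a common default in this literature and is our assumption here, as is the stopping rule inherited from the solver sketch (the paper stops on the merit-function test (5.7) instead).

```matlab
m = 1024; n = 4096; r = 128;             % illustrative sizes, not the paper's
xs = zeros(n,1); q = randperm(n); xs(q(1:r)) = randn(r,1);
A  = orth(randn(m,n)')';                 % rows form an orthonormal basis
b  = A*xs + 0.001*randn(m,1);
tau = 0.01 * norm(A'*b, inf);            % assumed regularization weight
F   = @(z) F_cs(z, A, b, tau);           % operator from the previous sketch
z0  = max([A'*b; -A'*b], 0);             % split of the start x0 = A'*b into (u,v)
[z, iters] = algorithm21(F, z0, 2000);
xrec = z(1:n) - z(n+1:end);              % recover x = u - v
mse  = norm(xrec - xs)^2 / n;            % MSE as in (5.6)
```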

The numerical outcomes presented in Figure 4 contain the original sparse signal, the measurement data, and the signals reconstructed by the four algorithms. Clearly, all four algorithms reconstruct the original signal from the measurements almost perfectly; however, Algorithm 2.1 outperforms the other algorithms in terms of MSE, number of iterations, and CPU time. Figure 5 presents four graphs that visually assess the performance of the four algorithms, depicting how the merit function value and the MSE evolve with the iterations and with the CPU time. Table 4 reports the numerical outcomes of the experiment on ten different noise samples. Evidently, in the majority of cases Algorithm 2.1 outperforms the other algorithms: it achieves a lower MSE, requires fewer iterations, and consumes less CPU time.

    Figure 4.  From top to bottom: the original signal, the measurement, and the reconstructed signals by four algorithms.
    Figure 5.  The upper graphs show MSE against iterations and CPU time, and the lower graphs show objective function value against iterations and CPU time.
    Table 4.  The results of four algorithms in ten experiments on the signal recovery problem.
    Zhang Gao Abubakar Algorithm 2.1
    Niter/Time/MSE Niter/Time/MSE Niter/Time/MSE Niter/Time/MSE
    87 / 4.47 / 2.07E-05 116 / 6.20 / 2.39E-05 85 / 4.80 / 1.26E-05 70 / 3.73 / 1.93E-05
    144 / 7.28 / 1.21E-05 150 / 7.81 / 4.06E-05 103 / 5.75 / 1.41E-05 89 / 4.78 / 1.20E-05
    120 / 6.18 / 1.26E-05 132 / 6.92 / 5.45E-05 112 / 6.21 / 1.61E-05 66 / 3.51 / 1.34E-05
    135 / 6.85 / 1.77E-05 124 / 6.47 / 9.44E-05 139 / 7.63 / 1.88E-05 95 / 4.99 / 1.05E-05
    146 / 7.40 / 1.72E-05 136 / 7.14 / 7.16E-05 115 / 6.32 / 2.32E-05 91 / 4.74 / 1.62E-05
    115 / 5.90 / 1.17E-05 127 / 6.62 / 4.50E-05 89 / 4.94 / 1.66E-05 70 / 3.66 / 1.75E-05
    153 / 7.77 / 1.25E-05 161 / 8.31 / 3.99E-05 157 / 8.51 / 1.49E-05 105 / 5.44 / 1.16E-05
    104 / 5.38 / 1.21E-05 113 / 6.10 / 4.62E-05 103 / 5.73 / 1.30E-05 82 / 4.42 / 1.15E-05
    132 / 6.71 / 1.32E-05 132 / 7.01 / 4.64E-05 95 / 5.34 / 1.34E-05 85 / 4.48 / 1.30E-05
    164 / 8.26 / 1.69E-05 160 / 8.22 / 7.25E-05 113 / 6.23 / 2.17E-05 108 / 5.58 / 1.74E-05
    Average 130 / 6.62 / 1.47E-05 135 / 7.08 / 5.35E-05 111 / 6.15 / 1.64E-05 86 / 4.53 / 1.42E-05


6. Conclusions

    This paper presents a derivative-free conjugate gradient method for solving large-scale nonlinear systems of monotone equations subject to convex constraints. The proposed approach is a novel extension of the RMIL conjugate gradient method. It incorporates projection techniques, which ensure that the iterates remain within the feasible region defined by the convex constraints; this maintains the validity of the solution process and improves the efficiency of the algorithm by avoiding unnecessary computations outside the constraint set. Additionally, the method features a line search strategy that requires no derivatives. Under certain reasonable assumptions, we prove the global convergence of the method. To demonstrate the practical performance of the proposed method, we carry out extensive numerical experiments, whose results show that the method is promising: it outperforms several existing methods in computational efficiency and accuracy when solving large-scale nonlinear systems of monotone equations. Furthermore, our numerical experiments on sparse signal restoration problems further validate the effectiveness and practicality of the method.

    This research was supported by the Zhejiang Provincial Natural Science Foundation of China under Grant No. LY23A010007 and the Huzhou Natural Science Foundation under Grant No. 2023YZ29.

The author declares no competing interests.

    The author declares they have not used Artificial Intelligence (AI) tools in the creation of this article.



    [1] S. Prajna, P. A. Parrilo, A. Rantzer, Nonlinear control synthesis by convex optimization, IEEE Trans. Autom. Control., 49 (2004), 310–314. https://doi.org/10.1109/TAC.2003.823000 doi: 10.1109/TAC.2003.823000
    [2] Y. P. Hu, Y. J. Wang, An efficient projected gradient method for convex constrained monotone equations with applications in compressive sensing, J. Appl. Math. Phys., 8 (2020), 983–998. https://doi.org/10.4236/jamp.2020.86077 doi: 10.4236/jamp.2020.86077
    [3] J. K. Liu, L. Du, A gradient projection method for the sparse signal reconstruction in compressive sensing, Appl. Anal., 97 (2018), 2122–2131. https://doi.org/10.1080/00036811.2017.1359556 doi: 10.1080/00036811.2017.1359556
    [4] Y. H. Xiao, Q. Y. Wang, Q. J. Hu, Non-smooth equations based method for l1-norm problems with applications to compressive sensing, Nonlinear Anal. Theory Methods Appl., 74 (2011), 3570–3577. https://doi.org/10.1016/j.na.2011.02.040 doi: 10.1016/j.na.2011.02.040
    [5] B. Ghaddar, J. Marecek, M. Mevissen, Optimal power flow as a polynomial optimization problem, IEEE Trans. Power Syst., 31 (2016), 539–546. https://doi.org/10.1109/tpwrs.2015.2390037 doi: 10.1109/tpwrs.2015.2390037
[6] G. Zhou, K. C. Toh, Superlinear convergence of a Newton-type algorithm for monotone equations, J. Optim. Theory Appl., 125 (2005), 205–221. https://doi.org/10.1007/s10957-004-1721-7 doi: 10.1007/s10957-004-1721-7
    [7] X. W. Fang, Q. Ni, M. L. Zeng, A modified quasi-Newton method for nonlinear equations, J. Comput. Appl. Math., 328 (2018), 44–58. https://doi.org/10.1016/j.cam.2017.06.024 doi: 10.1016/j.cam.2017.06.024
    [8] D. H. Li, M. Fukushima, A globally and superlinearly convergent Gauss-Newton based BFGS method for symmetric equations, SIAM. J. Numer. Anal., 37 (1999), 152–172. https://doi.org/10.1137/s0036142998335704 doi: 10.1137/s0036142998335704
    [9] G. L. Yuan, Z. X. Wei, X. W. Lu, A BFGS trust-region method for nonlinear equations, Computing, 92 (2011), 317–333. https://doi.org/10.1007/s00607-011-0146-z doi: 10.1007/s00607-011-0146-z
    [10] M. V. Solodov, B. F. Svaiter, A globally convergent inexact Newton method for systems of monotone equations, In: M. Fukushima, L. Qi, Reformulation: nonsmooth, piecewise smooth, semismooth and smoothing methods, Springer, 22 (1998), 355–369. https://doi.org/10.1007/978-1-4757-6388-1_18
    [11] Q. Li, D. H. Li, A class of derivative-free methods for large-scale nonlinear monotone equations, IMA J. Numer. Anal., 31 (2011), 1625–1635. https://doi.org/10.1093/imanum/drq015 doi: 10.1093/imanum/drq015
    [12] W. Y. Cheng, A PRP type method for systems of monotone equations, Math. Comput. Model., 50 (2009), 15–20. https://doi.org/10.1016/j.mcm.2009.04.007 doi: 10.1016/j.mcm.2009.04.007
[13] A. M. Awwal, P. Kumam, K. Sitthithakerngkiet, A. M. Bakoji, A. S. Halilu, I. M. Sulaiman, et al., Derivative-free method based on DFP updating formula for solving convex constrained nonlinear monotone equations and application, AIMS Math., 6 (2021), 8792–8814. https://doi.org/10.3934/math.2021510 doi: 10.3934/math.2021510
    [14] M. Ahookhosh, K. Amini, S. Bahrami, Two derivative-free projection approaches for systems of large-scale nonlinear monotone equations, Numer. Algorithms, 64 (2013), 21–42. https://doi.org/10.1007/s11075-012-9653-z doi: 10.1007/s11075-012-9653-z
    [15] M. Rivaie, M. Mamat, L. W. June, I. Mohd, A new class of nonlinear conjugate gradient coefficients with global convergence properties, App. Math. Comput., 218 (2012), 11323–11332. https://doi.org/10.1016/j.amc.2012.05.030 doi: 10.1016/j.amc.2012.05.030
    [16] X. W. Fang, Q. Ni, A new derivative-free conjugate gradient method for large-scale nonlinear systems of equations, B. Aust. Math. Soc., 95 (2017), 500–511. https://doi.org/10.1017/s0004972717000168 doi: 10.1017/s0004972717000168
    [17] X. W. Fang, A class of new derivative-free gradient type methods for large-scale nonlinear systems of monotone equations, J. Inequal. Appl., 2020 (2020), 1–13. https://doi.org/10.1186/s13660-020-02361-5 doi: 10.1186/s13660-020-02361-5
    [18] P. Liu, H. Shao, Y. Wang, X. Wu, A three-term CGPM-based algorithm without Lipschitz continuity for constrained nonlinear monotone equations with applications, Appl. Numer. Math., 175 (2022), 98–107. https://doi.org/10.1016/j.apnum.2022.02.001 doi: 10.1016/j.apnum.2022.02.001
    [19] N. Zhang, J. K. Liu, B. Tang, A three-term projection method based on spectral secant equation for nonlinear monotone equations, Jpn. J. Ind. Appl. Math., 41 (2024), 617–635. https://doi.org/10.1007/s13160-023-00624-4 doi: 10.1007/s13160-023-00624-4
    [20] M. Abdullahi, A. B. Abubakar, S. B. Salihu, Global convergence via modified self-adaptive approach for solving constrained monotone nonlinear equations with application to signal recovery problems, Rairo-oper. Res., 57 (2023), 2561–2584. https://doi.org/10.1051/ro/2023099 doi: 10.1051/ro/2023099
[21] M. Abdullahi, A. B. Abubakar, K. Muangchoo, Modified three-term derivative-free projection method for solving nonlinear monotone equations with application, Numer. Algorithms, 95 (2024), 1459–1474. https://doi.org/10.1007/s11075-023-01616-8 doi: 10.1007/s11075-023-01616-8
[22] S. Aji, A. M. Awwal, A. B. Muhammadu, C. Khunpanuk, N. Pakkaranang, B. Panyanak, A new spectral method with inertial technique for solving system of nonlinear monotone equations and applications, AIMS Math., 8 (2023), 4442–4466. https://doi.org/10.3934/math.2023221 doi: 10.3934/math.2023221
    [23] P. Gao, T. Wang, X. Liu, Y. Wu, An efficient three-term conjugate gradient-based algorithm involving spectral quotient for solving convex constrained monotone nonlinear equations with applications, Comput. Appl. Math., 41 (2022), 89. https://doi.org/10.1007/s40314-022-01796-4 doi: 10.1007/s40314-022-01796-4
    [24] A. B. Abubakar, P. Kumam, H. Mohammad, A. H. Ibrahim, PRP-like algorithm for monotone operator equations, Japan J. Indust. Appl. Math., 38 (2021), 805–822. https://doi.org/10.1007/s13160-021-00462-2 doi: 10.1007/s13160-021-00462-2
    [25] J. Sabiu, S. Sirisubtawee, An inertial Dai-Liao conjugate method for convex constrained monotone equations that avoids the direction of maximum magnification, J. Appl. Math. Comput., 70 (2024), 4319–4351. https://doi.org/10.1007/s12190-024-02123-2 doi: 10.1007/s12190-024-02123-2
    [26] E. D. Dolan, J. J. Moré, Benchmarking optimization software with performance profiles, Math. Program., 91 (2001), 201–213. https://doi.org/10.1007/s101070100263 doi: 10.1007/s101070100263
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)