Research article

An efficient gradient-free projection algorithm for constrained nonlinear equations and image restoration

  • Received: 14 July 2020 Accepted: 06 September 2020 Published: 10 October 2020
  • MSC : 65K05, 65L09, 90C30

• Motivated by the projection technique, in this paper we introduce a new method for approximating solutions of nonlinear equations with convex constraints. Under the assumptions that the associated mapping is Lipschitz continuous and satisfies a condition weaker than monotonicity, we establish global convergence of the sequence generated by the proposed algorithm. Applications and numerical examples are presented to illustrate the performance of the proposed method.

    Citation: Abdulkarim Hassan Ibrahim, Poom Kumam, Auwal Bala Abubakar, Umar Batsari Yusuf, Seifu Endris Yimer, Kazeem Olalekan Aremu. An efficient gradient-free projection algorithm for constrained nonlinear equations and image restoration[J]. AIMS Mathematics, 2021, 6(1): 235-260. doi: 10.3934/math.2021016



Recently, using Solodov and Svaiter's projection technique [1], several conjugate gradient methods for solving large-scale unconstrained optimization problems have been extended to solve nonlinear equations with convex constraints (see [2,3,4,5,6,7,8,9] and the references therein). Due to its simplicity, low storage requirements and wide applicability, this class of methods has attracted the interest of various research communities [10,11,12,13,14]. As is well known, the Fletcher-Reeves (FR) [15], Conjugate Descent (CD) [16] and Dai-Yuan (DY) [17] conjugate gradient methods have strong convergence properties but, due to jamming, often perform poorly in practice. By contrast, the Hestenes-Stiefel (HS) [18], Polak-Ribière-Polyak (PRP) [19,20] and Liu-Storey (LS) [21] conjugate gradient methods do not necessarily converge, yet they often work better than FR, CD and DY. In [22], in order to combine the numerical efficiency of the LS method with the strong convergence of the FR method, Djordjević proposed a hybrid LS-FR conjugate gradient method for solving the unconstrained optimization problem. In her work, the conjugate gradient parameter is computed as a convex combination of the LS and FR parameters, and the hybridization parameter of the convex combination is chosen so that the resulting direction satisfies the Newton direction condition and, at the same time, the well-known Dai-Liao conjugacy condition.

In an attempt to extend the LS-FR method of Djordjević to monotone nonlinear equations with convex constraints, Ibrahim et al. [23] proposed a derivative-free hybrid LS-FR conjugate gradient method whose parameter is computed as a convex combination of derivative-free LS and FR parameters, with the hybridization parameter chosen to satisfy the well-known conjugacy condition. Numerical results show that the method is efficient for solving nonlinear monotone equations with convex constraints. It is noteworthy, however, that several conditions had to be imposed on the hybridization parameter in [23] in order for it to take values within the interval (0,1).

Our motivation is the following: can we extend the LS-FR method proposed by Djordjević to construct an efficient hybrid gradient-free projection algorithm in which no extra condition is imposed on the hybridization parameter and the parameter always takes values in the interval (0,1]? In this paper, we give a positive answer to this question. The remainder of the paper is organized as follows. In Section 2, we describe the algorithm and some of its properties. In Section 3, we analyze the global convergence of the method. Numerical examples and an application are presented in Sections 4 and 5, respectively.

    Consider the following unconstrained optimization problem

$\min\ g(z), \quad z \in \mathbb{R}^n,$ (2.1)

where $g: \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function whose gradient at $z_k$ is denoted by $f(z_k) := \nabla g(z_k)$. Given any starting point $z_0 \in \mathbb{R}^n$, the algorithm in [22] generates a sequence of approximations $\{z_k\}$ to a minimizer $z^*$ of $g$, in which

$z_{k+1} = z_k + t_k j_k, \quad k \ge 0,$ (2.2)

where $t_k > 0$ is the steplength, computed by a certain line search, and $j_k$ is the search direction defined by

$j_k = \begin{cases} -f(z_k) + \beta_k j_{k-1}, & k > 0, \\ -f(z_k), & k = 0, \end{cases}$ (2.3)

with $\beta_k$ defined by

$\beta_k = (1-\theta_k)\,\dfrac{f(z_k)^T y_{k-1}}{-f(z_{k-1})^T j_{k-1}} + \theta_k\,\dfrac{\|f(z_k)\|^2}{\|f(z_{k-1})\|^2}, \qquad y_{k-1} = f(z_k) - f(z_{k-1}),$ (2.4)

where $\theta_k$ is a hybridization parameter chosen so that $j_k$ satisfies the Dai-Liao conjugacy condition, that is, for $t > 0$,

$j_k^T y_{k-1} = -t\, s_{k-1}^T f(z_k),$

where $s_{k-1} = z_k - z_{k-1}$.

Motivated by (2.3) and (2.4), we propose a gradient-free projection algorithm for solving the following nonlinear equation with convex constraints:

$\rho(z) = 0, \quad z \in \Omega,$ (2.5)

where $\Omega \subseteq \mathbb{R}^n$ is a nonempty closed convex set and $\rho: \mathbb{R}^n \to \mathbb{R}^n$ is a continuous mapping. Our proposed gradient-free projection iterative method first generates a trial point, say $c_k$, using the relation

$c_k = z_k + t_k j_k, \quad t_k > 0,$ (2.6)

where the search direction $j_k$ is computed by

$j_k = \begin{cases} -\rho(z_k), & k = 0, \\ -\pi_k\, \rho(z_k) + \beta_k\, w_{k-1}, & k > 0, \end{cases}$ (2.7)

with $\beta_k$ computed as

$\beta_k := (1-\theta_k)\,\dfrac{\rho(z_k)^T y_{k-1}}{-\rho(z_{k-1})^T j_{k-1}} + \theta_k\,\dfrac{\|\rho(z_k)\|^2}{\|\rho(z_{k-1})\|^2}, \qquad \theta_k := \dfrac{\|y_{k-1}\|^2}{y_{k-1}^T \bar{w}_{k-1}},$

$\bar{w}_{k-1} := w_{k-1} + \Big(\max\Big\{0,\ -\dfrac{w_{k-1}^T y_{k-1}}{\|y_{k-1}\|^2}\Big\} + 1\Big) y_{k-1}, \qquad y_{k-1} := \rho(z_k) - \rho(z_{k-1}), \qquad w_{k-1} := c_{k-1} - z_{k-1},$

and $\pi_k$ is obtained to satisfy the descent condition, that is, for $\alpha > 0$,

$j_k^T \rho(z_k) \le -\alpha \|\rho(z_k)\|^2.$ (2.8)

For $k = 0$, (2.8) obviously holds. For $k \in \mathbb{N}$, we have

$\rho(z_k)^T j_k = -\Big(\pi_k - \beta_k\,\dfrac{\rho(z_k)^T w_{k-1}}{\|\rho(z_k)\|^2}\Big)\|\rho(z_k)\|^2.$ (2.9)

To satisfy (2.8), we only need

$\pi_k \ge l + \beta_k\,\dfrac{\rho(z_k)^T w_{k-1}}{\|\rho(z_k)\|^2}, \qquad l > 0,$ (2.10)

in which case (2.8) holds with $\alpha = l$. In this paper, we choose $\pi_k$ as

$\pi_k = l + \beta_k\,\dfrac{\rho(z_k)^T w_{k-1}}{\|\rho(z_k)\|^2}.$ (2.11)

It is important to note that, by the definition of $\bar{w}_{k-1}$,

$y_{k-1}^T \bar{w}_{k-1} = \max\{y_{k-1}^T w_{k-1} + \|y_{k-1}\|^2,\ \|y_{k-1}\|^2\} \ge \|y_{k-1}\|^2 > 0.$

Thus,

$\theta_k = \dfrac{\|y_{k-1}\|^2}{y_{k-1}^T \bar{w}_{k-1}} \in (0,1], \quad \forall k.$

The definition of $\bar{w}_{k-1}$ follows the ideas of Li and Fukushima [24,25]. The definition of $\theta_k$ was originally proposed by Birgin and Martínez [26], and a similar idea can be found in [27,28] and other optimization literature. The proposed algorithm is described immediately after recalling the definition of the projection operator.

Definition 2.1. Let $\Omega \subseteq \mathbb{R}^n$ be a nonempty closed convex set. Then for any $x \in \mathbb{R}^n$, its projection onto $\Omega$, denoted by $P_\Omega[x]$, is defined by

$P_\Omega[x] := \arg\min\{\|x - y\| : y \in \Omega\}.$

The projection operator $P_\Omega$ has a well-known nonexpansiveness property, that is,

$\|P_\Omega(x) - P_\Omega(y)\| \le \|x - y\|, \quad \forall x, y \in \mathbb{R}^n.$ (2.12)
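For the feasible sets used later in the experiments, the projection has a simple closed form. As a minimal illustration (a hypothetical NumPy helper, not part of the paper's code), the projection onto the nonnegative orthant $\Omega = \mathbb{R}^n_+$ is a componentwise clamp:

```python
import numpy as np

def project_nonneg(x):
    """P_Omega[x] for Omega = R^n_+ : componentwise maximum with zero."""
    return np.maximum(x, 0.0)
```

Projections onto boxes and balls admit similar closed forms; a general closed convex set may require solving an inner optimization problem.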

Algorithm 1:
Input. Choose an initial point $z_0 \in \Omega$ and initialize the parameters $\tau \in (0,1)$, $\eta \in (0,2)$, $\mathrm{Tol} > 0$, $\kappa > 0$, $l > 0$. Set $k = 0$.
Step 0. Compute $\rho(z_k)$. If $\|\rho(z_k)\| \le \mathrm{Tol}$, stop. Otherwise, compute $j_k$ by (2.7).
Step 1. Determine the steplength $t_k = \max\{\tau^m \mid m = 0, 1, 2, \ldots\}$ such that
$-\rho(z_k + \tau^m j_k)^T j_k \ge \kappa\, \tau^m \|j_k\|^2.$    (2.13)
Step 2. Compute the trial point $c_k = z_k + t_k j_k$.
Step 3. If $c_k \in \Omega$ and $\|\rho(c_k)\| \le \mathrm{Tol}$, stop. Otherwise, compute
$z_{k+1} = P_\Omega[z_k - \eta \mu_k \rho(c_k)],$  (2.14)
where
$\mu_k = \dfrac{\rho(c_k)^T (z_k - c_k)}{\|\rho(c_k)\|^2}.$
Step 4. Set $k := k + 1$ and go to Step 0.
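To make the steps concrete, the following is a minimal NumPy sketch of Algorithm 1 under the formulas (2.7)-(2.14) above; the names (`algorithm1`, `rho`, `proj`) are ours, and practical safeguards (a cap on the backtracking loop, guarding $y_{k-1} = 0$, the feasibility test $c_k \in \Omega$ in Step 3) are omitted for brevity:

```python
import numpy as np

def algorithm1(rho, proj, z0, tau=0.9, eta=1.2, kappa=1e-4, l=1.0,
               tol=1e-6, max_iter=1000):
    """Minimal sketch of Algorithm 1; rho is the mapping in (2.5) and
    proj is the projection onto Omega (e.g., project_nonneg above)."""
    z = np.asarray(z0, dtype=float).copy()
    r = rho(z)
    j = -r                                   # (2.7) for k = 0
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:         # Step 0 stopping test
            return z
        t = 1.0                              # Step 1: line search (2.13)
        while -rho(z + t * j) @ j < kappa * t * (j @ j):
            t *= tau
        c = z + t * j                        # Step 2: trial point
        rc = rho(c)
        if np.linalg.norm(rc) <= tol:        # Step 3 (membership of c in
            return c                         # Omega not checked here)
        mu = rc @ (z - c) / (rc @ rc)
        z_new = proj(z - eta * mu * rc)      # projection step (2.14)
        r_new = rho(z_new)
        # Direction (2.7) with beta_k, theta_k, pi_k as defined above;
        # assumes y != 0 (no safeguard in this sketch).
        y = r_new - r
        w = c - z                            # w_{k-1} = c_{k-1} - z_{k-1}
        w_bar = w + (max(0.0, -(w @ y) / (y @ y)) + 1.0) * y
        theta = (y @ y) / (y @ w_bar)
        beta = ((1.0 - theta) * (r_new @ y) / (-(r @ j))
                + theta * (r_new @ r_new) / (r @ r))
        pi = l + beta * (r_new @ w) / (r_new @ r_new)   # (2.11)
        j = -pi * r_new + beta * w
        z, r = z_new, r_new
    return z
```

For example, Problem 4 of Section 4 corresponds to `rho = lambda z: np.exp(z) - 1.0` with `proj = project_nonneg`.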

In what follows, we assume that $\rho$ satisfies the following assumptions.

Assumption 1. The solution set of problem (2.5), denoted $\bar{\Omega}$, is nonempty.

Assumption 2. The mapping $\rho$ is Lipschitz continuous on $\mathbb{R}^n$; that is, there exists $L > 0$ such that

$\|\rho(x) - \rho(y)\| \le L\|x - y\|, \quad \forall x, y \in \mathbb{R}^n.$

Assumption 3. For any $y \in \bar{\Omega}$ and $x \in \mathbb{R}^n$, it holds that

$\rho(x)^T (x - y) \ge 0.$ (3.1)

Lemma 3.1. Suppose that Assumption 1 holds. Then there exists a step-size $t_k$ satisfying the line search (2.13) for all $k \ge 0$.

Proof. Assume there exists $k_0 \ge 0$ such that (2.13) fails to hold for every $i \ge 0$, that is,

$-\rho(z_{k_0} + \tau^i j_{k_0})^T j_{k_0} < \kappa\, \tau^i \|j_{k_0}\|^2, \quad \forall i \ge 0.$

Applying the continuity of $\rho$ and letting $i \to \infty$ yields

$-\rho(z_{k_0})^T j_{k_0} \le 0,$

which contradicts (2.8). Hence the lemma is proved.

Lemma 3.2. Suppose Assumptions 1-3 are satisfied and the sequences $\{z_k\}$, $\{c_k\}$, $\{t_k\}$, $\{j_k\}$ are generated by Algorithm 1. Then

$t_k \ge \min\Big\{1,\ \dfrac{\tau}{(L+\kappa)}\,\dfrac{\|\rho(z_k)\|^2}{\|j_k\|^2}\Big\}.$

Proof. Note from (2.13) that if $t_k \ne 1$, then $\bar{t}_k = \tau^{-1} t_k$ does not satisfy (2.13), that is,

$-\rho(z_k + \tau^{-1} t_k j_k)^T j_k < \kappa\, \tau^{-1} t_k \|j_k\|^2.$ (3.2)

Combining the above inequality with the descent condition (2.8), we have

$\|\rho(z_k)\|^2 \le -\rho(z_k)^T j_k = \big(\rho(z_k + \tau^{-1} t_k j_k) - \rho(z_k)\big)^T j_k - \rho(z_k + \tau^{-1} t_k j_k)^T j_k \le \tau^{-1} t_k L \|j_k\|^2 + \tau^{-1} t_k \kappa \|j_k\|^2 = \tau^{-1} t_k (L+\kappa) \|j_k\|^2.$ (3.3)

Since $\rho$ satisfies Assumption 2, (3.3) holds. Thus, from (3.3),

$t_k \ge \min\Big\{1,\ \dfrac{\tau}{(L+\kappa)}\,\dfrac{\|\rho(z_k)\|^2}{\|j_k\|^2}\Big\}.$ (3.4)

This proves Lemma 3.2.

Lemma 3.3. Suppose that Assumptions 1-3 hold and let $\{z_k\}$ and $\{c_k\}$ be the sequences generated by Algorithm 1. Then $\rho(c_k)$ is an ascent direction of the function $\frac{1}{2}\|z - \bar{z}\|^2$ at the point $z_k$, where $\bar{z} \in \bar{\Omega}$.

Proof. At $z_k$, the function $\frac{1}{2}\|x - \bar{z}\|^2$ has gradient $z_k - \bar{z}$. By the weak-monotonicity property (3.1), it can be seen that

$\rho(c_k)^T (z_k - \bar{z}) = \rho(c_k)^T (z_k - c_k + c_k - \bar{z}) = \rho(c_k)^T (c_k - \bar{z}) + \rho(c_k)^T (z_k - c_k) \ge \rho(c_k)^T (z_k - c_k) \ge \kappa\, t_k^2 \|j_k\|^2 = \kappa \|z_k - c_k\|^2 > 0,$ (3.5)

where the last two relations follow from the line search (2.13) and $z_k - c_k = -t_k j_k$. Inequality (3.5) shows that $\rho(c_k)$ is an ascent direction of the function $\frac{1}{2}\|z - \bar{z}\|^2$ at the iteration point $z_k$.

Lemma 3.4. Let Assumptions 1-3 hold and the sequence $\{z_k\}$ be generated by Algorithm 1. Suppose that $\bar{z}$ is a solution of problem (2.5), so that $\rho(\bar{z}) = 0$. Then there exists $\delta > 0$ such that

$\|\rho(z_k)\| \le \delta.$ (3.6)

Proof. Using the well-known property of $P_\Omega$, we can deduce that for any $\bar{z} \in \bar{\Omega}$,

$\|z_{k+1} - \bar{z}\|^2 = \|P_\Omega[z_k - \eta \mu_k \rho(c_k)] - \bar{z}\|^2 \le \|z_k - \eta \mu_k \rho(c_k) - \bar{z}\|^2$
$= \|z_k - \bar{z}\|^2 - 2\eta \mu_k \rho(c_k)^T (z_k - \bar{z}) + \eta^2 \mu_k^2 \|\rho(c_k)\|^2$
$\le \|z_k - \bar{z}\|^2 - 2\eta\,\dfrac{\rho(c_k)^T (z_k - c_k)}{\|\rho(c_k)\|^2}\,\rho(c_k)^T (z_k - c_k) + \eta^2 \Big(\dfrac{\rho(c_k)^T (z_k - c_k)}{\|\rho(c_k)\|}\Big)^2$
$= \|z_k - \bar{z}\|^2 - \eta(2-\eta)\Big(\dfrac{\rho(c_k)^T (z_k - c_k)}{\|\rho(c_k)\|}\Big)^2$ (3.7)
$\le \|z_k - \bar{z}\|^2.$ (3.8)

From inequality (3.8) we see that $\{\|z_k - \bar{z}\|\}$ is a decreasing sequence, and hence $\{z_k\}$ is bounded. That is,

$\|z_k\| \le a_0, \quad a_0 > 0.$ (3.9)

Furthermore, we obtain

$\|z_{k+1} - \bar{z}\| \le \|z_k - \bar{z}\| \le \|z_{k-1} - \bar{z}\| \le \cdots \le \|z_0 - \bar{z}\|.$ (3.10)

Using the Lipschitz continuity of $\rho$, we have

$\|\rho(z_k)\| = \|\rho(z_k) - \rho(\bar{z})\| \le L\|z_k - \bar{z}\| \le L\|z_0 - \bar{z}\|.$ (3.11)

Setting $\delta = L\|z_0 - \bar{z}\|$ proves Lemma 3.4.

Lemma 3.5. Suppose Assumptions 1-3 hold and the sequences $\{z_k\}$ and $\{c_k\}$ are generated by Algorithm 1. Then

(a) $\{c_k\}$ is bounded;

(b) $\lim_{k\to\infty} \|z_k - c_k\| = 0$;

(c) $\lim_{k\to\infty} \|z_k - z_{k+1}\| = 0$.

Proof. (a) From (3.10), we know that the sequence $\{z_k\}$ is bounded. By (3.5), we have

$\rho(c_k)^T (z_k - c_k) \ge \kappa \|z_k - c_k\|^2.$ (3.12)

By (3.1) and (3.6), we have

$\rho(c_k)^T (z_k - c_k) = \big(\rho(c_k) - \rho(z_k)\big)^T (z_k - c_k) + \rho(z_k)^T (z_k - c_k) \le \|\rho(z_k)\|\,\|z_k - c_k\| \le \delta \|z_k - c_k\|.$

Combined with (3.12), it is easy to deduce that

$\|z_k - c_k\| \le \dfrac{\delta}{\kappa}.$

Then we obtain

$\|c_k\| \le \dfrac{\delta}{\kappa} + \|z_k\|.$

Thus $\{c_k\}$ is bounded, owing to the boundedness of $\{z_k\}$.

(b) From inequality (3.7), we get

$\|z_{k+1} - \bar{z}\|^2 \le \|z_k - \bar{z}\|^2 - \eta(2-\eta)\,\dfrac{[\rho(c_k)^T (z_k - c_k)]^2}{\|\rho(c_k)\|^2} \le \|z_k - \bar{z}\|^2 - \eta(2-\eta)\,\kappa^2\,\dfrac{\|z_k - c_k\|^4}{\|\rho(c_k)\|^2},$

which means

$\eta(2-\eta)\,\dfrac{\|z_k - c_k\|^4}{\|\rho(c_k)\|^2} \le \dfrac{1}{\kappa^2}\big(\|z_k - \bar{z}\|^2 - \|z_{k+1} - \bar{z}\|^2\big).$

Since the mapping $\rho$ is continuous and $\{c_k\}$ is bounded, the sequence $\{\rho(c_k)\}$ is bounded. Therefore there exists $\delta_1 > 0$ such that $\|\rho(c_k)\| \le \delta_1$, and moreover

$\eta(2-\eta) \sum_{k=0}^{\infty} \|z_k - c_k\|^4 \le \dfrac{\delta_1^2}{\kappa^2} \sum_{k=0}^{\infty} \big(\|z_k - \bar{z}\|^2 - \|z_{k+1} - \bar{z}\|^2\big) \le \dfrac{\delta_1^2}{\kappa^2}\,\|z_0 - \bar{z}\|^2 < +\infty.$

Hence,

$\lim_{k\to\infty} t_k \|j_k\| = \lim_{k\to\infty} \|z_k - c_k\| = 0.$ (3.13)

(c) Using the nonexpansiveness of the projection operator (2.12) and $z_k \in \Omega$, we have

$\|z_k - z_{k+1}\| = \|z_k - P_\Omega[z_k - \eta \mu_k \rho(c_k)]\| \le \|z_k - (z_k - \eta \mu_k \rho(c_k))\| = \eta \mu_k \|\rho(c_k)\| \le \eta \|z_k - c_k\|,$

which together with (b) proves (c).

The global convergence result for Algorithm 1 is established via the following theorem.

Theorem 3.6. Suppose Assumptions 1-3 are satisfied and the sequence $\{z_k\}$ is generated by Algorithm 1. Then

$\liminf_{k\to\infty} \|\rho(z_k)\| = 0.$ (3.14)

Proof. Suppose (3.14) does not hold, meaning there exists a constant $\varepsilon_0 > 0$ such that

$\|\rho(z_k)\| \ge \varepsilon_0, \quad \forall k \ge 0.$ (3.15)

By (2.8), we know

$\|\rho(z_k)\|\,\|j_k\| \ge -\rho(z_k)^T j_k \ge \alpha \|\rho(z_k)\|^2,$

which implies

$\|j_k\| \ge \alpha \|\rho(z_k)\| \ge \alpha \varepsilon_0, \quad \forall k \ge 0.$ (3.16)

By (2.7), we have

$\|j_k\| = \|-\pi_k \rho(z_k) + \beta_k w_{k-1}\| = \Big\|-\Big(l + \beta_k\,\dfrac{\rho(z_k)^T w_{k-1}}{\|\rho(z_k)\|^2}\Big)\rho(z_k) + \beta_k w_{k-1}\Big\|$
$\le l\|\rho(z_k)\| + 2|\beta_k|\,\|w_{k-1}\|$
$\le l\|\rho(z_k)\| + 2\Big(\dfrac{\|\rho(z_k)\|\,\|y_{k-1}\|}{|\rho(z_{k-1})^T j_{k-1}|} + \dfrac{\|\rho(z_k)\|^2}{\|\rho(z_{k-1})\|^2}\Big)\|w_{k-1}\|$
$\le l\|\rho(z_k)\| + 2\Big(\dfrac{\|\rho(z_k)\|}{\alpha\|\rho(z_{k-1})\|^2}\,t_{k-1}\|j_{k-1}\| + \dfrac{\|\rho(z_k)\|^2}{\|\rho(z_{k-1})\|^2}\Big)\,t_{k-1}\|j_{k-1}\|$
$\le l\delta + \dfrac{2\delta}{\varepsilon_0^2}\,(t_{k-1}\|j_{k-1}\|)^2 + \dfrac{2\delta^2}{\varepsilon_0^2}\,t_{k-1}\|j_{k-1}\|$

for all $k \in \mathbb{N}$. Since (3.13) holds, it follows that for every $\varepsilon_1 > 0$ there exists $k_0$ such that $t_{k-1}\|j_{k-1}\| < \varepsilon_1$ for every $k > k_0$. Choosing $\varepsilon_1 = \varepsilon_0$ and $M = \max\{\|j_0\|, \|j_1\|, \ldots, \|j_{k_0}\|, M_1\}$, where $M_1 = \delta\,(l + 2 + 2\delta/\varepsilon_0)$, it holds that

$\|j_k\| \le M$ (3.17)

for every $k \in \mathbb{N}$. Combining (3.4), (3.15), (3.16) and (3.17), we know that for all $k$ sufficiently large,

$t_k \|j_k\| \ge \min\Big\{1,\ \dfrac{\tau}{(L+\kappa)}\,\dfrac{\|\rho(z_k)\|^2}{\|j_k\|^2}\Big\}\|j_k\| = \min\Big\{\|j_k\|,\ \dfrac{\tau}{(L+\kappa)}\,\dfrac{\|\rho(z_k)\|^2}{\|j_k\|}\Big\} \ge \min\Big\{\alpha\varepsilon_0,\ \dfrac{\tau \varepsilon_0^2}{(L+\kappa)M}\Big\} > 0.$

The last inequality contradicts (b) of Lemma 3.5, i.e., (3.13). Consequently, (3.14) holds. The proof is completed.

The Dolan and Moré performance profile [29] is used in this section to evaluate the efficiency of the proposed algorithm on a set of test problems with varying dimensions and initial points. Comparison is made with an algorithm of the same class proposed in [30]. All codes were written in the MATLAB environment and run on an HP laptop (CPU Core i3, 2.5 GHz; RAM 8 GB) with the Windows 10 operating system.

    Algo.1: The new method (Algorithm 1).

    Algo.2: MFRM method proposed in [30].
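As a reminder of how the profile of [29] is built: for solver $s$ and problem $p$ with cost $t_{p,s}$ (iterations, function evaluations or CPU time), the performance ratio is $r_{p,s} = t_{p,s}/\min_{s'} t_{p,s'}$, and the profile $\rho_s(\tau)$ is the fraction of problems on which $r_{p,s} \le \tau$. A small illustrative helper (our own sketch, not the authors' MATLAB code; failures are encoded as infinite cost):

```python
import numpy as np

def performance_profile(T, taus):
    """T[p, s]: cost of solver s on problem p (np.inf for failures).
    Returns rho[s, t] = fraction of problems with ratio r_{p,s} <= taus[t]."""
    ratios = T / T.min(axis=1, keepdims=True)          # r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])
```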

The parameters for Algo.1 are chosen as $\tau = 0.9$, $\kappa = 10^{-4}$, $\eta = 1.2$, while the parameters for Algo.2 are set as reported in [30]. All iterative procedures are terminated whenever $\|\rho(z_k)\| < 10^{-6}$. The experiment is carried out on nine different problems with dimensions $n = 1000, 5000, 10{,}000, 50{,}000, 100{,}000$, using seven different initial points: $z_1 = (0.1, \ldots, 0.1)^T$, $z_2 = (0.2, \ldots, 0.2)^T$, $z_3 = (0.5, \ldots, 0.5)^T$, $z_4 = (1.2, \ldots, 1.2)^T$, $z_5 = (1.5, \ldots, 1.5)^T$, $z_6 = (2, \ldots, 2)^T$ and $z_7 = \mathrm{rand}(n,1)$. The test problems considered are listed below (a sample encoding follows the list), where the mapping $\rho(z) = (\rho_1(z), \rho_2(z), \ldots, \rho_n(z))^T$.

    Problem 1 [31] Exponential Function.

$\rho_1(z) = e^{z_1} - 1, \qquad \rho_i(z) = e^{z_i} + z_{i-1} - 1 \ \ \text{for } i = 2, 3, \ldots, n, \qquad \Omega = \mathbb{R}^n_+.$

    Problem 2 [31] Modified Logarithmic Function.

$\rho_i(z) = \ln(z_i + 1) - \dfrac{z_i}{n} \ \ \text{for } i = 1, 2, \ldots, n, \qquad \Omega = \Big\{z \in \mathbb{R}^n : \sum_{i=1}^n z_i \le n,\ z_i > -1,\ i = 1, 2, \ldots, n\Big\}.$

    Problem 3 [32]

$\rho_i(z) = \min\big(\min(|z_i|, z_i^2),\ \max(|z_i|, z_i^3)\big) \ \ \text{for } i = 1, 2, \ldots, n, \qquad \Omega = \mathbb{R}^n_+.$

    Problem 4 [31] Strictly Convex Function I.

$\rho_i(z) = e^{z_i} - 1 \ \ \text{for } i = 1, 2, \ldots, n, \qquad \Omega = \mathbb{R}^n_+.$

    Problem 5 [31] Strictly Convex Function II.

$\rho_i(z) = \dfrac{i}{n}\,e^{z_i} - 1 \ \ \text{for } i = 1, 2, \ldots, n, \qquad \Omega = \mathbb{R}^n_+.$

    Problem 6 [33] Tridiagonal Exponential Function.

$\rho_1(z) = z_1 - e^{\cos(h(z_1 + z_2))},$
$\rho_i(z) = z_i - e^{\cos(h(z_{i-1} + z_i + z_{i+1}))} \ \ \text{for } i = 2, \ldots, n-1,$
$\rho_n(z) = z_n - e^{\cos(h(z_{n-1} + z_n))}, \qquad h = \dfrac{1}{n+1}.$

    Problem 7 [34] Nonsmooth Function.

$\rho_i(z) = z_i - \sin|z_i - 1| \ \ \text{for } i = 1, 2, \ldots, n, \qquad \Omega = \Big\{z \in \mathbb{R}^n : \sum_{i=1}^n z_i \le n,\ z_i \ge -1,\ i = 1, 2, \ldots, n\Big\}.$

    Problem 8 [31] The Trig exp function

$\rho_1(z) = 3z_1^3 + 2z_2 - 5 + \sin(z_1 - z_2)\sin(z_1 + z_2),$
$\rho_i(z) = 3z_i^3 + 2z_{i+1} - 5 + \sin(z_i - z_{i+1})\sin(z_i + z_{i+1}) + 4z_i - z_{i-1}e^{z_{i-1} - z_i} - 3 \ \ \text{for } i = 2, 3, \ldots, n-1,$
$\rho_n(z) = z_{n-1}e^{z_{n-1} - z_n} - 4z_n - 3, \qquad \Omega = \mathbb{R}^n_+.$

    Problem 9 [35]

$t = \sum_{i=1}^n z_i^2, \qquad c = 10^{-5},$
$\rho_i(z) = 2c\,(z_i - 1) + 4(t - 0.25)\,z_i \ \ \text{for } i = 1, 2, \ldots, n, \qquad \Omega = \mathbb{R}^n_+.$
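As an example of how these mappings are coded in vectorized form, here is Problem 1 in NumPy (an illustrative transcription; the function name is ours):

```python
import numpy as np

def rho_problem1(z):
    """Exponential function (Problem 1): rho_1(z) = e^{z_1} - 1,
    rho_i(z) = e^{z_i} + z_{i-1} - 1 for i = 2, ..., n."""
    r = np.exp(z) - 1.0
    r[1:] += z[:-1]
    return r
```

With `proj = project_nonneg` for $\Omega = \mathbb{R}^n_+$, this plugs directly into the `algorithm1` sketch above.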

Figures 1-3 present the results of the comparison between the two methods. Figure 1 shows the performance profile when the measure is the total number of iterations; Algo.1 obtains the most wins, with probability around 78%, and Algo.2 is in second place. Figure 2 shows the performance of the methods relative to the total number of function evaluations; this measure also shows that Algo.1 performs better than Algo.2. In Figure 3 the performance measure is the CPU running time, and again Algo.1 outperforms Algo.2. From the presented figures, it is clear that Algo.1 is the most efficient in solving the considered test problems. Detailed results of the numerical experiments are reported in Tables 2-10 in the Appendix.

    Figure 1.  Performance profiles for the number of iterations.
    Figure 2.  Performance profiles for the number of function evaluations.
    Figure 3.  Performance profiles for the CPU time.

The restoration of images is the process by which a distorted or damaged image is recovered to its original form. An algorithm that can perform this task with high restoration efficiency is therefore of importance. We consider the signal-to-noise ratio (SNR), the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) as metrics for measuring restoration efficiency; larger SNR, PSNR and SSIM values reflect better quality of the restored images and indicate that the restored images are closer to the original. Consider the following disturbed or incomplete observation

$b = \rho z + \omega,$ (5.1)

where $z \in \mathbb{R}^n$ is the unknown vector, $b \in \mathbb{R}^k$ is the observed data, $\rho \in \mathbb{R}^{k \times n}$ ($k \ll n$) is a linear operator and $\omega \in \mathbb{R}^k$ is an error term. Our goal in this section is to recover the unknown vector $z$. A well-known approach for obtaining $z$ is to solve the following $\ell_1$-regularization problem

$\min_{z \in \mathbb{R}^n} \Big\{\sigma \|z\|_1 + \dfrac{1}{2}\|\rho z - b\|_2^2\Big\},$ (5.2)

where $\sigma > 0$ is the regularization parameter and $\|\cdot\|_1$, $\|\cdot\|_2$ denote the $\ell_1$-norm and $\ell_2$-norm, respectively. See [36,37,38,39,40] for various algorithms for solving (5.2). For a comprehensive procedure on how to use our proposed algorithm to solve (5.2), see [41,42].
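For orientation, we sketch the reformulation commonly used in this literature (see [2,41,42]); the helper below is our own illustration under that reading, not the authors' code. Writing $z = u - v$ with $u, v \ge 0$ and $x = (u; v)$, problem (5.2) becomes a convex quadratic program $\min_{x \ge 0} \frac{1}{2}x^T H x + c^T x$ with $H = \begin{pmatrix} A^T A & -A^T A \\ -A^T A & A^T A \end{pmatrix}$ and $c = \sigma \mathbf{1} + (-A^T b;\ A^T b)$, where $A$ denotes the linear operator in (5.1); its optimality conditions can be written as the monotone, Lipschitz-continuous equation $\rho(x) = \min\{x, Hx + c\} = 0$, which fits the format (2.5):

```python
import numpy as np

def make_l1_mapping(A, b, sigma):
    """Monotone mapping rho(x) = min{x, Hx + c} for the l1 problem (5.2),
    with the split x = [u; v], z = u - v (illustrative helper)."""
    Atb = A.T @ b
    c = np.concatenate([sigma - Atb, sigma + Atb])   # sigma*1 + [-A^T b; A^T b]
    def rho(x):
        n = x.size // 2
        w = A.T @ (A @ (x[:n] - x[n:]))              # A^T A (u - v)
        return np.minimum(x, np.concatenate([w, -w]) + c)
    return rho
```

The recovered signal is then $z = u - v$, i.e., the first half of $x$ minus the second half.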

To assess the efficiency of Algo.1 in restoring images degraded by a Gaussian blur kernel of standard deviation 0.1, we compare its performance with the modified Fletcher-Reeves conjugate gradient method proposed in [30], referred to as Algo.2. Four test images of different sizes, labelled A, B, C and D, are considered in this experiment. The algorithms are implemented under the following settings:

● All codes were written and implemented in the MATLAB environment.

● The same starting point and stopping condition (with $\mathrm{Tol} = 10^{-5}$) are used for all the algorithms.

● Parameters for Algo.1 are chosen as $\eta = 1$, $\tau = 0.55$, $\kappa = 10^{-4}$. Parameters for Algo.2 are chosen as reported in the application section of [30].

● The linear operator $\rho$ in the experiment is chosen as the Gaussian matrix generated by the command rand(k,n) in MATLAB.

● The signal-to-noise ratio (SNR) is defined as

$\mathrm{SNR} := 20 \times \log_{10}\Big(\dfrac{\|z\|}{\|\tilde{z} - z\|}\Big),$

where $\tilde{z}$ is the recovered vector. The definitions of the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) can be found in [43] and [44], respectively.
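In code, the SNR above is a one-liner; a minimal sketch (our own helper, with z_true the true signal and z_rec the reconstruction):

```python
import numpy as np

def snr(z_true, z_rec):
    """SNR in decibels, per the definition above."""
    return 20.0 * np.log10(np.linalg.norm(z_true)
                           / np.linalg.norm(z_rec - z_true))
```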

    Table 1.  The numerical results obtained by Algo.1 and Algo.2 methods in restoring the blurred and noisy images.
    Algo.1 Algo.2
    Test Image SNR PSNR SSIM SNR PSNR SSIM
    A 16.74 19.03 0.765 16.66 18.95 0.760
    B 16.65 21.98 0.911 16.59 21.93 0.910
    C 20.93 22.76 0.913 20.87 22.70 0.912
    D 18.80 21.71 0.931 18.68 21.58 0.929


Figure 4 has four columns, labelled ORI, BNI, RA1 and RA2. Images in the column labelled ORI are the original images; images in the column labelled BNI are the blurred and noisy images; RA1 shows the images restored by Algo.1; and RA2 shows the images restored by Algo.2. Table 1 provides the SNR, PSNR and SSIM values for Algo.1 and Algo.2. It can be seen that Algo.1 has the highest SNR, PSNR and SSIM on all the images used for the experiment, indicating that Algo.1 is more effective than Algo.2 in restoring blurred and noisy images.

Figure 4.  From the left: the original images, the blurred and noisy images, and the images restored by Algo.1 and Algo.2.

    "The authors acknowledge the support provided by the Theoretical and Computational Science (TaCS) Center under Computational and Applied Science for Smart research Innovation Cluster (CLASSIC), Faculty of Science, KMUTT. The first author was supported by the Petchra Pra Jom Klao Doctoral Scholarship, Academic for Ph.D. Program at KMUTT (Grant No.16/2561)."

    The authors declare that they have no conflict of interest.

Table 2.  Numerical results for Problem 1. (In Tables 2-10: dim = problem dimension; inp = initial point; nit = number of iterations; nfv = number of function evaluations; tim = CPU time in seconds; norm = $\|\rho(z_k)\|$ at termination; NaN indicates failure.)
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 3 11 0.020026 0 32 128 0.15285 5.77E-07
    z2 2 7 0.022233 0 23 92 0.046757 1.03E-07
    z3 3 11 0.028924 0.00E+00 43 172 0.085067 3.24E-07
    z4 2 7 0.01272 0.00E+00 28 112 0.042162 8.50E-07
    z5 2 7 0.012594 0 38 152 0.082875 7.44E-07
    z6 2 7 0.006795 0.00E+00 34 136 0.061034 4.36E-07
    z7 29 116 0.098046 3.71E-08 62 248 0.1162 3.84E-07
    5000 z1 2 7 0.1681 0 16 64 0.097823 4.98E-07
    z2 2 7 0.07673 0 27 108 0.16998 5.89E-08
    z3 2 7 0.019262 0.00E+00 34 136 0.43167 8.96E-07
    z4 2 7 0.041163 0.00E+00 43 172 0.24401 4.77E-07
    z5 2 7 0.035437 0.00E+00 36 144 0.28409 4.72E-07
    z6 2 7 0.031377 0 25 100 0.1577 8.50E-07
    z7 68 272 1.5225 2.22E-08 NaN NaN NaN NaN
    10000 z1 2 7 0.067548 0 7 28 0.080462 7.04E-07
    z2 2 7 0.02502 0 24 96 0.9236 2.84E-07
    z3 2 7 0.037267 0.00E+00 21 84 0.82591 6.94E-07
    z4 2 7 0.027143 0 38 152 0.97602 5.16E-07
    z5 2 7 0.070627 0 28 112 0.46927 8.68E-07
    z6 2 7 0.052355 0 25 100 0.28425 8.52E-07
    z7 107 428 9.536 3.42E-08 NaN NaN NaN NaN
    50000 z1 2 7 0.35904 0 7 28 0.29707 2.32E-07
    z2 2 7 0.24819 0 15 60 1.212 2.10E-07
    z3 2 7 0.21212 0.00E+00 7 28 0.33872 7.76E-07
    z4 2 7 0.265 0.00E+00 24 96 1.2315 7.36E-07
    z5 2 7 0.22679 0.00E+00 21 84 0.98662 9.19E-07
    z6 2 7 0.46048 0.00E+00 8 32 0.44742 4.62E-07
    z7 353 1412 85.8011 1.12E-11 NaN NaN NaN NaN
    100000 z1 2 7 0.26127 0 7 28 0.66487 2.45E-07
    z2 2 7 0.42916 0 14 56 1.9555 4.72E-07
    z3 2 7 0.29924 0.00E+00 7 28 0.65463 8.36E-07
    z4 2 7 0.47753 0 28 112 4.5812 5.94E-07
    z5 2 7 0.28228 0.00E+00 17 68 2.0596 5.23E-07
    z6 2 7 0.45284 0.00E+00 8 32 1.4187 3.26E-07
    z7 NaN NaN NaN NaN NaN NaN NaN NaN

    Table 3.  Numerical result for Problem 2.
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 7 22 0.047242 1.58E-09 4 12 0.075993 5.17E-07
    z2 7 22 0.011612 2.12E-09 5 15 0.01685 6.04E-09
    z3 6 19 0.008748 7.52E-09 5 15 0.009081 4.37E-07
    z4 8 25 0.008643 1.95E-09 6 18 0.009114 1.52E-07
    z5 6 19 0.010119 8.43E-09 7 21 0.013185 1.10E-09
    z6 9 28 0.009592 1.04E-09 7 21 0.014685 1.74E-08
    z7 44 169 0.043234 9.47E-07 69 261 0.22456 6.30E-07
    5000 z1 6 20 0.062266 2.97E-07 4 12 0.012773 1.75E-07
    z2 6 20 0.031005 4.05E-07 5 15 0.019072 6.27E-10
    z3 6 19 0.022469 9.12E-10 5 15 0.03412 1.42E-07
    z4 7 23 0.048441 3.74E-07 6 18 0.040398 3.94E-08
    z5 6 19 0.032782 1.42E-09 6 18 0.030696 4.05E-07
    z6 7 22 0.038421 7.12E-09 7 21 0.02232 2.36E-09
    z7 45 169 0.32315 1.74E-07 75 290 0.68505 9.20E-07
    10000 z1 5 16 0.065175 9.23E-09 4 12 0.05281 1.21E-07
    z2 6 21 0.072794 3.06E-07 5 15 0.055137 2.79E-10
    z3 6 19 0.036537 4.32E-10 5 15 0.038347 9.73E-08
    z4 7 24 0.054625 2.82E-07 6 18 0.057504 2.56E-08
    z5 6 20 0.09281 7.38E-10 6 18 0.053546 2.93E-07
    z6 7 22 0.098951 4.21E-09 7 21 0.05207 1.24E-09
    z7 34 133 0.35652 8.45E-07 75 286 1.1715 8.81E-07
    50000 z1 7 26 1.0892 1.84E-07 4 12 0.072347 6.32E-08
    z2 9 34 0.57121 3.87E-07 5 16 0.17135 6.75E-11
    z3 6 21 0.17777 5.88E-07 5 15 0.30908 4.87E-08
    z4 10 37 0.79714 3.60E-07 6 18 0.30538 1.11E-08
    z5 7 25 0.14544 1.16E-07 6 18 0.17986 1.84E-07
    z6 8 28 0.24313 7.93E-07 7 21 0.11731 4.01E-10
    z7 36 141 1.1389 1.07E-07 87 326 3.3093 3.83E-07
    100000 z1 7 26 0.35609 2.56E-07 4 12 0.23409 5.40E-08
    z2 9 34 0.43666 5.47E-07 5 16 0.3152 4.27E-11
    z3 6 21 0.31721 7.65E-07 5 15 0.28597 4.05E-08
    z4 10 37 0.53074 5.09E-07 6 18 0.23003 8.15E-09
    z5 7 25 0.27827 1.55E-07 6 18 0.45582 1.80E-07
    z6 9 32 0.5333 1.09E-07 7 22 0.2709 2.71E-10
    z7 31 121 1.7511 5.10E-07 81 306 6.1345 9.16E-07

    Table 4.  Numerical result for Problem 3.
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 2 6 0.007199 0 2 6 0.026849 0
    z2 2 6 0.00552 0 2 6 0.003173 0
    z3 2 6 0.006377 0 2 6 0.006714 0
    z4 3 11 0.017561 0.00E+00 2 6 0.005403 0
    z5 3 11 0.007556 0.00E+00 2 6 0.009761 0
    z6 3 11 0.008376 0 2 6 0.003285 0
    z7 16 49 0.043387 2.91E-07 2 6 0.005238 0
    5000 z1 2 6 0.024798 0 2 6 0.037672 0
    z2 2 6 0.017882 0 2 6 0.016857 0
    z3 2 6 0.014761 0 2 6 0.016971 0
    z4 3 11 0.021926 0.00E+00 2 6 0.024599 0
    z5 3 11 0.019501 0.00E+00 2 6 0.12878 0
    z6 3 11 0.099645 0 2 6 0.016172 0
    z7 21 65 0.26663 8.91E-07 2 6 0.068901 0
    10000 z1 2 6 0.053329 0 2 6 0.039629 0
    z2 2 6 0.036889 0 2 6 0.029941 0
    z3 2 6 0.02419 0 2 6 0.022097 0
    z4 3 11 0.046062 0.00E+00 2 6 0.015668 0
    z5 3 11 0.17699 0.00E+00 2 6 0.1442 0
    z6 3 11 0.056058 0 2 6 0.080865 0
    z7 19 58 0.42057 1.22E-07 2 6 0.052839 0
    50000 z1 2 6 0.11901 0 2 6 0.27419 0
    z2 2 6 0.10804 0 2 6 0.228 0
    z3 2 6 0.15799 0 2 6 0.083129 0
    z4 3 11 0.27797 0.00E+00 2 6 0.09131 0
    z5 3 11 0.21594 0.00E+00 2 6 0.047357 0
    z6 3 11 0.16137 0 2 6 0.049002 0
    z7 21 64 1.156 3.21E-07 2 6 0.12806 0
    100000 z1 2 6 0.21976 0 2 6 0.15418 0
    z2 2 6 0.19397 0 2 6 0.44568 0
    z3 2 6 0.17969 0 2 6 0.79033 0
    z4 3 11 0.30701 0.00E+00 2 6 0.20222 0
    z5 3 11 0.72994 0.00E+00 2 6 0.20959 0
    z6 3 11 0.36806 0 2 6 0.26684 0
    z7 22 67 1.8809 2.86E-07 2 6 0.23472 0

    Table 5.  Numerical result for Problem 4.
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 2 7 0.007686 0 8 31 0.025113 1.65E-07
    z2 2 7 0.004973 0 7 28 0.007628 2.32E-07
    z3 2 7 0.004693 0.00E+00 8 32 0.009827 7.42E-07
    z4 2 7 0.005652 0.00E+00 9 35 0.012267 1.62E-07
    z5 2 7 0.007206 0.00E+00 7 28 0.012782 3.92E-07
    z6 2 7 0.005871 0.00E+00 8 32 0.016455 3.68E-07
    z7 22 87 0.030189 0.00E+00 71 284 0.045157 1.91E-07
    5000 z1 2 7 0.01789 0 8 31 0.035804 3.68E-07
    z2 2 7 0.083644 0 7 28 0.056219 5.20E-07
    z3 2 7 0.019787 0.00E+00 9 36 0.028182 1.66E-07
    z4 2 7 0.02077 0 9 35 0.028652 3.61E-07
    z5 2 7 0.023139 0 7 28 0.09901 8.76E-07
    z6 2 7 0.045152 0 8 32 0.046074 8.22E-07
    z7 77 308 0.88375 2.85E-07 51 204 0.12808 9.55E-07
    10000 z1 2 7 0.025792 0 8 32 0.043945 5.20E-07
    z2 2 7 0.020051 0 7 27 0.050306 7.35E-07
    z3 2 7 0.025936 0.00E+00 9 36 0.039643 2.35E-07
    z4 2 7 0.03822 0 9 35 0.041378 5.11E-07
    z5 2 7 0.03849 0 8 32 0.13231 1.24E-07
    z6 2 7 0.031354 0.00E+00 NaN NaN NaN NaN
    z7 101 404 3.4918 4.06E-09 NaN NaN NaN NaN
    50000 z1 2 7 0.091176 0 9 34 0.23565 0
    z2 2 7 0.090561 0 NaN NaN NaN NaN
    z3 2 7 0.13857 0.00E+00 9 35 0.12604 5.25E-07
    z4 2 7 0.10731 0.00E+00 10 38 0.426 0
    z5 2 7 0.14284 0.00E+00 8 31 0.47179 2.77E-07
    z6 2 7 0.29418 0.00E+00 9 35 0.21126 2.60E-07
    z7 110 439 8.6871 0 44 176 1.2526 3.55E-07
    100000 z1 2 7 0.20371 0 9 36 0.2659 1.65E-07
    z2 2 7 0.26727 0 8 30 0.48604 0
    z3 2 7 0.1588 0.00E+00 9 35 0.35032 7.42E-07
    z4 2 7 0.20624 0.00E+00 10 39 0.34301 1.62E-07
    z5 2 7 0.19404 0.00E+00 NaN NaN NaN NaN
    z6 2 7 0.21718 0.00E+00 9 35 0.31142 3.68E-07
    z7 111 444 18.0039 6.11E-08 NaN NaN NaN NaN

    Table 6.  Numerical result for Problem 5.
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 34 127 0.026467 1.62E-07 71 263 0.16103 3.21E-07
    z2 36 140 0.026972 7.43E-07 62 235 0.052281 1.13E-07
    z3 52 205 0.16096 2.58E-07 50 194 0.088863 3.72E-07
    z4 96 378 0.55079 4.35E-07 NaN NaN NaN NaN
    z5 123 492 0.48012 3.96E-07 NaN NaN NaN NaN
    z6 196 784 1.0967 6.21E-07 NaN NaN NaN NaN
    z7 115 459 0.34961 2.89E-07 NaN NaN NaN NaN
    5000 z1 59 232 0.68163 2.32E-07 63 231 0.30231 3.90E-07
    z2 50 188 0.29441 6.42E-07 72 282 0.18091 7.31E-07
    z3 179 709 2.2218 2.91E-07 60 232 0.14861 1.47E-07
    z4 171 684 2.9204 2.99E-07 NaN NaN NaN NaN
    z5 297 1187 5.9983 3.31E-07 NaN NaN NaN NaN
    z6 420 1680 9.6236 1.67E-07 NaN NaN NaN NaN
    z7 187 744 3.4767 8.43E-07 NaN NaN NaN NaN
    10000 z1 77 300 1.3784 1.39E-07 75 283 0.27114 2.35E-07
    z2 74 283 1.5399 1.34E-07 55 208 0.20873 3.12E-07
    z3 214 843 5.0625 9.68E-07 67 259 0.65684 2.52E-07
    z4 253 1012 8.4598 5.48E-07 NaN NaN NaN NaN
    z5 383 1531 15.2491 1.45E-07 NaN NaN NaN NaN
    z6 575 2300 24.956 4.27E-07 NaN NaN NaN NaN
    z7 323 1291 9.6152 2.90E-07 NaN NaN NaN NaN
    50000 z1 135 534 12.3192 9.85E-07 65 253 1.9331 1.74E-07
    z2 342 1357 46.7469 1.53E-07 94 369 4.3154 4.77E-07
    z3 326 1294 39.8986 4.97E-07 NaN NaN NaN NaN
    z4 504 2016 82.9841 3.45E-07 NaN NaN NaN NaN
    z5 NaN NaN NaN NaN NaN NaN NaN NaN
    z6 NaN NaN NaN NaN NaN NaN NaN NaN
    z7 602 2403 97.0953 6.65E-07 NaN NaN NaN NaN
    100000 z1 164 645 25.8558 1.87E-07 NaN NaN NaN NaN
    z2 NaN NaN NaN NaN NaN NaN NaN NaN
    z3 400 1590 126.0758 7.38E-07 NaN NaN NaN NaN
    z4 636 2544 240.5206 3.57E-07 NaN NaN NaN NaN
    z5 NaN NaN NaN NaN NaN NaN NaN NaN
    z6 NaN NaN NaN NaN NaN NaN NaN NaN
    z7 NaN NaN NaN NaN NaN NaN NaN NaN

    Table 7.  Numerical result for Problem 6.
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 9 36 0.17935 8.25E-07 9 36 0.0153 8.24E-07
    z2 9 36 0.03051 7.93E-07 9 36 0.048509 7.93E-07
    z3 9 36 0.027967 6.99E-07 9 36 0.017521 6.98E-07
    z4 9 36 0.015472 4.79E-07 9 36 0.014811 4.78E-07
    z5 9 36 0.007122 3.84E-07 9 36 0.016431 3.83E-07
    z6 9 36 0.010164 2.27E-07 9 36 0.009737 2.26E-07
    z7 9 36 0.020191 7.23E-07 9 36 0.017515 7.06E-07
    5000 z1 10 40 0.048118 1.85E-07 10 40 0.082844 1.85E-07
    z2 10 40 0.097072 1.78E-07 10 40 0.050343 1.78E-07
    z3 10 40 0.032297 1.57E-07 10 40 0.10792 1.57E-07
    z4 10 40 0.043942 1.07E-07 10 40 0.076199 1.07E-07
    z5 9 36 0.043841 8.61E-07 9 36 0.037916 8.61E-07
    z6 9 36 0.033263 5.08E-07 9 36 0.069375 5.08E-07
    z7 10 40 0.037194 1.58E-07 10 40 0.047672 1.58E-07
    10000 z1 10 40 0.082406 2.62E-07 10 40 0.076648 2.62E-07
    z2 10 40 0.068947 2.52E-07 10 40 0.15678 2.52E-07
    z3 10 40 0.058721 2.22E-07 10 40 0.13597 2.22E-07
    z4 10 40 0.078257 1.52E-07 10 40 0.08399 1.52E-07
    z5 10 40 0.062069 1.22E-07 10 40 0.07822 1.22E-07
    z6 9 36 0.053275 7.18E-07 9 36 0.1205 7.18E-07
    z7 10 40 0.057688 2.24E-07 10 40 0.080168 2.23E-07
    50000 z1 10 40 0.22352 5.85E-07 10 39 0.38243 5.85E-07
    z2 10 40 0.27436 5.63E-07 10 39 0.41361 5.63E-07
    z3 10 40 0.23122 4.96E-07 10 39 0.30721 4.96E-07
    z4 10 40 0.21192 3.40E-07 10 39 0.43086 3.40E-07
    z5 10 40 0.23892 2.72E-07 10 38 0.29829 1.26E-15
    z6 10 40 0.29017 1.61E-07 10 38 0.51415 6.28E-16
    z7 10 40 0.25616 5.01E-07 10 39 0.29756 5.00E-07
    100000 z1 10 40 0.82944 8.28E-07 10 39 1.1183 8.28E-07
    z2 10 40 0.47168 7.96E-07 10 38 0.6117 6.28E-16
    z3 10 40 0.49749 7.01E-07 10 38 0.81145 6.28E-16
    z4 10 40 0.52125 4.80E-07 10 38 0.79886 0
    z5 10 40 0.69499 3.85E-07 10 38 0.60219 0
    z6 10 40 0.47656 2.27E-07 10 38 0.72864 0
    z7 10 40 0.49578 7.07E-07 10 39 0.80741 7.07E-07

    Table 8.  Numerical result for Problem 7.
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 5 20 0.046544 3.24E-07 5 20 0.008647 3.24E-07
    z2 5 20 0.009418 1.43E-07 5 20 0.013849 1.43E-07
    z3 5 20 0.038932 1.68E-08 4 16 0.015388 5.81E-08
    z4 6 24 0.01166 9.16E-09 6 24 0.010967 3.39E-08
    z5 6 24 0.010929 1.23E-08 6 24 0.0107 4.99E-08
    z6 6 23 0.01937 1.04E-07 6 23 0.014025 6.55E-08
    z7 26 104 0.032336 8.36E-09 36 144 0.12555 5.34E-08
    5000 z1 5 20 0.029586 7.25E-07 5 20 0.041107 7.25E-07
    z2 5 20 0.027892 3.20E-07 5 20 0.038182 3.20E-07
    z3 5 20 0.035333 3.75E-08 4 16 0.12123 1.30E-07
    z4 6 24 0.032225 2.05E-08 6 24 0.05211 7.58E-08
    z5 6 24 0.026546 2.75E-08 6 24 0.038675 1.12E-07
    z6 6 23 0.032529 2.32E-07 6 23 0.081384 1.46E-07
    z7 34 136 0.24122 3.59E-08 41 164 0.29917 1.14E-07
    10000 z1 6 24 0.043879 5.12E-09 6 24 0.079589 5.12E-09
    z2 5 20 0.036531 4.52E-07 5 20 0.1417 4.52E-07
    z3 5 20 0.050902 5.31E-08 4 16 0.045901 1.84E-07
    z4 6 24 0.054078 2.90E-08 6 24 0.059807 1.07E-07
    z5 6 24 0.052048 3.89E-08 6 24 0.099213 1.58E-07
    z6 6 23 0.04894 3.28E-07 6 23 0.054029 2.07E-07
    z7 41 164 0.30793 3.45E-08 45 180 0.92444 3.64E-07
    50000 z1 6 24 0.14 1.15E-08 6 24 0.26027 1.15E-08
    z2 6 24 0.14343 5.06E-09 6 24 0.60276 5.06E-09
    z3 5 20 0.15201 1.19E-07 4 16 0.17247 4.11E-07
    z4 6 24 0.34794 6.48E-08 6 24 0.22999 2.40E-07
    z5 6 24 0.15433 8.70E-08 6 24 0.38048 3.53E-07
    z6 6 23 0.1425 7.35E-07 6 23 0.2271 4.63E-07
    z7 29 116 1.295 9.41E-09 44 176 2.2436 7.06E-07
    100000 z1 6 24 0.39791 1.62E-08 6 24 0.94834 1.62E-08
    z2 6 24 0.47548 7.15E-09 6 24 0.43453 7.15E-09
    z3 5 20 0.48174 1.68E-07 4 16 0.29517 5.81E-07
    z4 6 24 0.26721 9.16E-08 6 24 0.55119 3.39E-07
    z5 6 24 0.28512 1.23E-07 6 24 0.61073 4.99E-07
    z6 7 27 0.5385 5.19E-09 6 23 0.42035 6.55E-07
    z7 29 116 1.6021 1.19E-08 41 164 3.9953 9.23E-07

    Table 9.  Numerical result for Problem 8.
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 66 264 0.934 3.48E-07 NaN NaN NaN NaN
    z2 101 404 0.99428 4.28E-07 41 164 0.75214 4.29E-07
    z3 40 160 0.40127 3.33E-07 NaN NaN NaN NaN
    z4 39 156 0.5071 5.07E-07 39 156 0.71207 3.83E-07
    z5 36 144 0.61923 4.69E-07 35 140 1.4123 4.07E-07
    z6 4 14 0.071864 NaN 4 14 0.059454 NaN
    z7 23 89 0.48051 NaN NaN NaN NaN NaN
    5000 z1 52 208 2.7649 2.91E-07 NaN NaN NaN NaN
    z2 44 176 2.1027 3.54E-07 NaN NaN NaN NaN
    z3 42 168 2.1325 2.95E-07 NaN NaN NaN NaN
    z4 37 148 2.0738 3.41E-07 NaN NaN NaN NaN
    z5 16 60 0.64982 NaN NaN NaN NaN NaN
    z6 20 76 0.98188 NaN NaN NaN NaN NaN
    z7 301 1202 18.7543 4.37E-07 NaN NaN NaN NaN
    10000 z1 77 303 9.6495 3.64E-07 NaN NaN NaN NaN
    z2 71 284 8.0859 3.74E-07 NaN NaN NaN NaN
    z3 62 248 7.1755 3.27E-07 NaN NaN NaN NaN
    z4 48 192 4.1575 4.42E-07 NaN NaN NaN NaN
    z5 15 55 0.93456 NaN NaN NaN NaN NaN
    z6 123 490 12.4072 3.88E-07 NaN NaN NaN NaN
    z7 307 1226 35.579 3.46E-07 NaN NaN NaN NaN
    50000 z1 24 89 8.5017 NaN NaN NaN NaN NaN
    z2 89 355 45.0395 4.34E-07 NaN NaN NaN NaN
    z3 65 260 28.4752 3.57E-07 NaN NaN NaN NaN
    z4 431 1718 135.7493 3.88E-07 NaN NaN NaN NaN
    z5 6 21 2.1067 NaN NaN NaN NaN NaN
    z6 6 21 1.8349 NaN NaN NaN NaN NaN
    z7 7 24 1.8872 NaN NaN NaN NaN NaN
    100000 z1 34 130 31.5135 NaN NaN NaN NaN NaN
    z2 5 17 1.9076 NaN NaN NaN NaN NaN
    z3 87 332 64.5816 3.00E-07 NaN NaN NaN NaN
    z4 76 303 68.3533 4.49E-07 NaN NaN NaN NaN
    z5 5 17 2.2305 NaN NaN NaN NaN NaN
    z6 5 17 2.5293 NaN NaN NaN NaN NaN
    z7 6 21 3.1078 NaN NaN NaN NaN NaN

    Table 10.  Numerical result for Problem 9.
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 10 34 0.032445 1.06E-07 10 34 0.005932 1.06E-07
    z2 10 34 0.008239 1.06E-07 10 34 0.010335 1.06E-07
    z3 10 34 0.0073 1.06E-07 10 34 0.008407 1.06E-07
    z4 10 34 0.008495 1.06E-07 10 34 0.010614 1.06E-07
    z5 10 34 0.00772 1.06E-07 10 34 0.00775 1.06E-07
    z6 10 34 0.011383 1.06E-07 10 35 0.009311 1.06E-07
    z7 67 213 0.024778 9.71E-07 10 34 0.008864 1.06E-07
    5000 z1 7 25 0.022534 6.89E-08 7 25 0.027033 6.89E-08
    z2 7 25 0.032305 6.89E-08 7 25 0.02838 6.89E-08
    z3 7 25 0.026468 6.89E-08 7 25 0.068469 6.89E-08
    z4 7 25 0.034453 6.89E-08 7 26 0.037886 6.89E-08
    z5 7 25 0.021703 6.89E-08 7 26 0.037186 6.89E-08
    z6 7 25 0.027352 6.89E-08 7 26 0.077955 6.89E-08
    z7 20 66 0.061189 9.72E-07 7 25 0.037992 6.89E-08
    10000 z1 6 22 0.07498 8.13E-08 6 22 0.054682 8.13E-08
    z2 6 22 0.047478 8.13E-08 6 22 0.21797 8.13E-08
    z3 6 22 0.052347 8.13E-08 6 22 0.081579 8.13E-08
    z4 6 22 0.047644 8.13E-08 6 23 0.085064 8.13E-08
    z5 6 22 0.068304 8.13E-08 6 23 0.19028 8.13E-08
    z6 6 22 0.042771 8.13E-08 6 23 0.15365 8.13E-08
    z7 12 41 0.074071 9.08E-07 6 22 0.056989 8.13E-08
    50000 z1 5 19 0.22112 1.41E-07 5 19 0.60662 1.41E-07
    z2 5 19 0.21638 1.41E-07 5 19 0.33244 1.41E-07
    z3 5 19 0.22186 1.41E-07 5 20 0.8389 1.41E-07
    z4 5 19 0.37207 1.41E-07 5 20 0.63139 1.41E-07
    z5 5 19 0.36107 1.41E-07 5 20 1.046 1.41E-07
    z6 5 19 0.27063 1.41E-07 5 20 1.4673 1.41E-07
    z7 59 235 2.7862 4.11E-07 5 19 0.57234 1.41E-07
    100000 z1 6 23 0.93893 2.10E-07 6 23 1.3525 2.10E-07
    z2 6 23 0.60445 2.10E-07 6 24 1.5313 2.10E-07
    z3 6 23 0.71683 2.10E-07 6 24 1.6022 2.10E-07
    z4 6 23 0.57114 2.10E-07 6 24 1.7882 2.10E-07
    z5 6 23 0.57099 2.10E-07 6 24 1.878 2.10E-07
    z6 6 23 0.69104 2.10E-07 6 24 1.9634 2.10E-07
    z7 34 135 4.3899 4.52E-07 6 23 1.4688 2.10E-07



    [1] M. V. Solodov, B. F. Svaiter, A new projection method for variational inequality problems, SIAM J. Control Optim., 37 (1999), 765-776. doi: 10.1137/S0363012997317475
    [2] Y. Xiao, H. Zhu, A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing, J. Math. Anal. Appl., 405 (2013), 310-319. doi: 10.1016/j.jmaa.2013.04.017
    [3] J. Liu, S. Li, A projection method for convex constrained monotone nonlinear equations with applications, Comput. Math. Appl., 70 (2015), 2442-2453. doi: 10.1016/j.camwa.2015.09.014
    [4] J. Liu, Y. Feng, A derivative-free iterative method for nonlinear monotone equations with convex constraints, Numer. Algorithms, 82 (2019) 245-262.
    [5] A. H. Ibrahim, A. I. Garba, H. Usman, J. Abubakar, A. B. Abubakar, Derivative-free projection algorithm for nonlinear equations with convex constraints, Thai Journal of Mathematics, 18 (2020), 212-232.
    [6] P. Kaelo, M. Koorapetse, A globally convergent projection method for a system of nonlinear monotone equations, Int. J. Comput. Math., 2020.
    [7] A. B. Abubakar, J. Rilwan, S. E. Yimer, A. H. Ibrahim, I. Ahmed, Spectral three-term conjugate descent method for solving nonlinear monotone equations with convex constraints, Thai Journal of Mathematics, 18 (2020) 501-517.
    [8] A. H. Ibrahim, P. Kumam, W. Kumam, A Family of Derivative-Free Conjugate Gradient Methods for Constrained Nonlinear Equations and Image Restoration, IEEE Access, 8 (2020), 162714- 162729. doi: 10.1109/ACCESS.2020.3020969
    [9] A. B. Abubakar, A. H. Ibrahim, A. B. Muhammad, C. Tammer, A modified descent Dai-Yuan conjugate gradient method for constraint nonlinear monotone operator equations, Applied Analysis and Optimization, 4 (2020), 1-24.
    [10] K. Meintjes, A. P. Morgan, A methodology for solving chemical equilibrium systems, Appl. Math. Comput., 22 (1987), 333-361.
    [11] K. Meintjes, A. P. Morgan, Chemical equilibrium systems as numerical test problems, ACM T. Math. Software, 16 (1990), 143-151. doi: 10.1145/78928.78930
    [12] M. W. Berry, M. Browne, A. N. Langville, V. P. Pauca, R. J. Plemmons, Algorithms and applications for approximate nonnegative matrix factorization, Comput. stat. data an., 52 (2007), 155-173. doi: 10.1016/j.csda.2006.11.006
    [13] S. P. Dirkse, M. C. Ferris, Mcplib: A collection of nonlinear mixed complementarity problems, Optim. method. softw., 5 (1995), 319-345. doi: 10.1080/10556789508805619
    [14] J. E. Dennis, J. J. Moré, A characterization of superlinear convergence and its application to quasi-Newton methods, Math. comput., 28 (1974), 549-560.
    [15] R. Fletcher, C. M. Reeves, Function minimization by conjugate gradients, Comput. J., 7 (1964), 149-154. doi: 10.1093/comjnl/7.2.149
    [16] R. Fletcher, Practical Methods of Optimization, John Wiley & Sons, 2013.
    [17] Y. H. Dai, Y. Yuan, A nonlinear conjugate gradient method with a strong global convergence property, SIAM J. optimiz., 10 (1999), 177-182. doi: 10.1137/S1052623497318992
    [18] M. R. Hestenes, E. Stiefel, Methods of conjugate gradients for solving linear systems, Journal of research of the National Bureau of Standards, 49 (1952), 409-436. doi: 10.6028/jres.049.044
    [19] E. Polak, G. Ribiere, Note sur la convergence de méthodes de directions conjuguées, ESAIM-Math. Model. Num., 3 (1969), 35-43.
    [20] B. T. Polyak, The conjugate gradient method in extremal problems, U. S. S. R. Comput. Math. Math. Phys., 9 (1969), 94-112. doi: 10.1016/0041-5553(69)90035-4
    [21] Y. Liu, C. Storey, Efficient generalized conjugate gradient algorithms, part 1: theory, J. optimiz. theory app., 69 (1991), 129-137.
[22] S. S. Djordjević, New hybrid conjugate gradient method as a convex combination of LS and FR methods, Acta Math. Sci., 39 (2019), 214-228.
    [23] A. H. Ibrahim, P. Kumam, A. B. Abubakar, W. Jirakitpuwapat, J. Abubakar, A hybrid conjugate gradient algorithm for constrained monotone equations with application in compressive sensing, Heliyon, 6 (2020), 1-17.
    [24] D. H. Li, M. Fukushima, A modified BFGS method and its global convergence in nonconvex minimization, J. Comput. Appl. Math., 129 (2001), 15-35. doi: 10.1016/S0377-0427(00)00540-9
    [25] D. H. Li, M. Fukushima, On the global convergence of the BFGS method for nonconvex unconstrained optimization problems, SIAM J. Optimiz., 11 (2001), 1054-1064. doi: 10.1137/S1052623499354242
    [26] E. G. Birgin, J. M. Martinez, A spectral conjugate gradient method for unconstrained optimization, Appl. Math. opt., 43 (2001), 117-128. doi: 10.1007/s00245-001-0003-0
    [27] G. Yuan, T. Li, W. Hu, A conjugate gradient algorithm for large-scale nonlinear equations and image restoration problems, Appl. Numer. Math., 147 (2020), 129-141. doi: 10.1016/j.apnum.2019.08.022
    [28] D. H. Li, X. L. Wang, A modified Fletcher-Reeves-type derivative-free method for symmetric nonlinear equations, Numerical Algebra, Control & Optimization, 1 (2011), 71-82.
    [29] E. D. Dolan, J. J. Moré, Benchmarking optimization software with performance profiles, Math. program., 91 (2002), 201-213.
[30] A. B. Abubakar, P. Kumam, H. Mohammad, A. M. Awwal, K. Sitthithakerngkiet, A modified Fletcher-Reeves conjugate gradient method for monotone nonlinear equations with some applications, Mathematics, 7 (2019), 1-25.
    [31] W. La Cruz, J. Martínez, M. Raydan, Spectral residual method without gradient information for solving large-scale nonlinear systems of equations, Math. Comput., 75 (2006), 1429-1448. doi: 10.1090/S0025-5718-06-01840-0
    [32] W. La Cruz, A spectral algorithm for large-scale systems of nonlinear monotone equations, Numer. Algorithms, 76 (2017), 1109-1130.
    [33] Y. Bing, G. Lin, An efficient implementation of Merrill's method for sparse or partially separable systems of nonlinear equations, SIAM J. Optimiz., 1 (1991), 206-221. doi: 10.1137/0801015
    [34] Z. Yu, J. Lin, J. Sun, Y. Xiao, L. Liu, Z. Li, Spectral gradient projection method for monotone nonlinear equations with convex constraints, Appl. Numer. Math., 59 (2009), 2416-2423. doi: 10.1016/j.apnum.2009.04.004
    [35] Y. Ding, Y. Xiao, J. Li, A class of conjugate gradient methods for convex constrained monotone equations, Optimization, 66 (2017), 2309-2328. doi: 10.1080/02331934.2017.1372438
    [36] I. Daubechies, M. Defrise, C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Commun. Pur. Appl. Math., 57 (2004), 1413-1457.
    [37] A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), 183-202. doi: 10.1137/080716542
    [38] E. T. Hale, W. Yin, Y. Zhang, A fixed-point continuation method for l1-regularized minimization with applications to compressed sensing, CAAM TR07-07, Rice University, 2007, 1-45.
    [39] S. Huang, Z. Wan, A new nonmonotone spectral residual method for nonsmooth nonlinear equations, J. Comput. Appl. Math., 313 (2017), 82-101. doi: 10.1016/j.cam.2016.09.014
    [40] J. Abubakar, P. Kumam, A. H. Ibrahim, A. Padcharoen, Relaxed Inertial Tseng's Type Method for Solving the Inclusion Problem with Application to Image Restoration, Mathematics, 8 (2020), 1-19.
[41] A. H. Ibrahim, P. Kumam, A. B. Abubakar, J. Abubakar, A. B. Muhammad, Least-square-based three-term conjugate gradient projection method for ℓ1-norm problems with application to compressed sensing, Mathematics, 8 (2020), 1-21. doi: 10.3390/math8101793
[42] A. B. Abubakar, K. Muangchoo, A. Muhammad, A. H. Ibrahim, A spectral gradient projection method for sparse signal reconstruction in compressive sensing, Modern Applied Science, 14 (2020), 86-93.
    [43] A. C. Bovik, Handbook of Image and Video Processing, Academic press, 2010.
    [44] S. M. Lajevardi, Structural similarity classifier for facial expression recognition, Signal Image Video P., 8 (2014), 1103-1110. doi: 10.1007/s11760-014-0639-2
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)