Research article

On the distribution of k-full lattice points in \mathbb{Z}^2

  • Let \mathbb{Z}^2 be the two-dimensional integer lattice. For an integer k \ge 2, we say a non-zero lattice point in \mathbb{Z}^2 is k-full if the greatest common divisor of its coordinates is a k-full number. In this paper, we first prove that the density of k-full lattice points in \mathbb{Z}^2 is c_k = \prod_p \left(1 - p^{-2} + p^{-2k}\right), where the product runs over all primes. Then we show that the density of k-full lattice points on a path of an \alpha-random walk in \mathbb{Z}^2 is almost surely c_k, which is independent of \alpha.

    Citation: Shunqi Ma. On the distribution of k-full lattice points in \mathbb{Z}^2 [J]. AIMS Mathematics, 2022, 7(6): 10596-10608. doi: 10.3934/math.2022591




    Consider the following system of equations:

    \begin{equation} H(x) = 0, \quad x \in \psi, \end{equation} (1.1)

    where H:\mathbb{R}^n \to \mathbb{R}^n is a continuous and monotone mapping and \psi \subseteq \mathbb{R}^n is a non-empty closed and convex set. The monotonicity of H implies that it satisfies the following inequality:

    \begin{equation} \big(H(x) - H(y)\big)^T (x - y) \ge 0, \quad \forall\, x, y \in \mathbb{R}^n. \end{equation} (1.2)

    In this paper, \mathbb{R}^n and \|\cdot\| refer to the real n-dimensional space and the Euclidean norm, respectively, while H(x_k) = H_k.

    The system described above arises in various fields of science and engineering, as many scientific formulations naturally lead to systems of nonlinear equations. These systems appear in mathematical research areas such as differential equations, optimization theory, and integral equations. They also serve as subproblems in generalized proximal algorithms [1].

    In chemistry, such systems frequently arise in the determination of chemical equilibrium points, while in economics, equilibrium analysis is often conducted in a similar manner [2]. Some variational inequalities are reformulated as monotone systems before being solved [3]. Additionally, these systems have applications in compressed sensing, particularly in signal reconstruction and image de-blurring problems[2,4,5,6].

    These systems also play a crucial role in solving optimal power flow equations [7]. In radiative transfer processes, they are applied through the discretization of the Chandrasekhar integral equation [2]. Furthermore, they are utilized in financial institutions for forecasting and automation [8].

    Numerous methods and techniques have been developed, analyzed, and refined to solve system (1.1). Among these, the most widely used is Newton's method and its improved variants [9]. Newton's and quasi-Newton techniques are commonly employed for solving constrained monotone systems of nonlinear equations [9]. However, despite their popularity, the computational cost and storage requirements associated with the Jacobian matrix (the first derivative of the system) make Newton's method impractical for large-scale systems.

    Quasi-Newton methods, in contrast, approximate the Jacobian matrix rather than computing it explicitly [10,11]. While these methods eliminate the need for exact derivative computations, they require substantial memory for matrix storage and are slower than Newton's method due to limited Jacobian information [10,11]. The steepest descent is among the popular descent methods for solving system (1.1). Although previously explored, it has largely fallen out of favor due to its slow convergence rate [12].

    Given the challenges posed by large-scale systems, researchers have increasingly focused on conjugate gradient (CG) methods. As derivative-free approaches, CG methods are particularly well-suited for solving large-scale systems [13].

    The CG methods are iterative schemes that generate a sequence of iterative points via

    \begin{equation} x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots, \end{equation} (1.3)

    where \alpha_k is the step length and d_k is the search direction [14]. For CG methods, the search direction is defined as:

    \begin{equation} d_k = \begin{cases} -H_k, & \text{if } k = 0, \\ -H_k + \beta_k s_{k-1}, & \text{if } k \ge 1, \end{cases} \end{equation} (1.4)

    where s_{k-1} = x_k - x_{k-1}, and \beta_k is the CG parameter.

    The CG parameter \beta_k takes different forms depending on its formulation. For example,

    \begin{equation} \beta_k^{CD} = -\frac{\|H_k\|^2}{H_{k-1}^T s_{k-1}}, \end{equation} (1.5)

    which is known as the conjugate descent (CD) CG parameter [15].
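    As an illustration of (1.3)–(1.5) (not taken from the paper, whose experiments are implemented in MATLAB), the following NumPy sketch computes one CD search direction; the function name and the toy data are assumptions made here for illustration only.

```python
import numpy as np

def cd_direction(H_k, H_prev, s_prev, first=False):
    """Illustrative CG direction (1.4) with the CD parameter (1.5).

    H_k, H_prev : residuals H(x_k) and H(x_{k-1});  s_prev : s_{k-1} = x_k - x_{k-1}.
    """
    if first:
        return -H_k                                          # d_0 = -H_0
    beta_cd = -np.dot(H_k, H_k) / np.dot(H_prev, s_prev)     # eq. (1.5)
    return -H_k + beta_cd * s_prev                           # eq. (1.4)

# toy usage with the trivially monotone map H(x) = x
x_prev, x_k = np.array([1.0, -2.0]), np.array([0.5, -1.0])
print(cd_direction(x_k, x_prev, x_k - x_prev))
```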

    The parameter \beta_k is considered the most critical component of any CG direction, as different parameters lead to distinct CG directions with unique convergence properties and numerical performance. While some CG parameters are computationally efficient, others may fail to satisfy the inequality below, which is an essential condition for global convergence [2]:

    \begin{equation} H_k^T d_k \le -\tau \|H_k\|^2, \quad \tau > 0. \end{equation} (1.6)

    The CD parameter \beta_k^{CD} belongs to the class of classical CG methods, exhibiting good convergence properties but weak numerical performance. To address this limitation, Yang et al. [16] hybridized the Liu-Storey (LS) and CD parameters using Wolfe's line search strategy, resulting in an efficient algorithm. Similarly, Kaelo et al. [17] hybridized the LS, Hestenes-Stiefel (HS), and Dai-Yuan (DY) CG parameters, producing an algorithm that satisfies (1.6). Using the strong Wolfe line search strategy, the proposed algorithm achieved global convergence.

    Snezana and Djordjevic [18] proposed a new hybrid of the CD and LS methods, ensuring that the hybrid parameter satisfies (1.6). Liu et al. [19] presented a scheme that also satisfies (1.6), introducing an LS-type CG parameter for solving the system (1.1) by applying Powell's strategy to the LS-type scheme [20]. Motivated by the method in [21], Wang et al. [22] developed a three-term CG algorithm for solving (1.1). Similarly, Li Wang [23], building on the FR-type CG iterative scheme, proposed an algorithm for symmetric systems by improving the method of Zhang et al. [24]. This method satisfies (1.6) and converges globally.

    Liu and Li [25], inspired by the work of Yu et al. [26], presented a multivariate spectral algorithm for solving systems of the form (1.1). Acknowledging the weak numerical performance of the classical CD method, Zhang et al. [27] proposed a three-term CG direction satisfying (1.6) and the trust-region property. The proposed method achieved convergence through a modified Weak Wolfe-Powell (MWWP) line search strategy coupled with a projection algorithm, with numerical results outperforming many existing algorithms.

    Recent studies have proposed various CG-based approaches to tackle constrained and unconstrained nonlinear equations. Ibrahim et al. [28] introduced two classes of spectral three-term derivative-free methods that effectively solve nonlinear equations. However, despite their efficiency, these methods struggle with ill-conditioned problems, leading to slow convergence. Similarly, Sabi'u and Sirisubtawee [29] proposed an inertial Dai-Liao conjugate gradient method tailored for convex constrained monotone equations, ensuring better stability and robustness. Nonetheless, inertial methods often require careful parameter tuning to prevent instability.

    Additionally, Zheng et al. [30] developed a conjugate gradient projection method that efficiently handles equations with convex constraints. However, projection-based methods can suffer from increased computational cost, especially in high-dimensional problems [31]. Another notable contribution is by Chankong et al. [32], who formulated a class of derivative-free three-term descent Hestenes-Stiefel conjugate gradient algorithms for constrained nonlinear problems. While these algorithms demonstrate promising performance, they often encounter difficulties in non-smooth or highly nonlinear problems where descent conditions are hard to maintain. These recent advancements accentuate the increasing focus on improving CG-based methods by incorporating new techniques, such as inertial acceleration, spectral properties, and projection methods, to enhance convergence and stability.

    Originally, classical CG methods were designed for solving unconstrained optimization problems of the form

    \begin{equation} \min f(x), \quad x \in \mathbb{R}^n, \end{equation} (1.7)

    where f:\mathbb{R}^n \to \mathbb{R} is a nonlinear, continuously differentiable function whose second derivative exists. Given their desirable properties and the fact that the first-order optimality condition for (1.7), i.e., \nabla f(x) = 0, is equivalent to (1.1) whenever H = \nabla f, all methods applied to (1.7) can be extended to tackle problems of the form (1.1).
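    To make this equivalence concrete, here is a small illustrative check (my own example, not from the paper): for f(x) = \sum_i (e^{x_i} - x_i), the gradient is \nabla f(x) = e^{x} - 1, so solving \nabla f(x) = 0 is exactly a monotone system H(x) = 0 of the form (1.1).

```python
import numpy as np

# f(x) = sum_i (exp(x_i) - x_i)  has gradient  grad f(x) = exp(x) - 1,
# so the first-order condition grad f(x) = 0 is the monotone system H(x) = 0
# with H(x) = exp(x) - 1 (the same map used later as test problem T5).
f = lambda x: np.sum(np.exp(x) - x)
H = lambda x: np.exp(x) - 1.0

x = np.array([0.2, -0.1, 0.0])
eps = 1e-6
num_grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(3)])
print(np.allclose(num_grad, H(x), atol=1e-6))   # finite differences agree with H(x)
```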

    To modify an existing conjugate gradient (CG) parameter or produce a new successful one, two factors must be considered:

    ● The new modified CG parameter must satisfy (1.6), which will ensure global convergence.

    ● It must outperform existing methods computationally by carrying out numerical comparisons with the relevant methods in the literature.

    In [33], Yu and Guan presented a modified version of the DY CG parameter as follows:

    \begin{equation} \beta_k^{DDY} = \beta_k^{DY} - b\,\frac{\|H_k\|^2}{(d_{k-1}^T y_{k-1})^2}\, d_{k-1}^T H_k. \end{equation} (2.1)

    In their effort to prove the convergence of the method, they showed that (1.6) is satisfied for \tau = 1 - \frac{1}{4b}, where b > \frac{1}{4}.

    Following the same argument, a modified CD parameter was proposed in [33] and is presented as:

    \begin{equation} \beta_k^{DCD} = \beta_k^{CD} - b\,\frac{\|H_k\|^2}{(H_k^T d_{k-1})^2}\, H_k^T d_{k-1}, \end{equation} (2.2)

    and the sufficient descent inequality was proved for b > \frac{1}{4}.

    Inspired by Yu and Guan's schemes, Yu et al. presented the spectral version of Eqs (2.1) and (2.2) in [34], with CG parameters given by:

    \begin{equation} \beta_k^{DSDY} = \beta_k^{SDY} - b\,\frac{\|H_k\|^2\,\delta_k}{(d_{k-1}^T y_{k-1})^2}\, d_{k-1}^T H_k, \end{equation} (2.3)

    and

    \begin{equation} \beta_k^{DSCD} = \beta_k^{SCD} - b\,\frac{\|H_k\|^2\,\delta_k}{(H_k^T d_{k-1})^2}\, H_k^T d_{k-1}, \end{equation} (2.4)

    where b > 0, but the choice of \delta_k was left as a topic for further research.

    The limitation of these four schemes (2.1)–(2.4) is that they do not satisfy the sufficient descent inequality (1.6) on the interval (0, \frac{1}{4}], and consequently they cannot attain global convergence for b \in (0, \frac{1}{4}]. Furthermore, the value of b > 0 was fixed throughout.

    Motivated by the above, and by the fact that, to the best of our knowledge, modified CD schemes are limited in the literature, we aim to fill this gap by proposing a scheme that satisfies (1.6) and achieves global convergence for b_k \in (0, \infty), where the value of b_k > 0 is not fixed.

    Consider the following:

    \begin{equation} \beta_k^{MCD} = -\xi\,\frac{\|H_k\|^2}{H_{k-1}^T s_{k-1}} - \xi b_k\,\frac{\|H_k\|^2\, H_k^T s_{k-1}}{(H_{k-1}^T s_{k-1})^2}, \end{equation} (2.5)

    where \xi > 0 and b_k > 0 is a parameter to be determined.

    By (1.4) and (2.5), we have

    \begin{equation} d_k = -H_k - \xi\,\frac{\|H_k\|^2}{H_{k-1}^T s_{k-1}}\, s_{k-1} - \xi b_k\,\frac{\|H_k\|^2\, H_k^T s_{k-1}}{(H_{k-1}^T s_{k-1})^2}\, s_{k-1}. \end{equation} (2.6)

    By Perry's idea in [35], (2.6) can be written as

    \begin{equation} d_k = -Q_k H_k. \end{equation} (2.7)

    Here, Qk is called the search direction matrix, which is expressed as

    \begin{equation} Q_k = I + \xi\,\frac{s_{k-1} H_k^T}{H_{k-1}^T s_{k-1}} + \xi b_k\,\frac{\|H_k\|^2\, s_{k-1} s_{k-1}^T}{(H_{k-1}^T s_{k-1})^2}. \end{equation} (2.8)

    Since Q_k is non-symmetric, it is symmetrized as follows:

    \begin{equation} A_k = I + \xi\,\frac{s_{k-1} H_k^T + H_k s_{k-1}^T}{H_{k-1}^T s_{k-1}} + \xi b_k\,\frac{\|H_k\|^2\, s_{k-1} s_{k-1}^T}{(H_{k-1}^T s_{k-1})^2}. \end{equation} (2.9)

    Then (2.7) becomes

    \begin{equation} d_k = -A_k H_k, \end{equation} (2.10)

    where A_k is the symmetric counterpart of Q_k in (2.8).

    Theorem 2.1. The matrix A_k defined in (2.9) has eigenvalues \lambda_k^+, \lambda_k^-, and 1 with multiplicity (n-2), where

    \begin{equation} \lambda_k^+ = \frac{1}{2}\left[\big(2 + 2\xi p_k + \xi b_k q_k\big) + \sqrt{(\xi b_k q_k + 2\xi p_k)^2 + 4\xi^2\,(q_k - p_k^2)}\right] \end{equation} (2.11)

    and

    \begin{equation} \lambda_k^- = \frac{1}{2}\left[\big(2 + 2\xi p_k + \xi b_k q_k\big) - \sqrt{(\xi b_k q_k + 2\xi p_k)^2 + 4\xi^2\,(q_k - p_k^2)}\right], \end{equation} (2.12)

    with

    \begin{equation} p_k = \frac{H_k^T s_{k-1}}{H_{k-1}^T s_{k-1}}, \qquad q_k = \frac{\|H_k\|^2\,\|s_{k-1}\|^2}{(H_{k-1}^T s_{k-1})^2}. \end{equation} (2.13)

    Moreover, the eigenvalues are real and positive.

    Proof. The mapping value H_k is nonzero unless the solution has been reached, and so is the vector s_{k-1}. Suppose the solution has not been reached; then both H_k and s_{k-1} are nonzero vectors, and H_{k-1}^T s_{k-1} \ne 0. Let V be the vector space spanned by \{H_k, s_{k-1}\}. The dimensions of V and of its orthogonal complement V^{\perp} satisfy \dim(V) \le 2 and \dim(V^{\perp}) \ge n-2, respectively.

    Let a set of mutually orthogonal vectors \{\iota_{k-1}^i\}_{i=1}^{n-2} \subset V^{\perp} satisfy

    \begin{equation} H_k^T \iota_{k-1}^i = s_{k-1}^T \iota_{k-1}^i = 0 \quad \text{for } i = 1, \ldots, n-2. \end{equation} (2.14)

    Using (2.9) and (2.14), we obtain

    \begin{equation} A_k \iota_{k-1}^i = \iota_{k-1}^i. \end{equation} (2.15)

    The above is an eigenvector equation, so each \iota_{k-1}^i, i = 1, 2, \ldots, n-2, is an eigenvector of A_k with corresponding eigenvalue 1, repeated n-2 times. Let the remaining eigenvalues be \lambda_k^+ and \lambda_k^-. Furthermore, (2.9) can also be written as

    \begin{equation} A_k = I + \xi\,\frac{s_{k-1} H_k^T}{H_{k-1}^T s_{k-1}} + \xi\left(\frac{b_k \|H_k\|^2\, s_{k-1} + (H_{k-1}^T s_{k-1})\, H_k}{(H_{k-1}^T s_{k-1})^2}\right) s_{k-1}^T. \end{equation} (2.16)

    It is clear that (2.16) is a rank-two update of the identity; therefore, by Theorem 1.2.16 of [36], the relation

    \det(I + a_1 a_2^T + a_3 a_4^T) = (1 + a_1^T a_2)(1 + a_3^T a_4) - (a_1^T a_4)(a_2^T a_3),

    with a_1 = \frac{\xi\, s_{k-1}}{H_{k-1}^T s_{k-1}}, a_2 = H_k, a_3 = \frac{\xi\big(b_k \|H_k\|^2\, s_{k-1} + (H_{k-1}^T s_{k-1})\, H_k\big)}{(H_{k-1}^T s_{k-1})^2}, and a_4 = s_{k-1}, can be used to obtain the determinant of A_k as

    \begin{equation} \det(A_k) = \left(\frac{\xi\, s_{k-1}^T H_k}{H_{k-1}^T s_{k-1}} + 1\right)^2 + \frac{\xi b_k \|H_k\|^2 \|s_{k-1}\|^2}{(H_{k-1}^T s_{k-1})^2} - \frac{\xi^2 \|H_k\|^2 \|s_{k-1}\|^2}{(H_{k-1}^T s_{k-1})^2}. \end{equation} (2.17)

    Furthermore, the trace of (2.9) equals the sum of its eigenvalues; hence we have

    \begin{equation} \mathrm{trace}(A_k) = \underbrace{1 + 1 + \cdots + 1}_{(n-2)\ \text{times}} + \lambda_k^+ + \lambda_k^- = n + \frac{2\xi\, H_k^T s_{k-1}}{H_{k-1}^T s_{k-1}} + \frac{\xi b_k \|H_k\|^2 \|s_{k-1}\|^2}{(H_{k-1}^T s_{k-1})^2}, \end{equation} (2.18)

    and consequently,

    \begin{equation} \lambda_k^+ + \lambda_k^- = 2 + \frac{2\xi\, H_k^T s_{k-1}}{H_{k-1}^T s_{k-1}} + \frac{\xi b_k \|H_k\|^2 \|s_{k-1}\|^2}{(H_{k-1}^T s_{k-1})^2}. \end{equation} (2.19)

    Then, by (2.13), (2.17), and (2.19), we obtain

    \begin{equation} \lambda_k^2 - (2 + 2\xi p_k + \xi b_k q_k)\,\lambda_k + \big((\xi p_k + 1)^2 + \xi (b_k - \xi) q_k\big) = 0. \end{equation} (2.20)

    Then, using the quadratic formula, we obtain

    \begin{equation} \lambda_k^{\pm} = \frac{1}{2}\left[(2 + 2\xi p_k + \xi b_k q_k) \pm \sqrt{(\xi b_k q_k + 2\xi p_k)^2 + 4\xi^2 (q_k - p_k^2)}\right]. \end{equation} (2.21)

    This gives (2.11) and (2.12). Next, we show that \lambda_k^+ and \lambda_k^- are real and positive. The discriminant is non-negative since q_k \ge p_k^2 by the Cauchy–Schwarz inequality, so both eigenvalues are real. For \lambda_k^+ to be positive, it suffices to have

    \begin{equation} 2 + \frac{2\xi\, H_k^T s_{k-1}}{H_{k-1}^T s_{k-1}} + \frac{\xi b_k \|H_k\|^2 \|s_{k-1}\|^2}{(H_{k-1}^T s_{k-1})^2} > 0. \end{equation} (2.22)

    \lambda_k^+ is therefore real and positive when

    \begin{equation} b_k > -\frac{2\,(H_k^T s_{k-1})(H_{k-1}^T s_{k-1})}{\|H_k\|^2 \|s_{k-1}\|^2} - \frac{2\,(H_{k-1}^T s_{k-1})^2}{\xi\, \|H_k\|^2 \|s_{k-1}\|^2}. \end{equation} (2.23)

    For \lambda_k^- to be real and positive, we require

    \begin{equation} \lambda_k^- = \frac{1}{2}\left[(2 + 2\xi p_k + \xi b_k q_k) - \sqrt{(\xi b_k q_k + 2\xi p_k)^2 + 4\xi^2 (q_k - p_k^2)}\right] > 0. \end{equation} (2.24)

    After algebraic simplification, we obtain

    \begin{equation} b_k > \xi - \left(\frac{\xi\,(H_k^T s_{k-1}) + H_{k-1}^T s_{k-1}}{\sqrt{\xi}\,\|H_k\|\,\|s_{k-1}\|}\right)^2. \end{equation} (2.25)

    Guided by (2.25), b_k can be chosen as

    \begin{equation} b_k = \xi + \varphi \left(\frac{\xi\,(H_k^T s_{k-1}) + H_{k-1}^T s_{k-1}}{\sqrt{\xi}\,\|H_k\|\,\|s_{k-1}\|}\right)^2, \end{equation} (2.26)

    with \xi > 0 and \varphi \ge 0.
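    As a sanity check on Theorem 2.1 (my own illustration, not part of the paper), the following NumPy snippet builds A_k from randomly generated vectors and compares its spectrum with the closed-form eigenvalues (2.11)–(2.13); the values of \xi and b_k are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, xi, bk = 6, 1.0, 0.7                      # xi, b_k > 0 are illustrative values
H = rng.standard_normal(n)                   # stands in for H_k
s = rng.standard_normal(n)                   # stands in for s_{k-1}
Hp = rng.standard_normal(n)                  # stands in for H_{k-1}
den = Hp @ s                                 # H_{k-1}^T s_{k-1}

# A_k of eq. (2.9)
A = (np.eye(n)
     + xi * (np.outer(s, H) + np.outer(H, s)) / den
     + xi * bk * (H @ H) * np.outer(s, s) / den**2)

p = (H @ s) / den                            # p_k of (2.13)
q = (H @ H) * (s @ s) / den**2               # q_k of (2.13)
disc = np.sqrt((xi * bk * q + 2 * xi * p)**2 + 4 * xi**2 * (q - p**2))
lam_plus = 0.5 * ((2 + 2 * xi * p + xi * bk * q) + disc)    # eq. (2.11)
lam_minus = 0.5 * ((2 + 2 * xi * p + xi * bk * q) - disc)   # eq. (2.12)

print(np.sort(np.r_[np.ones(n - 2), lam_minus, lam_plus]))  # predicted spectrum
print(np.sort(np.linalg.eigvalsh(A)))                       # actual spectrum of A_k
```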

    Here, we present the proposed direction:

    \begin{equation} d_k = \begin{cases} -H_k, & \text{if } k = 0, \\ -H_k - \xi\,\dfrac{\|H_k\|^2}{H_{k-1}^T s_{k-1}}\, s_{k-1} - \xi b_k\,\dfrac{\|H_k\|^2\,(H_k^T s_{k-1})}{(H_{k-1}^T s_{k-1})^2}\, s_{k-1}, & \text{if } H_k^T s_{k-1} > 0 \text{ and } H_{k-1}^T s_{k-1} \ge r \|H_k\| \|s_{k-1}\|,\ r > 1, \\ -H_k + \dfrac{\|H_k\|^2}{\max\{H_{k-1}^T s_{k-1},\ \gamma \|H_k\| \|s_{k-1}\|\}}\, s_{k-1}, \quad \gamma \in (0,1), & \text{otherwise.} \end{cases} \end{equation} (2.27)

    To improve its convergence attributes, b_k in (2.26) can be modified to

    \begin{equation} b_k = \xi + \varphi \left(\frac{\xi\,(H_k^T s_{k-1})}{U_k} + \frac{H_{k-1}^T s_{k-1}}{V_k}\right)^2, \qquad \xi > 0, \ \varphi \ge 0, \end{equation} (2.28)

    where U_k = \max\{\|H_k\|\|s_{k-1}\|,\ \xi \|H_{k-1}\|\|s_{k-1}\|\} and V_k = \max\{\|H_{k-1}\|\|s_{k-1}\|,\ \xi \|H_k\|\|s_{k-1}\|\}.
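    The direction (2.27) with b_k from (2.28) can be written compactly as in the sketch below. This is only an illustration: the safeguard constants r and \gamma are assumed values, while \xi = 1 and \varphi = 10^{-4} mirror the parameter choices reported later for the experiments. The final lines check the sufficient descent condition (1.6) on data constructed so that H_k^T s_{k-1} < 0, one of the cases analysed in Lemma 3.1 below.

```python
import numpy as np

def umcd_direction(H_k, H_prev, s_prev, xi=1.0, phi=1e-4, r=2.0, gamma=0.5):
    """Illustrative sketch of the proposed direction (2.27) with b_k from (2.28)."""
    nHk, nHp, ns = np.linalg.norm(H_k), np.linalg.norm(H_prev), np.linalg.norm(s_prev)
    Hk_s, Hp_s = H_k @ s_prev, H_prev @ s_prev

    if Hk_s > 0 and Hp_s >= r * nHk * ns:
        U = max(nHk * ns, xi * nHp * ns)
        V = max(nHp * ns, xi * nHk * ns)
        b_k = xi + phi * (xi * Hk_s / U + Hp_s / V) ** 2            # eq. (2.28)
        return (-H_k
                - xi * nHk**2 / Hp_s * s_prev
                - xi * b_k * nHk**2 * Hk_s / Hp_s**2 * s_prev)      # middle case of (2.27)
    # safeguarded case of (2.27)
    return -H_k + nHk**2 / max(Hp_s, gamma * nHk * ns) * s_prev

# check of the descent condition (1.6) with tau = 1, using H_k^T s_{k-1} < 0
rng = np.random.default_rng(1)
H_k = rng.standard_normal(5)
H_prev = rng.standard_normal(5)
s_prev = -H_k                                  # ensures H_k^T s_{k-1} < 0
d_k = umcd_direction(H_k, H_prev, s_prev)
print(H_k @ d_k <= -np.linalg.norm(H_k) ** 2 + 1e-12)   # True
```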

    Before presenting the proposed algorithm, we define the projection mapping.

    Definition 2.1. Let \psi \subseteq \mathbb{R}^n be a nonempty, closed, and convex set. For x \in \mathbb{R}^n, the projection of x onto \psi is given by

    \begin{equation} P_{\psi}(x) = \arg\min\{\|x - y\| : y \in \psi\}. \end{equation} (2.29)

    Equation (2.29) is called a projection mapping. The projection P_{\psi} is nonexpansive, namely,

    \begin{equation} \|P_{\psi}(x) - P_{\psi}(y)\| \le \|x - y\|, \quad \forall\, x, y \in \mathbb{R}^n, \end{equation} (2.30)

    and consequently,

    \begin{equation} \|P_{\psi}(x) - y\| \le \|x - y\|, \quad \forall\, y \in \psi. \end{equation} (2.31)
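    For the simple constraint sets used later (e.g., \psi = \mathbb{R}^n_+ or box-like sets), the projection (2.29) has a closed form; the sketch below is illustrative, and for more general convex sets (such as \{x : \sum_i x_i \le n,\ x_i \ge -1\}) P_{\psi} would instead require solving the small optimization problem in (2.29).

```python
import numpy as np

def project_nonneg(x):
    """Projection (2.29) onto psi = R^n_+ : componentwise maximum with zero."""
    return np.maximum(x, 0.0)

def project_box(x, lo, hi):
    """Projection (2.29) onto the box [lo, hi]^n : componentwise clipping."""
    return np.clip(x, lo, hi)

x = np.array([-1.5, 0.3, 2.0])
print(project_nonneg(x))           # [0.   0.3  2. ]
print(project_box(x, -1.0, 1.0))   # [-1.   0.3  1. ]
```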

    Algorithm 1 (UMCD): an improved CD iterative scheme.
       Input: Given x_0 \in \psi, d_0 = -H_0, \rho \in (0,1), \sigma > 0, \epsilon = 10^{-6}, and \zeta > 0; set k = 0.
       Step i: Compute H_k. If \|H_k\| \le \epsilon, stop; otherwise, go to Step ii.
       Step ii: Let \alpha_k = \zeta \rho^{m_k}, where m_k is the least non-negative integer m such that
    -H(x_k + \alpha_k d_k)^T d_k \ge \sigma \alpha_k \|H(x_k + \alpha_k d_k)\|\, \|d_k\|^2. (2.32)
       Step iii: Set
    z_k = x_k + \alpha_k d_k. (2.33)
       Step iv: If z_k \in \psi and \|H(z_k)\| \le \epsilon, stop; else, go to Step v.
       Step v: Compute x_{k+1} by
    x_{k+1} = P_{\psi}\big[x_k - \lambda_k H(z_k)\big], (2.34)
       where \lambda_k = \frac{(x_k - z_k)^T H(z_k)}{\|H(z_k)\|^2}.
       Step vi: Compute the new direction d_{k+1} using (2.27) and (2.28).
       Step vii: Set k = k + 1 and go to Step ii.
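    A compact NumPy sketch of the above steps is given below. It is an illustration under assumed choices (the safeguard constants r and \gamma, the simple backtracking loop, and the helper names are not from the paper), with \xi = 1, \sigma = \varphi = 10^{-4}, and \rho = \zeta = 0.9 mirroring the parameter values reported for the experiments.

```python
import numpy as np

def umcd(H, project, x0, xi=1.0, phi=1e-4, sigma=1e-4, rho=0.9, zeta=0.9,
         r=2.0, gamma=0.5, eps=1e-6, max_iter=2000):
    """Illustrative sketch of the UMCD iteration (Steps i-vii).

    H : callable returning H(x);  project : projection P_psi onto the set psi.
    """
    x = np.asarray(x0, dtype=float)
    Hx = H(x)
    d = -Hx                                              # d_0 = -H_0
    for _ in range(max_iter):
        if np.linalg.norm(Hx) <= eps:                    # Step i
            return x
        alpha = zeta                                     # Step ii: backtracking on (2.32)
        Ht = H(x + alpha * d)
        while alpha > 1e-16 and -(Ht @ d) < sigma * alpha * np.linalg.norm(Ht) * (d @ d):
            alpha *= rho
            Ht = H(x + alpha * d)
        z = x + alpha * d                                # Step iii, eq. (2.33)
        Hz = H(z)
        if np.linalg.norm(Hz) <= eps:                    # Step iv
            return z
        lam = ((x - z) @ Hz) / (Hz @ Hz)                 # Step v
        x_new = project(x - lam * Hz)                    # eq. (2.34)
        H_new = H(x_new)
        s = x_new - x                                    # s_k = x_{k+1} - x_k
        # Step vi: next direction from (2.27) with b_k from (2.28)
        nH, nHp, ns = np.linalg.norm(H_new), np.linalg.norm(Hx), np.linalg.norm(s)
        Hs, Hps = H_new @ s, Hx @ s
        if Hs > 0 and Hps >= r * nH * ns:
            U, V = max(nH * ns, xi * nHp * ns), max(nHp * ns, xi * nH * ns)
            bk = xi + phi * (xi * Hs / U + Hps / V) ** 2
            d = -H_new - xi * nH**2 / Hps * s - xi * bk * nH**2 * Hs / Hps**2 * s
        else:
            denom = max(Hps, gamma * nH * ns)
            d = -H_new + nH**2 / denom * s if denom > 0 else -H_new
        x, Hx = x_new, H_new                             # Step vii
    return x

# Example: the map H_i(x) = e^{x_i} - 1 (test problem T5 below) on psi = R^n_+
sol = umcd(lambda x: np.exp(x) - 1.0, lambda x: np.maximum(x, 0.0),
           x0=0.5 * np.ones(100))
print(np.linalg.norm(np.exp(sol) - 1.0))                 # small residual norm
```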

    This section presents the convergence analysis of the proposed UMCD algorithm under the following assumptions.

    Assumption 3.1. The map H is Lipschitz continuous, i.e., there is a constant L > 0 such that

    \begin{equation} \|H(x) - H(y)\| \le L \|x - y\| \quad \text{for all } x, y \in \mathbb{R}^n. \end{equation} (3.1)

    Assumption 3.2. H is strongly monotone, i.e., there exists c > 0 such that

    \begin{equation} (x - y)^T \big(H(x) - H(y)\big) \ge c \|x - y\|^2 \quad \text{for all } x, y \in \mathbb{R}^n. \end{equation} (3.2)

    Assumption 3.3. The solution set \bar{S} of the system (1.1) is nonempty.

    Lemma 3.1. Let \xi > 0, b_k > 0, and \varphi \ge 0. Then, for all k \ge 0, the sequence of search directions \{d_k\} generated by (2.27) with (2.28) satisfies (1.6) with \tau = 1.

    Proof. From (2.27), it is clear that for k = 0, H_0^T d_0 = -\|H_0\|^2, which satisfies (1.6) with \tau = 1.

    Case I. For k \ge 1, if H_k^T s_{k-1} > 0 and H_{k-1}^T s_{k-1} \ge r \|H_k\| \|s_{k-1}\|, then

    d_k^T H_k = -\|H_k\|^2 - \xi\,\frac{\|H_k\|^2 (H_k^T s_{k-1})}{H_{k-1}^T s_{k-1}} - \xi b_k\,\frac{\|H_k\|^2 (H_k^T s_{k-1})^2}{(H_{k-1}^T s_{k-1})^2} \le -\|H_k\|^2.

    Case II. If H_k^T s_{k-1} = 0, then

    d_k^T H_k = -\|H_k\|^2 + \frac{\|H_k\|^2 (H_k^T s_{k-1})}{\max\{H_{k-1}^T s_{k-1},\ \gamma \|H_k\| \|s_{k-1}\|\}} = -\|H_k\|^2.

    Case III. If H_k^T s_{k-1} < 0, then

    d_k^T H_k = -\|H_k\|^2 + \frac{\|H_k\|^2 (H_k^T s_{k-1})}{\max\{H_{k-1}^T s_{k-1},\ \gamma \|H_k\| \|s_{k-1}\|\}} \le -\|H_k\|^2,

    since the second term is negative. Hence the required result.

    Lemma 3.2. Let the above assumptions hold and let \{x_k\}, \{z_k\} be the sequences generated by the UMCD algorithm. Then \{x_k\} and \{z_k\} are bounded, and

    \begin{equation} \|d_k\| \le M, \end{equation} (3.3)
    \begin{equation} \lim\limits_{k \to \infty} \|x_k - z_k\| = 0, \end{equation} (3.4)
    \begin{equation} \lim\limits_{k \to \infty} \|x_{k+1} - x_k\| = 0. \end{equation} (3.5)

    Proof. Let \bar{x} \in \bar{S} be a solution of the system (1.1); then, by the monotonicity of H, we obtain

    \begin{equation} (x_k - \bar{x})^T H(z_k) \ge (x_k - z_k)^T H(z_k). \end{equation} (3.6)

    Then, using the line search condition (2.32) and the definition of z_k, it follows that

    \begin{equation} (x_k - z_k)^T H(z_k) \ge \sigma \alpha_k^2 \|H(z_k)\|\, \|d_k\|^2 > 0. \end{equation} (3.7)

    Also, from (2.34) and (2.31), we have

    \begin{equation} \|x_{k+1} - \bar{x}\|^2 = \|P_{\psi}[x_k - \lambda_k H(z_k)] - \bar{x}\|^2 \le \|x_k - \lambda_k H(z_k) - \bar{x}\|^2 = \|x_k - \bar{x}\|^2 - 2\lambda_k (x_k - \bar{x})^T H(z_k) + \lambda_k^2 \|H(z_k)\|^2 \le \|x_k - \bar{x}\|^2 - 2\lambda_k (x_k - z_k)^T H(z_k) + \lambda_k^2 \|H(z_k)\|^2 = \|x_k - \bar{x}\|^2 - \frac{\big((x_k - z_k)^T H(z_k)\big)^2}{\|H(z_k)\|^2} \le \|x_k - \bar{x}\|^2. \end{equation} (3.8)

    Hence,

    \begin{equation} \|x_{k+1} - \bar{x}\| \le \|x_k - \bar{x}\|. \end{equation} (3.9)

    Repeating this argument, \|x_k - \bar{x}\| \le \|x_0 - \bar{x}\| for all k. This means that \{\|x_k - \bar{x}\|\} is non-increasing and \{x_k\} is bounded. Furthermore, by Assumption 3.1 and (3.9), we get

    \begin{equation} \|H_k\| = \|H_k - H(\bar{x})\| \le L \|x_k - \bar{x}\| \le L \|x_0 - \bar{x}\|. \end{equation} (3.10)

    Hence

    \begin{equation} \|H_k\| \le N_1 \quad \text{for } N_1 = L \|x_0 - \bar{x}\|. \end{equation} (3.11)

    Then, from (2.28),

    \begin{equation} |b_k| = \left|\xi + \varphi\left(\frac{\xi (H_k^T s_{k-1})}{U_k} + \frac{H_{k-1}^T s_{k-1}}{V_k}\right)^2\right| \le |\xi| + |\varphi|\left(\frac{\xi \|H_k\| \|s_{k-1}\|}{U_k} + \frac{\|H_{k-1}\| \|s_{k-1}\|}{V_k}\right)^2 \le \xi + |\varphi|\left(\frac{\xi \|H_k\| \|s_{k-1}\|}{\|H_k\| \|s_{k-1}\|} + \frac{\|H_{k-1}\| \|s_{k-1}\|}{\|H_{k-1}\| \|s_{k-1}\|}\right)^2 = \xi + |\varphi|(\xi + 1)^2. \end{equation} (3.12)

    This implies

    \begin{equation} |b_k| \le \lambda \quad \text{for } \lambda = \xi + |\varphi|(\xi + 1)^2. \end{equation} (3.13)

    From (2.27), and using (2.28), (3.11), and (3.13), we have the following. If H_k^T s_{k-1} > 0 and H_{k-1}^T s_{k-1} \ge r \|H_k\| \|s_{k-1}\|, then

    \begin{equation} \|d_k\| = \left\| -H_k - \left(\xi\,\frac{\|H_k\|^2}{H_{k-1}^T s_{k-1}} + \xi b_k\,\frac{\|H_k\|^2 (H_k^T s_{k-1})}{(H_{k-1}^T s_{k-1})^2}\right) s_{k-1} \right\| \le \|H_k\| + \xi\,\frac{\|H_k\|^2 \|s_{k-1}\|}{H_{k-1}^T s_{k-1}} + \xi b_k\,\frac{\|H_k\|^3 \|s_{k-1}\|^2}{(H_{k-1}^T s_{k-1})^2} \le \|H_k\| + \xi\,\frac{\|H_k\|^2 \|s_{k-1}\|}{r \|H_k\| \|s_{k-1}\|} + \xi b_k\,\frac{\|H_k\|^3 \|s_{k-1}\|^2}{r^2 \|H_k\|^2 \|s_{k-1}\|^2} \le \|H_k\| + \frac{\xi \|H_k\|}{r} + \frac{\xi \lambda \|H_k\|}{r^2} \le N_1 + \frac{\xi N_1}{r} + \frac{\xi \lambda N_1}{r^2}. \end{equation} (3.14)

    Let M_1 = N_1 + \frac{\xi N_1}{r} + \frac{\xi \lambda N_1}{r^2}.

    Otherwise,

    \begin{equation} \|d_k\| = \left\| -H_k + \frac{\|H_k\|^2}{\max\{H_{k-1}^T s_{k-1},\ \gamma \|H_k\| \|s_{k-1}\|\}}\, s_{k-1} \right\| \le \|H_k\| + \frac{\|H_k\|^2 \|s_{k-1}\|}{\gamma \|H_k\| \|s_{k-1}\|} \le N_1 + \frac{N_1}{\gamma}. \end{equation} (3.15)

    Let M_2 = N_1 + \frac{N_1}{\gamma} and take M = \max\{M_1, M_2\}; this proves (3.3).

    Since the sequences \{d_k\} and \{x_k\} are bounded, Eq (2.33) shows that \{z_k\} is also bounded. Using the same argument as in (3.10), it follows that

    \begin{equation} \|H(z_k)\| \ \text{is bounded above by a positive constant.} \end{equation} (3.16)

    Equation (3.8) shows that

    \begin{equation} \big((x_k - z_k)^T H(z_k)\big)^2 \le \|H(z_k)\|^2 \left(\|x_k - \bar{x}\|^2 - \|x_{k+1} - \bar{x}\|^2\right). \end{equation} (3.17)

    From (2.32), we obtain

    \begin{equation} \sigma^2 \alpha_k^4 \|H(z_k)\|^2 \|d_k\|^4 \le \alpha_k^2 \big(H(z_k)^T d_k\big)^2. \end{equation} (3.18)

    By (3.17) and (3.18), we have

    \begin{equation} \sigma^2 \alpha_k^4 \|H(z_k)\|^2 \|d_k\|^4 \le \|H(z_k)\|^2 \left(\|x_k - \bar{x}\|^2 - \|x_{k+1} - \bar{x}\|^2\right). \end{equation} (3.19)

    Since \{\|x_k - \bar{x}\|\} converges and \{\|H(z_k)\|\} is bounded, taking the limit in (3.19) gives

    \begin{equation} \sigma^2 \lim\limits_{k \to \infty} \alpha_k^4 \|d_k\|^4 = 0, \end{equation} (3.20)

    and hence

    \begin{equation} \lim\limits_{k \to \infty} \alpha_k \|d_k\| = 0. \end{equation} (3.21)

    Then (3.21) and (2.33) indicate that (3.4) holds. Moreover, from the definition of \lambda_k and (2.31),

    \begin{equation} \|x_{k+1} - x_k\| = \big\|P_{\psi}[x_k - \lambda_k H(z_k)] - x_k\big\| \le \|\lambda_k H(z_k)\| \le \|x_k - z_k\|. \end{equation} (3.22)

    This proves (3.5).

    Theorem 3.1. Let Assumptions 3.1–3.3 hold and let the sequence \{x_k\} be generated by the UMCD algorithm; then

    \begin{equation} \liminf\limits_{k \to \infty} \|H_k\| = 0. \end{equation} (3.23)

    Proof. We argue by contradiction: suppose (3.23) is not true; then there exists \epsilon_0 > 0 such that

    \begin{equation} \|H_k\| \ge \epsilon_0 \quad \text{for all } k \ge 0. \end{equation} (3.24)

    Since d_k \ne 0 unless H_k = 0, there exists \delta_1 > 0 such that

    \begin{equation} \|d_k\| \ge \delta_1. \end{equation} (3.25)

    If \alpha_k = \zeta, then \alpha_k \|d_k\| \ge \zeta \delta_1 > 0, which already contradicts (3.21); hence we may assume \alpha_k \ne \zeta. In that case, by the line search rule, \rho^{-1}\alpha_k does not satisfy (2.32), i.e.,

    \begin{equation} -H(x_k + \rho^{-1}\alpha_k d_k)^T d_k < \sigma \rho^{-1} \alpha_k \|H(x_k + \rho^{-1}\alpha_k d_k)\|\, \|d_k\|^2. \end{equation} (3.26)

    Using (3.26), (1.6), and Assumption 3.1, it follows that

    \begin{equation} \tau \|H_k\|^2 \le -H_k^T d_k = \big(H(x_k + \rho^{-1}\alpha_k d_k) - H_k\big)^T d_k - H(x_k + \rho^{-1}\alpha_k d_k)^T d_k \le L \rho^{-1} \alpha_k \|d_k\|^2 + \sigma \rho^{-1} \alpha_k \|H(x_k + \rho^{-1}\alpha_k d_k)\|\, \|d_k\|^2 \le \alpha_k \|d_k\| (L + \sigma N) \rho^{-1} \|d_k\|, \end{equation} (3.27)

    where N > 0 is an upper bound of \|H(x_k + \rho^{-1}\alpha_k d_k)\|, which exists since \{x_k\} and \{d_k\} are bounded and \alpha_k \le \zeta. From (3.27) we obtain

    \begin{equation} \alpha_k \|d_k\| \ge \frac{\rho}{L + \sigma N}\,\frac{\tau \|H_k\|^2}{\|d_k\|} \ge \frac{\rho}{L + \sigma N}\,\frac{\tau \epsilon_0^2}{M}. \end{equation} (3.28)

    Equation (3.28) contradicts (3.21). Hence, (3.23) is true. This completes the proof.

    This section is divided into two parts. In the first part, the numerical results of the proposed algorithm are presented. To measure the strength of the proposed algorithm (UMCD), the following algorithms were used for comparison:

    ● MDDYM (2021) algorithm proposed in [37].

    ● DTCG1 (2020) algorithm proposed in [6].

    ● ZANG (2023) algorithm proposed in [38].

    ● MCHCG (2024) algorithm proposed in [39].

    In the second part of this study, we extend the UMCD algorithm to explore its effectiveness in compressive sensing experiments. To rigorously assess the performance of our proposed algorithm, we conduct a signal recovery experiment in conjunction with the CHCG algorithm [2]. The implementation is executed using MATLAB version 8.3.0 (R2014a) on a laptop equipped with a 1.80 GHz processor (CPU) and 8 GB of RAM.

    For the UMCD algorithm, we selected the following key parameters: \xi = 1, \sigma = \varphi = 10^{-4}, and \rho = \zeta = 0.9. The parameters for the other algorithms are taken from their respective papers. This structured approach allows us to draw meaningful comparisons and insights regarding the performance of the proposed algorithm.

    The iterations of all five algorithms were configured to stop when either \|H_k\| \le 10^{-6} or \|H(z_k)\| \le 10^{-6} is reached, or when the number of iterations exceeds 2000 without meeting the stopping criterion. Moreover, the symbol '–' indicates a failure during the iterative process.

    The numerical experiments for the five algorithms were carried out using eight (8) different initial points and twelve (12) test problems (T1–T12).

    x1=(0.01,0.01,...,0.01)T, x2=(0.25,0.25,...,0.25)T, x3=(0.4,0.4,...,0.4)T, x4=(0.5,0.5,...,0.5)T, x5=(1.25,1.25,...,1.25)T, x6=(0.3,0.3,...,0.3)T, x7=(1,1,...,1)T, and x8=(0.1,0.1,...,0.1)T.
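    For reference, these eight starting points can be generated as rows of a single array; the helper below is only an illustration.

```python
import numpy as np

def initial_points(n):
    """The starting points x1-x8 listed above, one per row, for dimension n."""
    values = [0.01, 0.25, 0.4, 0.5, 1.25, 0.3, 1.0, 0.1]
    return np.array([np.full(n, v) for v in values])

print(initial_points(4))
```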

    T1 [6]

    H_1(x) = e^{x_1} - 1,

    H_i(x) = e^{x_i} + x_{i-1} - 1, \quad i = 2, 3, \ldots, n,

    where \psi = \mathbb{R}^n_+.

    T2 [5]

    H_i(x) = \log(x_i + 1) - \frac{x_i}{n}, \quad i = 1, 2, \ldots, n,

    where \psi = \{x \in \mathbb{R}^n : \sum_{i=1}^n x_i \le n,\ x_i > -1,\ i = 1, 2, 3, \ldots, n\}.

    T3 [37]

    H_i(x) = 2x_i - \sin|x_i|, \quad i = 1, 2, 3, \ldots, n,

    where \psi = \mathbb{R}^n_+.

    T4 [5]

    H_i(x) = \cos x_i + x_i - 1, \quad i = 1, 2, 3, \ldots, n,

    where \psi = \mathbb{R}^n_+.

    T5 [6]

    H_i(x) = e^{x_i} - 1, \quad i = 1, 2, 3, \ldots, n,

    where \psi = \mathbb{R}^n_+.

    T6 [37]

    H_1(x) = 2x_1 - x_2 + e^{x_1} - 1,

    H_i(x) = -x_{i-1} + 2x_i - x_{i+1} + e^{x_i} - 1, \quad i = 2, 3, \ldots, n-1,

    H_n(x) = -x_{n-1} + 2x_n + e^{x_n} - 1,

    where \psi = \{x \in \mathbb{R}^n : \sum_{i=1}^n x_i \le n,\ x_i \ge 0,\ i = 1, 2, 3, \ldots, n\}.

    T7 [6]

    H_1(x) = x_1 - e^{\cos(b(x_1 + x_2))},

    H_i(x) = x_i - e^{\cos(b(x_{i-1} + x_i + x_{i+1}))}, \quad i = 2, 3, \ldots, n-1,

    H_n(x) = x_n - e^{\cos(b(x_{n-1} + x_n))},

    where b = \frac{1}{n+1} and \psi = \mathbb{R}^n_+.

    T8 [37]

    H_i(x) = x_i - \sin|x_i - 1|, \quad i = 1, 2, \ldots, n,

    where \psi = \{x \in \mathbb{R}^n : \sum_{i=1}^n x_i \le n,\ x_i \ge -1,\ i = 1, 2, 3, \ldots, n\}.

    T9 [40]

    H_i(x) = e^{x_i^2} + \frac{3}{2}\sin(2x_i) - 1, \quad i = 1, 2, 3, \ldots, n,

    where \psi = \mathbb{R}^n_+.

    T10

    H_1(x) = \cos x_1 - 9 + 3x_1 + 8e^{x_2},

    H_i(x) = \cos x_i - 9 + 3x_i + 8e^{x_{i-1}}, \quad i = 2, 3, \ldots, n,

    where \psi = \mathbb{R}^n_+.

    T11 Modification of T1 in [41]

    H_1(x) = e^{\sin x_1} - 1,

    H_i(x) = e^{\sin x_i} + x_{i-1} - 1, \quad i = 2, 3, \ldots, n,

    where \psi = \mathbb{R}^n_+.

    T12 Modification of T3 in [41]

    H_i(x) = 3x_i - \sin x_i, \quad i = 1, 2, 3, \ldots, n,

    where \psi = \mathbb{R}^n_+.
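    As an illustration (assuming the reconstructed formulas above), two of the problems posed on the non-negative orthant can be coded directly; x^* = 0 solves both.

```python
import numpy as np

def T3(x):    # H_i(x) = 2 x_i - sin|x_i|,  psi = R^n_+
    return 2.0 * x - np.sin(np.abs(x))

def T5(x):    # H_i(x) = e^{x_i} - 1,  psi = R^n_+
    return np.exp(x) - 1.0

x2 = np.full(1000, 0.25)                     # the initial point x2 listed above
print(np.linalg.norm(T3(x2)), np.linalg.norm(T5(x2)))
print(np.allclose(T3(np.zeros(3)), 0.0), np.allclose(T5(np.zeros(3)), 0.0))   # x* = 0
```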

    Tables 1–5 display the performance of the proposed algorithm (UMCD) in comparison with recent CG algorithms: MDDYM [37] and DTCG1 [6]. In the tables, the terms *IP*, *Iter*, and *Fval* refer to the initial point, the number of iterations, and the number of function evaluations, respectively. Additionally, *Time(s)* and \|H_k\| denote the CPU time in seconds and the norm of the residual at termination, respectively.

    Table 1.  Numerical results of the UMCD, MDDYM, and DTCG1 methods for Problems 1 and 2.
    Problem Dim IP UMCD MDDYM DTCG1
    Iter Fval Time(s) Hk Iter Fval Time(s) Hk Iter Fval Time(s) Hk
    P1 100 x1 15 16 0.03303 0.00E+00 15 47 0.05952 8.76E-07 1001 1050 0.29023 6.55E+04
    x2 4 5 0.00783 0.00E+00 12 37 0.01407 5.31E-07 1001 1116 0.32494 6.55E+04
    x3 4 5 0.01144 0.00E+00 14 44 0.01315 2.00E-07 37 181 0.05921 6.96E-07
    x4 2 3 0.00478 0.00E+00 14 46 0.02308 7.27E-07 18 133 0.04298 0.00E+00
    x5 4 5 0.01512 0.00E+00 16 51 0.01617 7.30E-07 29 321 0.04569 0.00E+00
    x6 4 5 0.01100 0.00E+00 13 43 0.01019 4.74E-10 29 318 0.06099 0.00E+00
    x7 4 5 0.01278 0.00E+00 11 33 0.02672 3.35E-08 1001 1028 0.29032 6.55E+04
    x8 13 14 0.02215 0.00E+00 18 58 0.01440 6.38E-07 8 46 0.01104 0.00E+00
    10,000 x1 4 5 0.05099 0.00E+00 14 43 0.12490 8.65E-07 103 1155 1.27990 0.00E+00
    x2 18 19 0.21214 0.00E+00 13 42 0.06798 1.72E-07 32 284 0.30602 0.00E+00
    x3 4 5 0.07857 0.00E+00 6 15 0.03300 4.90E-07 16 122 0.25066 0.00E+00
    x4 4 5 0.04852 0.00E+00 6 15 0.03077 1.65E-07 92 1015 1.71821 0.00E+00
    x5 9 10 0.11444 0.00E+00 11 33 0.06174 2.99E-07 25 45 0.15900 9.34E-07
    x6 5 6 0.06893 0.00E+00 10 28 0.05002 2.13E-07 92 1030 1.19462 0.00E+00
    x7 14 15 0.17102 0.00E+00 8 23 0.06200 8.54E-07 9 60 0.13710 0.00E+00
    x8 14 15 0.16226 0.00E+00 17 61 0.11029 9.75E-07 1001 1082 2.36323 6.55E+04
    100,000 x1 6 7 0.50356 0.00E+00 16 46 0.63699 4.68E-07 30 199 1.74338 0.00E+00
    x2 19 20 1.70708 0.00E+00 10 27 0.36805 1.28E-08 28 286 2.20160 0.00E+00
    x3 6 7 0.51893 0.00E+00 7 17 0.29252 1.13E-07 101 1163 8.55412 0.00E+00
    x4 4 5 0.34368 0.00E+00 6 15 0.26364 9.96E-09 26 43 0.63220 8.31E-07
    x5 11 12 0.97077 0.00E+00 10 31 0.41796 5.01E-07 29 72 0.86051 7.39E-07
    x6 11 12 1.01489 0.00E+00 8 21 0.30025 4.92E-08 27 68 0.82920 5.14E-07
    x7 18 19 1.65894 0.00E+00 8 23 0.41963 9.48E-08 29 71 0.84210 5.86E-07
    x8 15 16 1.34973 0.00E+00 11 34 0.42575 7.74E-07 9 58 0.52940 0.00E+00
    P2 100 x1 2 4 0.00584 9.83E-07 5 11 0.01489 8.95E-08 18 20 0.01423 6.20E-07
    x2 4 6 0.00556 3.31E-07 7 15 0.01311 1.74E-07 23 25 0.01436 6.10E-07
    x3 5 7 0.00835 4.29E-07 6 12 0.00810 9.99E-07 24 26 0.01319 5.57E-07
    x4 5 7 0.00640 9.76E-07 14 38 0.04578 9.56E-07 24 26 0.04415 7.58E-07
    x5 6 8 0.00936 1.29E-07 9 18 0.02229 1.10E-07 26 28 0.01594 8.44E-07
    x6 6 8 0.01050 9.05E-07 10 26 0.00969 8.72E-07 23 25 0.01364 7.66E-07
    x7 5 7 0.00931 7.25E-07 14 30 0.03262 7.75E-07 26 28 0.03901 5.62E-07
    x8 4 6 0.00662 9.39E-07 9 25 0.01071 5.40E-08 21 23 0.02760 8.49E-07
    10,000 x1 3 5 0.02577 2.11E-07 5 11 0.03185 2.71E-07 19 21 0.08529 9.65E-07
    x2 4 6 0.02683 9.71E-07 7 15 0.04860 5.14E-07 24 26 0.11387 9.45E-07
    x3 6 8 0.03915 2.67E-08 7 13 0.03646 8.67E-08 25 27 0.09513 8.62E-07
    x4 6 8 0.03951 1.19E-07 14 38 0.08905 1.68E-07 26 28 0.10267 5.86E-07
    x5 6 8 0.04087 3.76E-07 9 18 0.07042 4.65E-07 28 30 0.18724 6.52E-07
    x6 7 9 0.05484 2.58E-07 11 27 0.07024 7.88E-08 25 27 0.14486 5.93E-07
    x7 6 8 0.03782 1.59E-07 15 32 0.08824 3.92E-09 27 29 0.11938 8.68E-07
    x8 5 7 0.03146 1.62E-07 12 32 0.07698 5.56E-08 23 25 0.12050 6.59E-07
    100,000 x1 3 5 0.15491 6.43E-07 5 11 0.21010 8.52E-07 21 23 0.53615 7.62E-07
    x2 5 7 0.24336 1.95E-08 8 16 0.35311 2.72E-08 26 28 0.66623 7.46E-07
    x3 6 8 0.25928 8.34E-08 7 13 0.32370 2.70E-07 27 29 0.69745 6.80E-07
    x4 6 8 0.26963 3.73E-07 17 45 0.72980 5.08E-08 27 29 0.72102 9.25E-07
    x5 7 9 0.29765 3.60E-08 10 19 0.37575 3.93E-08 30 32 0.78366 5.14E-07
    x6 7 9 0.35906 8.12E-07 11 27 0.47475 2.47E-07 26 28 0.66078 9.36E-07
    x7 6 8 0.25959 4.99E-07 15 32 0.57791 2.94E-08 29 31 0.76046 6.85E-07
    x8 5 7 0.21840 5.11E-07 12 32 0.56030 1.73E-07 25 27 0.64154 5.20E-07

    Table 2.  Numerical results of the UMCD, MDDYM, and DTCG1 methods for Problems 3 and 4.
    Problem Dim IP UMCD MDDYM DTCG1
    Iter Fval Time(s) Hk Iter Fval Time(s) Hk Iter Fval Time(s) Hk
    P3 100 x1 1 3 0.00405 5.21E-07 5 11 0.00649 7.68E-08 18 20 0.00967 6.03E-07
    x2 4 6 0.00493 6.00E-07 4 6 0.00522 5.22E-08 22 24 0.01377 9.29E-07
    x3 5 7 0.00535 8.96E-08 4 6 0.00741 1.27E-07 23 25 0.02699 7.28E-07
    x4 5 7 0.00573 2.23E-07 4 6 0.00531 2.71E-08 23 25 0.01227 8.92E-07
    x5 4 6 0.00651 2.15E-07 9 21 0.00871 3.02E-07 24 26 0.01291 8.51E-07
    x6 5 7 0.00544 1.86E-08 4 6 0.00515 9.44E-08 23 25 0.01735 5.54E-07
    x7 5 7 0.00836 4.54E-07 9 23 0.01367 4.88E-07 24 26 0.01546 7.63E-07
    x8 3 5 0.00551 8.29E-08 3 5 0.00485 3.80E-09 21 23 0.01426 7.52E-07
    10,000 x1 2 4 0.01347 1.65E-07 5 11 0.02473 2.43E-07 19 21 0.10561 9.54E-07
    x2 5 7 0.02757 1.93E-08 4 6 0.02067 1.65E-07 24 26 0.06887 7.35E-07
    x3 5 7 0.02884 2.83E-07 4 6 0.01880 4.02E-07 25 27 0.07398 5.75E-07
    x4 5 7 0.02524 7.06E-07 4 6 0.01674 8.56E-08 25 27 0.11716 7.05E-07
    x5 4 6 0.02281 6.79E-07 9 21 0.05192 9.55E-07 26 28 0.08110 6.73E-07
    x6 5 7 0.02662 5.89E-08 4 6 0.01662 2.99E-07 24 26 0.06970 8.76E-07
    x7 6 8 0.03420 8.57E-08 10 24 0.06219 5.62E-08 26 28 0.08495 6.03E-07
    x8 3 5 0.01816 2.62E-07 3 5 0.02704 1.20E-08 23 25 0.06769 5.95E-07
    100,000 x1 2 4 0.09912 5.21E-07 5 11 0.17670 7.68E-07 21 23 0.42251 7.54E-07
    x2 5 7 0.18389 6.11E-08 4 6 0.11846 5.22E-07 26 28 0.51949 5.81E-07
    x3 5 7 0.19643 8.96E-07 5 7 0.13611 3.34E-08 26 28 0.65490 9.10E-07
    x4 6 8 0.20836 1.35E-07 4 6 0.14656 2.71E-07 27 29 0.58802 5.58E-07
    x5 5 7 0.19077 5.24E-09 10 22 0.35069 9.62E-08 28 30 0.61661 5.32E-07
    x6 5 7 0.18573 1.86E-07 4 6 0.11910 9.44E-07 26 28 0.52935 6.93E-07
    x7 6 8 0.22130 2.71E-07 10 24 0.32169 1.78E-07 27 29 0.54962 9.53E-07
    x8 3 5 0.12949 8.29E-07 3 5 0.12552 3.80E-08 24 26 0.48600 9.40E-07
    P4 100 x1 2 4 0.00421 6.79E-07 5 11 0.01502 8.51E-08 18 20 0.01046 6.09E-07
    x2 6 8 0.00636 7.19E-07 10 26 0.01235 3.96E-07 23 25 0.01253 6.12E-07
    x3 5 7 0.00568 5.71E-07 8 16 0.01254 1.54E-07 24 26 0.01194 5.79E-07
    x4 5 7 0.00797 2.11E-08 14 32 0.01171 9.25E-08 24 26 0.01520 8.15E-07
    x5 5 7 0.00859 8.54E-07 13 29 0.01011 3.14E-08 27 29 0.01864 7.33E-07
    x6 5 7 0.00707 2.46E-07 6 12 0.00468 1.16E-07 23 25 0.01120 7.76E-07
    x7 6 8 0.00707 2.00E-08 13 30 0.01238 5.36E-08 26 28 0.01676 7.95E-07
    x8 4 6 0.00606 9.67E-07 10 26 0.01546 6.07E-08 21 23 0.01189 8.35E-07
    10,000 x1 3 5 0.01823 2.05E-07 5 11 0.02307 2.69E-07 19 21 0.05790 9.63E-07
    x2 7 9 0.04195 2.07E-07 11 27 0.05787 9.78E-09 24 26 0.07274 9.67E-07
    x3 6 8 0.03112 1.03E-07 8 16 0.08280 4.87E-07 25 27 0.07898 9.16E-07
    x4 5 7 0.02690 6.67E-08 14 32 0.06476 2.93E-07 26 28 0.07790 6.44E-07
    x5 6 8 0.03229 2.15E-07 13 29 0.07800 9.93E-08 29 31 0.08593 5.79E-07
    x6 5 7 0.02941 7.78E-07 6 12 0.02703 3.67E-07 25 27 0.07716 6.13E-07
    x7 6 8 0.03457 6.32E-08 13 30 0.08342 1.70E-07 28 30 0.08366 6.28E-07
    x8 5 7 0.02739 1.69E-07 10 26 0.09130 1.92E-07 23 25 0.07009 6.60E-07
    100,000 x1 3 5 0.13002 6.49E-07 5 11 0.17798 8.51E-07 21 23 0.46367 7.62E-07
    x2 7 9 0.28717 6.55E-07 11 27 0.40900 3.09E-08 26 28 0.52167 7.65E-07
    x3 6 8 0.22485 3.27E-07 9 17 0.27010 2.59E-08 27 29 0.55878 7.24E-07
    x4 5 7 0.19709 2.11E-07 14 32 0.43011 9.25E-07 28 30 0.57757 5.09E-07
    x5 6 8 0.23647 6.81E-07 13 29 0.43131 3.14E-07 30 32 0.62163 9.16E-07
    x6 6 8 0.21179 4.71E-08 7 13 0.22683 5.03E-08 26 28 0.55034 9.70E-07
    x7 6 8 0.22048 2.00E-07 13 30 0.46398 5.36E-07 29 31 0.59930 9.94E-07
    x8 5 7 0.19480 5.33E-07 10 26 0.38605 6.07E-07 25 27 0.51780 5.22E-07

    Table 3.  Numerical results of the UMCD, MDDYM, and DTCG1 methods for Problems 5 and 6.
    Problem Dim IP UMCD MDDYM DTCG1
    Iter Fval Time(s) Hk Iter Fval Time(s) Hk Iter Fval Time(s) Hk
    P5 100 x1 3 5 0.00521 5.77E-08 3 5 0.00726 1.13E-08 18 20 0.00915 5.97E-07
    x2 4 6 0.00448 6.42E-07 5 9 0.01017 1.50E-08 22 24 0.00982 7.30E-07
    x3 4 6 0.00692 9.05E-09 8 18 0.01116 3.65E-08 22 24 0.01166 9.94E-07
    x4 5 7 0.00742 1.53E-07 21 62 0.01553 5.63E-07 23 25 0.01444 5.54E-07
    x5 5 7 0.00746 1.75E-07 9 22 0.02207 1.03E-08 18 20 0.01136 5.80E-07
    x6 5 7 0.00686 1.15E-07 7 15 0.00694 1.46E-08 22 24 0.00949 8.31E-07
    x7 5 7 0.00876 5.88E-07 10 21 0.00613 4.14E-08 22 24 0.01248 9.21E-07
    x8 5 7 0.00641 4.86E-08 2 4 0.00849 2.99E-08 21 23 0.01139 6.82E-07
    10,000 x1 3 5 0.02278 1.82E-07 3 5 0.01208 3.57E-08 19 21 0.05288 9.44E-07
    x2 5 7 0.02484 2.31E-08 5 9 0.03458 4.76E-08 24 26 0.06366 5.77E-07
    x3 4 6 0.01991 2.86E-08 8 18 0.03641 1.16E-07 24 26 0.06583 7.86E-07
    x4 5 7 0.03262 4.84E-07 22 65 0.09893 9.08E-07 24 26 0.08210 8.76E-07
    x5 5 7 0.03566 5.55E-07 9 22 0.04446 3.27E-08 19 21 0.09265 9.17E-07
    x6 5 7 0.02447 3.64E-07 7 15 0.02887 4.62E-08 24 26 0.08414 6.57E-07
    x7 6 8 0.04114 4.21E-08 10 21 0.03511 1.31E-07 24 26 0.06352 7.28E-07
    x8 5 7 0.02399 1.54E-07 2 4 0.02380 9.45E-08 23 25 0.06055 5.39E-07
    100,000 x1 3 5 0.10503 5.77E-07 3 5 0.09026 1.13E-07 21 23 0.41393 7.46E-07
    x2 5 7 0.16283 7.31E-08 5 9 0.12978 1.50E-07 25 27 0.42275 9.13E-07
    x3 4 6 0.14198 9.05E-08 8 18 0.24298 3.65E-07 26 28 0.47482 6.21E-07
    x4 6 8 0.18711 9.98E-08 23 67 0.68963 2.67E-08 26 28 0.41348 6.93E-07
    x5 6 8 0.21443 1.30E-07 9 22 0.42998 1.03E-07 21 23 0.34320 7.25E-07
    x6 6 8 0.17874 7.04E-08 7 15 0.25166 1.46E-07 26 28 0.42702 5.20E-07
    x7 6 8 0.23317 1.33E-07 10 21 0.23967 4.14E-07 26 28 0.47965 5.76E-07
    x8 5 7 0.15034 4.86E-07 2 4 0.05786 2.99E-07 24 26 0.42801 8.52E-07
    P6 100 x1 3 4 0.00839 0.00E+00 46 338 0.08394 0.00E+00 5 17 0.01222 0.00E+00
    x2 3 4 0.00855 0.00E+00 1000 1000 10.00000 NAN 4 27 0.01917 0.00E+00
    x3 3 4 0.00985 0.00E+00 1000 1000 1.00000 NAN 4 28 0.00795 0.00E+00
    x4 3 4 0.01189 0.00E+00 152 980 0.17760 9.03E-07 4 30 0.02202 0.00E+00
    x5 2 3 0.00952 0.00E+00 7 56 0.01307 0.00E+00 2 15 0.00794 0.00E+00
    x6 3 4 0.00860 0.00E+00 34 239 0.07584 0.00E+00 4 27 0.01482 0.00E+00
    x7 2 3 0.00995 0.00E+00 140 908 0.20564 9.46E-07 3 26 0.01125 0.00E+00
    x8 3 4 0.01060 0.00E+00 33 245 0.06110 0.00E+00 4 27 0.01564 0.00E+00
    10,000 x1 3 4 0.05055 0.00E+00 10 73 0.14858 0.00E+00 4 27 0.08237 0.00E+00
    x2 3 4 0.05251 0.00E+00 8 57 0.11160 0.00E+00 3 15 0.09650 0.00E+00
    x3 3 4 0.05212 0.00E+00 9 65 0.14487 0.00E+00 3 15 0.03633 0.00E+00
    x4 2 3 0.03333 0.00E+00 11 76 0.13786 0.00E+00 4 27 0.06110 0.00E+00
    x5 2 3 0.04027 0.00E+00 4 35 0.07081 0.00E+00 2 14 0.04709 0.00E+00
    x6 2 3 0.03202 0.00E+00 8 62 0.14194 0.00E+00 3 15 0.07708 0.00E+00
    x7 2 3 0.04137 0.00E+00 8 66 0.13983 0.00E+00 2 14 0.02688 0.00E+00
    x8 3 4 0.05385 0.00E+00 14 94 0.18096 0.00E+00 3 15 0.05561 0.00E+00
    100,000 x1 3 4 0.42789 0.00E+00 11 76 1.30552 0.00E+00 3 15 0.34265 0.00E+00
    x2 3 4 0.43296 0.00E+00 5 39 0.65111 0.00E+00 3 15 0.27402 0.00E+00
    x3 2 3 0.24661 0.00E+00 4 29 0.51169 0.00E+00 3 15 0.27954 0.00E+00
    x4 2 3 0.25837 0.00E+00 10 80 1.24010 0.00E+00 3 15 0.26327 0.00E+00
    x5 2 3 0.45543 0.00E+00 3 24 0.36586 0.00E+00 2 14 0.23537 0.00E+00
    x6 2 3 0.30170 0.00E+00 4 29 0.44399 0.00E+00 3 15 0.31806 0.00E+00
    x7 2 3 0.35130 0.00E+00 6 48 0.82400 0.00E+00 2 14 0.22764 0.00E+00
    x8 3 4 0.42526 0.00E+00 9 73 1.14171 0.00E+00 3 15 0.26653 0.00E+00

    Table 4.  Numerical results of the UMCD, MDDYM, and DTCG1 methods for Problems 7 and 8.
    Problem Dim IP UMCD MDDYM DTCG1
    Iter Fval Time(s) Hk Iter Fval Time(s) Hk Iter Fval Time(s) Hk
    P7 100 x1 2 3 0.00983 0.00E+00 10 40 0.01654 9.18E-07 3 21 0.01528 9.21E-08
    x2 4 5 0.01344 0.00E+00 8 31 0.01130 6.62E-07 4 25 0.01546 2.26E-07
    x3 3 4 0.00996 0.00E+00 10 37 0.02569 8.05E-08 4 24 0.00975 4.24E-07
    x4 2 3 0.01092 0.00E+00 9 32 0.01675 8.46E-07 4 23 0.00892 1.88E-07
    x5 3 4 0.01234 0.00E+00 12 39 0.01611 4.17E-07 5 26 0.00956 8.28E-08
    x6 2 3 0.01022 0.00E+00 8 30 0.01357 8.53E-07 4 24 0.01195 1.77E-08
    x7 2 3 0.01010 0.00E+00 11 38 0.02002 1.56E-07 4 21 0.01527 8.27E-07
    x8 2 3 0.00768 0.00E+00 20 84 0.02336 7.22E-07 4 26 0.01213 6.94E-08
    10,000 x1 2 3 0.06824 0.00E+00 11 43 0.14888 1.47E-07 3 21 0.12112 2.72E-07
    x2 4 5 0.10523 0.00E+00 9 34 0.15687 4.27E-07 4 25 0.05985 6.66E-07
    x3 3 4 0.07267 0.00E+00 10 37 0.10337 2.21E-07 5 29 0.07142 1.98E-08
    x4 2 3 0.04905 0.00E+00 11 40 0.10614 4.79E-07 4 23 0.09871 5.47E-07
    x5 3 4 0.07045 0.00E+00 13 42 0.12128 9.73E-08 5 26 0.06463 2.39E-07
    x6 2 3 0.05264 0.00E+00 6 23 0.09211 9.73E-07 4 24 0.07148 4.95E-08
    x7 2 3 0.04642 0.00E+00 11 38 0.13869 5.50E-07 5 26 0.06866 3.71E-08
    x8 2 3 0.06676 0.00E+00 19 79 0.18702 1.47E-07 4 26 0.10791 2.04E-07
    100,000 x1 2 3 0.40501 0.00E+00 11 43 0.90189 4.68E-07 3 21 0.36597 8.55E-07
    x2 4 5 0.81463 0.00E+00 10 38 0.82053 6.24E-07 5 30 0.53294 3.31E-08
    x3 3 4 0.58979 0.00E+00 10 37 0.74230 6.87E-07 5 29 0.59631 6.21E-08
    x4 2 3 0.40452 0.00E+00 12 43 0.94642 4.38E-08 5 28 0.47195 2.72E-08
    x5 3 4 0.54303 0.00E+00 13 42 0.94394 3.09E-07 5 26 0.46243 7.49E-07
    x6 2 3 0.37559 0.00E+00 12 43 0.93140 1.86E-07 4 24 0.53569 1.55E-07
    x7 2 3 0.36851 0.00E+00 12 41 0.94784 6.84E-07 5 26 0.48445 1.16E-07
    x8 2 3 0.36777 0.00E+00 19 79 1.60547 5.84E-07 4 26 0.44721 6.40E-07
    P8 100 x1 5 7 0.01136 9.18E-10 9 25 0.01476 3.16E-07 6 8 0.02509 3.03E-07
    x2 5 7 0.01256 2.65E-08 8 21 0.02121 2.11E-07 6 8 0.00575 9.77E-08
    x3 5 7 0.01537 1.47E-07 7 19 0.01729 7.34E-07 5 7 0.00825 4.28E-07
    x4 3 5 0.00977 2.96E-07 6 17 0.01466 4.67E-07 4 6 0.00492 6.75E-07
    x5 5 7 0.01473 2.63E-07 15 48 0.02332 4.57E-07 6 8 0.00507 5.95E-07
    x6 4 6 0.01137 9.69E-07 8 21 0.01054 1.83E-07 6 8 0.00733 7.04E-08
    x7 5 7 0.01011 2.87E-07 6 15 0.00987 3.03E-07 5 7 0.00516 6.75E-07
    x8 5 7 0.01265 1.20E-07 7 19 0.00618 7.72E-08 6 8 0.00804 2.09E-07
    10,000 x1 5 7 0.06854 2.90E-09 9 25 0.08002 9.99E-07 6 8 0.02902 9.58E-07
    x2 5 7 0.06297 8.37E-08 8 21 0.04202 6.66E-07 6 8 0.02848 3.09E-07
    x3 5 7 0.07992 4.64E-07 8 21 0.03989 2.11E-07 6 8 0.03310 8.64E-08
    x4 3 5 0.04594 9.38E-07 7 19 0.04275 2.07E-07 5 7 0.02787 1.36E-07
    x5 5 7 0.06013 8.32E-07 16 50 0.11420 2.29E-07 7 9 0.03078 1.20E-07
    x6 5 7 0.06760 2.76E-07 8 21 0.04725 5.79E-07 6 8 0.02689 2.22E-07
    x7 5 7 0.06589 9.06E-07 6 15 0.03220 9.58E-07 6 8 0.04191 1.36E-07
    x8 5 7 0.06528 3.80E-07 7 19 0.04509 2.44E-07 6 8 0.02953 6.62E-07
    100,000 x1 5 7 0.47364 9.18E-09 10 27 0.45492 1.76E-07 7 9 0.22843 1.93E-07
    x2 5 7 0.46010 2.65E-07 9 23 0.40843 1.38E-07 6 8 0.27425 9.77E-07
    x3 6 8 0.58311 8.89E-08 8 21 0.35741 6.68E-07 6 8 0.23392 2.73E-07
    x4 4 6 0.38535 5.46E-08 7 19 0.31560 6.55E-07 5 7 0.18196 4.31E-07
    x5 6 8 0.52697 1.25E-07 16 50 0.76155 7.23E-07 7 9 0.27878 3.80E-07
    x6 5 7 0.48573 8.73E-07 9 23 0.38114 1.55E-07 6 8 0.20102 7.04E-07
    x7 6 8 0.57955 7.26E-08 7 17 0.30548 7.91E-09 6 8 0.22101 4.31E-07
    x8 6 8 0.53726 4.92E-08 7 19 0.27165 7.72E-07 7 9 0.22957 1.34E-07

    Table 5.  Numerical results of the UMCD, MDDYM, and DTCG1 methods for Problems 9 and 10.
    Problem Dim IP UMCD MDDYM DTCG1
    Iter Fval Time(s) Hk Iter Fval Time(s) Hk Iter Fval Time(s) Hk
    P9 100 x1 1 2 0.00576 0.00E+00 3 13 0.01774 2.29E-08 4 51 0.02559 4.25E-08
    x2 1 2 0.00585 0.00E+00 11 55 0.01757 9.95E-07 5 62 0.01398 4.79E-08
    x3 1 2 0.00608 0.00E+00 2 6 0.01028 0.00E+00 5 63 0.02335 9.88E-08
    x4 1 2 0.00350 0.00E+00 1 2 0.00265 0.00E+00 5 63 0.01456 9.45E-08
    x5 1 2 0.00404 0.00E+00 9 42 0.02898 6.14E-07 1 6 0.00465 0.00E+00
    x6 1 2 0.00617 0.00E+00 6 28 0.00710 1.17E-07 5 62 0.01665 3.89E-08
    x7 1 2 0.00327 0.00E+00 1 3 0.00285 0.00E+00 1 3 0.00293 0.00E+00
    x8 1 2 0.00471 0.00E+00 2 10 0.00989 1.96E-07 5 62 0.02565 4.05E-08
    10,000 x1 1 2 0.01833 0.00E+00 3 13 0.03811 7.25E-08 4 51 0.06791 1.34E-07
    x2 1 2 0.02078 0.00E+00 12 58 0.10703 4.37E-08 5 62 0.13277 1.52E-07
    x3 1 2 0.02853 0.00E+00 2 6 0.01617 0.00E+00 5 63 0.08501 3.13E-07
    x4 1 2 0.00851 0.00E+00 1 2 0.01422 0.00E+00 5 63 0.16091 2.99E-07
    x5 1 2 0.01172 0.00E+00 10 45 0.12989 3.06E-08 1 6 0.01353 0.00E+00
    x6 1 2 0.02217 0.00E+00 6 28 0.06078 3.69E-07 5 62 0.08562 1.23E-07
    x7 1 2 0.01247 0.00E+00 1 3 0.01702 0.00E+00 1 3 0.01811 0.00E+00
    x8 1 2 0.02245 0.00E+00 2 10 0.01802 6.18E-07 5 62 0.15127 1.28E-07
    100,000 x1 1 2 0.13106 0.00E+00 3 13 0.16577 2.29E-07 4 51 0.44893 4.25E-07
    x2 1 2 0.15448 0.00E+00 12 58 0.67073 1.38E-07 5 62 0.54013 4.79E-07
    x3 1 2 0.15167 0.00E+00 2 6 0.11161 0.00E+00 5 63 0.54475 9.88E-07
    x4 1 2 0.04368 0.00E+00 1 2 0.04261 0.00E+00 5 63 0.66749 9.45E-07
    x5 1 2 0.07560 0.00E+00 10 45 0.57314 9.68E-08 1 6 0.14915 0.00E+00
    x6 1 2 0.16841 0.00E+00 7 31 0.37625 3.90E-09 5 62 0.57907 3.89E-07
    x7 1 2 0.07196 0.00E+00 1 3 0.06191 0.00E+00 1 3 0.08504 0.00E+00
    x8 1 2 0.13712 0.00E+00 3 13 0.21464 7.38E-08 5 62 0.57875 4.05E-07
    P10 100 x1 1 2 0.00755 0.00E+00 3 17 0.00671 3.90E-08 1 13 0.00857 0.00E+00
    x2 1 2 0.00509 0.00E+00 13 77 0.01799 3.98E-08 1 13 0.00462 0.00E+00
    x3 1 2 0.00464 0.00E+00 5 27 0.01040 4.93E-07 1 13 0.00612 0.00E+00
    x4 1 2 0.00671 0.00E+00 14 83 0.05777 1.15E-08 1 13 0.00899 0.00E+00
    x5 1 2 0.00667 0.00E+00 12 66 0.03522 2.49E-07 1 13 0.01943 0.00E+00
    x6 1 2 0.00667 0.00E+00 9 51 0.01460 3.71E-08 1 13 0.01309 0.00E+00
    x7 1 2 0.00474 0.00E+00 13 71 0.04436 2.83E-07 1 13 0.00668 0.00E+00
    x8 1 2 0.00468 0.00E+00 4 21 0.00680 1.48E-07 1 13 0.00825 0.00E+00
    10,000 x1 1 2 0.02513 0.00E+00 3 17 0.03592 1.23E-07 1 13 0.04248 0.00E+00
    x2 1 2 0.02792 0.00E+00 13 77 0.20699 1.26E-07 1 13 0.02220 0.00E+00
    x3 1 2 0.02721 0.00E+00 6 31 0.06834 1.87E-08 1 13 0.04022 0.00E+00
    x4 1 2 0.02680 0.00E+00 14 83 0.15190 3.63E-08 1 13 0.04513 0.00E+00
    x5 1 2 0.02727 0.00E+00 12 66 0.12581 7.88E-07 1 13 0.02469 0.00E+00
    x6 1 2 0.03042 0.00E+00 9 51 0.09754 1.17E-07 1 13 0.04831 0.00E+00
    x7 1 2 0.02655 0.00E+00 13 71 0.13567 8.94E-07 1 13 0.04894 0.00E+00
    x8 1 2 0.02428 0.00E+00 4 21 0.04477 4.67E-07 1 13 0.05580 0.00E+00
    100,000 x1 1 2 0.17512 0.00E+00 3 17 0.27594 3.90E-07 1 13 0.16893 0.00E+00
    x2 1 2 0.19268 0.00E+00 13 77 1.08727 3.98E-07 1 13 0.19849 0.00E+00
    x3 1 2 0.21737 0.00E+00 6 31 0.53243 5.92E-08 1 13 0.21012 0.00E+00
    x4 1 2 0.20273 0.00E+00 14 83 1.17916 1.15E-07 1 13 0.21843 0.00E+00
    x5 1 2 0.19913 0.00E+00 13 70 1.19117 2.46E-08 1 13 0.19528 0.00E+00
    x6 1 2 0.19263 0.00E+00 9 51 0.74924 3.71E-07 1 13 0.18907 0.00E+00
    x7 1 2 0.19943 0.00E+00 14 75 1.10069 1.27E-07 1 13 0.21608 0.00E+00
    x8 1 2 0.18627 0.00E+00 5 25 0.37949 4.78E-08 1 13 0.16808 0.00E+00


    To ensure fairness, the compared algorithms were run with the same initial points and problem dimensions. As observed from the tables, the UMCD algorithm solves all the test problems with the fewest iterations, the least CPU time, and the fewest function evaluations compared with the DTCG1 and MDDYM algorithms. Based on these results, the UMCD algorithm outperformed both MDDYM and DTCG1.

    Table 6 presents the numerical comparison of the proposed algorithm with the existing MCHCG and ZANG algorithms on Problems 11 and 12. The table shows a significant performance advantage of the proposed algorithm over the MCHCG and ZANG algorithms. In view of the above, the proposed method is considered the preferred choice due to its superiority in the number of iterations and overall CPU time.

    Table 6.  Numerical results of the UMCD, MCHCG, and ZANG methods for Problems 11 and 12.
    Problem Dim IP UMCD MCHCG ZANG
    Iter Fval Time(s) Hk Iter Fval Time(s) Hk Iter Fval Time(s) Hk
    11 1000 x1 15 16 0.03816 0.00E+00 6 7 0.02527 1.50E-07 10 12 0.02170 6.42E-07
    x2 9 10 0.03703 0.00E+00 6 8 0.02020 3.29E-07 13 14 0.00902 7.17E-07
    x3 4 5 0.00993 0.00E+00 6 8 0.02159 1.43E-07 13 14 0.01132 9.79E-07
    x4 4 5 0.01202 0.00E+00 6 8 0.02249 6.13E-07 13 15 0.01018 4.40E-07
    x5 4 5 0.01421 0.00E+00 6 8 0.02213 2.25E-07 13 15 0.01131 8.72E-07
    x6 4 5 0.01246 0.00E+00 6 8 0.02175 2.78E-07 13 14 0.01177 8.16E-07
    x7 2 3 0.00902 0.00E+00 6 8 0.02330 7.48E-07 13 15 0.00834 6.47E-07
    x8 4 5 0.01425 0.00E+00 6 8 0.02475 1.15E-07 12 14 0.01199 4.69E-07
    10,000 x1 4 5 0.06147 0.00E+00 6 7 0.11607 2.81E-07 11 13 0.04875 5.18E-07
    x2 14 15 0.18633 0.00E+00 6 8 0.13784 9.77E-07 13 15 0.05500 7.83E-07
    x3 9 10 0.11854 0.00E+00 6 8 0.13576 3.88E-07 14 15 0.06108 7.60E-07
    x4 4 5 0.07767 0.00E+00 8 9 0.15722 1.02E-07 14 15 0.05532 8.69E-07
    x5 4 5 0.05527 0.00E+00 8 9 0.14186 1.25E-07 14 16 0.05873 7.08E-07
    x6 13 14 0.17925 0.00E+00 6 8 0.12612 8.07E-07 13 15 0.05752 8.97E-07
    x7 2 3 0.03364 0.00E+00 7 9 0.15516 3.37E-07 14 16 0.05924 5.33E-07
    x8 6 7 0.09767 0.00E+00 6 8 0.14821 9.00E-08 13 14 0.05091 9.12E-07
    100,000 x1 6 7 0.60125 0.00E+00 6 7 0.79950 7.90E-07 12 14 0.33064 4.44E-07
    x2 6 7 0.56274 0.00E+00 8 9 1.16167 2.30E-07 14 16 0.37496 6.69E-07
    x3 18 19 1.77732 0.00E+00 7 9 1.10606 9.02E-08 14 16 0.38147 9.34E-07
    x4 15 16 1.46303 0.00E+00 5 7 0.83329 1.90E-07 15 16 0.39235 7.43E-07
    x5 8 9 0.74268 9.80E-07 7 9 1.14606 3.13E-07 15 17 0.40514 6.08E-07
    x6 6 7 0.61239 0.00E+00 8 9 1.13741 1.89E-07 14 16 0.38782 7.66E-07
    x7 2 3 0.24444 0.00E+00 9 10 1.40988 2.37E-07 15 17 0.41061 4.59E-07
    x8 5 6 0.40187 0.00E+00 6 8 0.96455 1.73E-07 14 15 0.36014 7.78E-07
    12 1000 x1 3 5 0.00895 1.51E-07 5 7 0.01557 9.09E-08 10 12 0.00909 5.96E-07
    x2 5 7 0.02239 7.73E-07 6 8 0.01930 1.37E-07 13 14 0.01035 7.64E-07
    x3 6 8 0.01628 1.05E-07 6 8 0.01651 1.46E-07 13 15 0.01111 4.72E-07
    x4 5 7 0.02212 6.27E-07 6 8 0.02404 1.15E-07 13 15 0.01170 5.77E-07
    x5 6 8 0.02581 2.08E-07 6 8 0.01971 9.05E-07 14 15 0.01159 7.14E-07
    x6 5 7 0.03400 4.33E-07 6 8 0.01731 1.48E-07 13 14 0.01135 9.11E-07
    x7 5 7 0.01919 9.13E-11 6 8 0.01682 8.10E-07 13 15 0.01143 9.57E-07
    x8 4 6 0.00910 5.58E-08 5 7 0.01524 8.43E-07 12 14 0.01086 4.46E-07
    10,000 x1 3 5 0.04222 4.77E-07 5 7 0.10146 2.87E-07 11 13 0.04298 5.16E-07
    x2 6 8 0.07704 1.33E-07 6 8 0.11393 4.33E-07 13 15 0.05166 9.55E-07
    x3 6 8 0.08050 3.31E-07 6 8 0.11742 4.61E-07 14 16 0.05176 4.09E-07
    x4 6 8 0.07692 1.53E-07 6 8 0.11293 3.62E-07 14 16 0.05321 5.00E-07
    x5 6 8 0.08210 6.59E-07 7 9 0.13174 2.17E-07 14 16 0.05245 8.92E-07
    x6 6 8 0.07382 5.77E-08 6 8 0.12074 4.66E-07 14 15 0.04957 7.89E-07
    x7 5 7 0.06370 2.89E-10 7 9 0.13564 1.93E-07 14 16 0.05402 8.30E-07
    x8 4 6 0.07024 1.76E-07 6 8 0.11668 2.09E-07 13 14 0.04749 9.79E-07
    100,000 x1 4 6 0.37268 4.97E-08 5 7 0.77854 8.72E-07 12 14 0.30186 4.47E-07
    x2 6 8 0.57496 4.20E-07 7 9 1.01415 1.06E-07 14 16 0.34227 8.27E-07
    x3 7 9 0.66245 3.98E-08 7 9 1.02851 1.11E-07 15 16 0.35078 8.97E-07
    x4 6 8 0.57558 4.83E-07 7 9 1.02459 8.73E-08 15 17 0.36453 4.33E-07
    x5 7 9 0.66371 6.60E-08 7 9 1.03310 6.82E-07 15 17 0.36946 7.73E-07
    x6 6 8 0.55160 1.83E-07 7 9 1.00286 1.13E-07 14 16 0.35139 9.86E-07
    x7 5 7 0.48683 9.13E-10 7 9 1.00712 6.10E-07 15 17 0.36532 7.19E-07
    x8 4 6 0.38062 5.58E-07 6 8 0.85922 6.62E-07 14 15 0.32884 8.48E-07


    Furthermore, using the Dolan and Moré performance profile [42], Figures 1–3 were plotted using the data from Tables 1–5. Each figure illustrates the percentage P(\tau) of problems on which a given method performs within a factor \tau of the best result. Figure 1 compares the performance of the proposed algorithm (UMCD) with that of the MDDYM and DTCG1 algorithms based on the number of iterations. Figures 2 and 3 compare the performances of the three algorithms based on function evaluations and CPU time, respectively. From the graphs, it is clearly evident that the proposed algorithm (UMCD) outperforms the other two algorithms across the three performance metrics used. Moreover, Figures 4–6 were plotted using the data from Table 6, and they all demonstrate the efficiency of the proposed algorithm over the compared algorithms. While the performance of the MCHCG algorithm is acceptable, the UMCD algorithm is the best choice when the overall performance is considered.

    Figure 1.  Performance profile for the number of iterations.
    Figure 2.  Performance profile for the function evaluations.
    Figure 3.  Performance profile for the CPU Time (in seconds).
    Figure 4.  Performance profile of Problems 11 & 12 for the iter.
    Figure 5.  Performance profile of Problems 11 & 12 for the Fval.
    Figure 6.  Performance profile of Problems 11 & 12 for the Time(s).

    Among the applications of the proposed algorithm is signal and image recovery in compressive sensing (CS). Compressed sensing, or compressive sensing, is concerned with the reconstruction of sparse signals from incoherent measurements. It exploits the fact that many real signals have a sparse representation in a suitable transform domain, where most coefficients are insignificant or zero. It replaces the traditional sampling technique, thereby reducing the number of samples needed for signal sampling and reconstruction. The technique depends mainly on mathematical algorithms, especially derivative-free algorithms that require little storage; CG algorithms are therefore a good alternative in this respect. Compressed sensing is applied in biomedical engineering, medical sciences, and various other fields of science and engineering [43,44]. It uses the following linear equation to recover the sparse signals:

    \begin{equation} \phi = N x, \end{equation} (4.1)

    where x \in \mathbb{R}^n, \phi \in \mathbb{R}^m, and N \in \mathbb{R}^{m\times n} (m \ll n) is a linear operator. A popular approach for finding the sparse solution is to minimize

    \begin{equation} \min\limits_{x}\frac{1}{2}\|\phi-Nx\|_2^2+\tau\|x\|_1, \end{equation} (4.2)

    where \tau is a positive regularization parameter. Equation (4.2) is non-smooth due to the presence of the \ell_1 -norm; therefore, smooth gradient-based algorithms cannot be applied to it directly. The gradient projection for sparse reconstruction (GPSR) approach proposed by Figueiredo et al. [45] is among the best approaches for solving (4.2). In [45], problem (4.2) is written as

    \begin{equation} x = u-v, \quad u\geq0, \quad v\geq0, \quad u, v\in \mathbb{R}^n, \end{equation} (4.3)

    where u_i = (x_i)_+ and v_i = (-x_i)_+ for i = 1, 2, \ldots, n, and (\cdot)_+ denotes the positive-part operator, defined as (x)_+ = \max\{0, x\}. Using the definition of the \ell_1 -norm, it follows that \|x\|_1 = e_n^T u + e_n^T v, with e_n = (1, 1, \ldots, 1)^T \in \mathbb{R}^n, and problem (4.2) becomes

    \begin{equation} \min\limits_{u, v}\frac{1}{2}\|\phi-N(u-v)\|_2^2+\tau e_n^Tu+\tau e_n^Tv, \quad u, v\geq0. \end{equation} (4.4)

    Figueiredo et al. [45] show how to express problem (4.4) as a standard bound-constrained quadratic programming problem:

    \begin{equation} \min\limits_{z}\frac{1}{2}z^THz+c^Tz, \quad \text{s.t}\quad z\geq0, \end{equation} (4.5)

    where z = \left(\begin{array}{c} u \\ v \\ \end{array} \right) , c = \tau e_{2n}+\left(\begin{array}{c} -b \\ b \\ \end{array} \right) , b = N^T\phi , H = \left(\begin{array}{cc} N^TN & -N^TN \\ -N^TN & N^TN \\ \end{array} \right). The matrix H is positive semi-definite. Equation (4.5) can be transformed into a linear variational inequality, which takes the form

    \begin{equation} \mathcal{H}(z) = \min\{z, Hz+c\} = 0. \end{equation} (4.6)

    The mapping \mathcal{H} in (4.6) is a vector-valued function; it was shown to be Lipschitz continuous and monotone in Lemma 3 of [46] and Lemma 2.2 of [47]. Hence, our proposed UMCD algorithm is a suitable choice for solving it. A numerical comparison is carried out against the CHCG algorithm on signal reconstruction and recovery. The size of the signal is taken to be n = 2^{12} with k = 2^{10} . The mean squared error (MSE) is used to assess the quality of the restoration process; it is given by

    \begin{equation} MSE = \frac{1}{n}\|\hat{x}-\tilde{x}\|^2, \end{equation} (4.7)

    where \hat{x} refers to the original signal, and \tilde{x} is the recovered signal. The experiment starts by initializing x_0 = N^T\phi and conducting ten (10) trials. It is set to stop when \frac{|f_k-f_{k-1}|}{|f_{k-1}|} < 10^{-5}, where f(x) = \frac{1}{2}\|Nx-\phi\|_2^2+\tau\|x\|_1 is a merit function. For each trial, the MSE, the number of iterations (ITER), and the CPU time (TIME) are recorded.
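    A minimal sketch of the reformulation (4.1)–(4.6) and of the MSE (4.7) on a tiny random instance is given below; the problem sizes, sparsity level, and \tau are illustrative assumptions and differ from the experiment reported next.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, tau = 32, 128, 0.01
N = rng.standard_normal((m, n)) / np.sqrt(m)           # sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
phi = N @ x_true                                       # measurements, eq. (4.1)

b = N.T @ phi
G = N.T @ N
Hmat = np.block([[G, -G], [-G, G]])                    # H of (4.5)
c = tau * np.ones(2 * n) + np.concatenate([-b, b])     # c of (4.5)

def H_map(z):
    """Residual of the reformulation (4.6): componentwise min of z and Hz + c."""
    return np.minimum(z, Hmat @ z + c)

# u, v split (4.3) applied to the starting point x_0 = N^T phi
z0 = np.concatenate([np.maximum(b, 0.0), np.maximum(-b, 0.0)])
print(H_map(z0).shape)                                 # (2n,) residual to drive to zero

def mse(x_orig, x_rec):
    """Mean squared error (4.7)."""
    return np.mean((x_orig - x_rec) ** 2)

print(mse(x_true, np.zeros(n)))                        # MSE of the all-zero guess
```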

    As reported in Figures 7 and 8 and Table 7, both algorithms performed well in recovering the signals with a reasonable degree of accuracy. However, considering the number of iterations (ITER) and CPU time (TIME) obtained by the two algorithms, UMCD and CHCG, as reported in Table 7, the UMCD algorithm outperformed the CHCG algorithm.

    Figure 7.  From top to bottom: the original signal, the measurement, and the signals recovered by the UMCD and CHCG algorithms, respectively.
    Figure 8.  Comparison of the UMCD and CHCG algorithms in signal recovery.
    Table 7.  Ten trials of UMCD and CHCG signal recovery with \ell_1 -norm regularization.
    UMCD CHCG
    MSE ITER TIME(s) MSE ITER TIME(s)
    2.13E-04 91 2.76 2.30E-05 137 2.06
    8.16E-06 92 2.52 1.96E-05 141 2.06
    8.11E-06 89 2.61 2.38E-05 121 3.55
    2.09E-04 91 2.52 2.38E-05 135 2.98
    4.62E-05 84 2.34 1.62E-05 140 3.09
    7.70E-06 96 2.88 1.85E-05 125 3.77
    2.72E-05 87 2.72 4.31E-05 135 3.16
    1.09E-05 84 2.53 1.94E-05 147 3.17
    0.74E-05 75 2.25 2.10E-04 108 0.41
    1.19E-05 72 2.53 2.71E-05 122 4.53
    AVERAGE 3.62E-05 92.8 2.56 4.24E-05 132 2.87


    This paper has presented an improved convex constrained conjugate gradient descent method for nonlinear monotone equations with signal recovery applications. The proposed scheme satisfies the descent condition, which is established through an eigenvalue analysis of the search direction matrix. Unlike the methods presented by Yu and Guan [33] and Yu et al. [34], which proved global convergence only for the interval b \in (\frac{1}{4}, \infty), the proposed method extends the interval to b_k \in (0, \infty), and global convergence is proved. Furthermore, the numerical performance of the proposed algorithm, in comparison with the MDDYM (2021) [37], DTCG1 [6], ZANG [38], and MCHCG [39] algorithms, confirms its efficiency. Being derivative-free, the proposed algorithm is applied to compressive sensing for signal recovery problems. To measure the capacity of the proposed algorithm for signal recovery problems, the experiment is run alongside the CHCG algorithm, a well-known algorithm for signal recovery. The recorded results of the experiment indicate the efficiency of the proposed algorithm.

    H. A. conceptualized the study, developed the methodology, conducted the theoretical analysis, and drafted the initial manuscript. A. K. A. and M. Y. W. supervised the research, validated the results, refined the proofs, and critically reviewed the mathematical framework. A. S. H. and K. A. contributed to the refinement of the proofs, implemented the algorithm, conducted the computational experiments, and analyzed the data. I. A. R. M. contributed to the literature review, manuscript drafting, and funding acquisition. S. M. I. revised the manuscript and contributed to the interpretation of results and manuscript editing. Y. B. M. provided insights into the signal recovery applications and refined the results. E. M. N. verified the computational accuracy and assisted with proofreading, formatting, and ensuring compliance with journal requirements. All authors read and approved the final manuscript.

    The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors are grateful to the Research Management Centre (RMC), Universiti Utara Malaysia, through the KHAS grant with SO code 21777, and to the Information Systems and Technology Department of the Kuwait Technical College, Kuwait, for their financial support.

    This research is sponsored by the Institutional Based Research (IBR) of the Tertiary Education Trust Fund (TETFund), Sule Lamido University Kafin Hausa, Nigeria.

    The authors declare there is no conflict of interests.



  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
