
Weak convergence of the conditional single index U-statistics for locally stationary functional time series

  • In recent years, there has been a notable shift in focus towards the analysis of non-stationary time series, driven largely by the complexities associated with delineating significant asymptotic behaviors inherent to such processes. The genesis of the theory of locally stationary processes arises from the quest for asymptotic inference grounded in nonparametric statistics. This paper endeavors to formulate a comprehensive framework for conducting inference within the realm of locally stationary functional time series by harnessing the conditional U-statistics methodology as propounded by W. Stute in 1991. The proposed methodology extends the Nadaraya-Watson regression function estimations. Within this context, a novel estimator was introduced for the single index conditional U-statistics operator, adept at accommodating the non-stationary attributes inherent to the data-generating process. The primary objective of this paper was to establish the weak convergence of conditional U-processes within the domain of locally stationary functional mixing data. Specifically, the investigation delved into scenarios of weak convergence involving functional explanatory variables, considering both bounded and unbounded sets of functions while adhering to specific moment requirements. The derived findings emanate from broad structural specifications applicable to the class of functions and models under scrutiny. The theoretical insights expounded in this study constitute pivotal tools for advancing the domain of functional data analysis.

    Citation: Salim Bouzebda. Weak convergence of the conditional single index U-statistics for locally stationary functional time series. AIMS Mathematics, 2024, 9(6): 14807-14898. doi: 10.3934/math.2024720




    Consider the following system of equations:

    $H(x) = 0, \quad x \in \psi,$ (1.1)

    where $H : \mathbb{R}^n \to \mathbb{R}^n$ is a continuous and monotone mapping and $\psi \subseteq \mathbb{R}^n$ is a nonempty, closed, and convex set. The monotonicity of $H$ implies that it satisfies the following inequality:

    $(H(x) - H(y))^T (x - y) \ge 0, \quad \forall x, y \in \mathbb{R}^n.$ (1.2)

    In this paper, $\mathbb{R}^n$ and $\|\cdot\|$ refer to the real $n$-dimensional space and the Euclidean norm, respectively, and we abbreviate $H(x_k)$ as $H_k$.

    The system described above arises in various fields of science and engineering, as many scientific formulations naturally lead to systems of nonlinear equations. These systems appear in mathematical research areas such as differential equations, optimization theory, and integral equations. They also serve as subproblems in generalized proximal algorithms [1].

    In chemistry, such systems frequently arise in the determination of chemical equilibrium points, while in economics, equilibrium analysis is often conducted in a similar manner [2]. Some variational inequalities are reformulated as monotone systems before being solved [3]. Additionally, these systems have applications in compressed sensing, particularly in signal reconstruction and image de-blurring problems[2,4,5,6].

    These systems also play a crucial role in solving optimal power flow equations [7]. In radiative transfer processes, they are applied through the discretization of the Chandrasekhar integral equation [2]. Furthermore, they are utilized in financial institutions for forecasting and automation [8].

    Numerous methods and techniques have been developed, analyzed, and refined to solve system (1.1). Among these, the most widely used is Newton's method and its improved variants [9]. Newton's and quasi-Newton techniques are commonly employed for solving constrained monotone systems of nonlinear equations [9]. However, despite their popularity, the computational cost and storage requirements associated with the Jacobian matrix (the first derivative of the system) make Newton's method impractical for large-scale systems.

    Quasi-Newton methods, in contrast, approximate the Jacobian matrix rather than computing it explicitly [10,11]. While these methods eliminate the need for exact derivative computations, they require substantial memory for matrix storage and are slower than Newton's method due to limited Jacobian information [10,11]. The steepest descent is among the popular descent methods for solving system (1.1). Although previously explored, it has largely fallen out of favor due to its slow convergence rate [12].

    Given the challenges posed by large-scale systems, researchers have increasingly focused on conjugate gradient (CG) methods. As derivative-free approaches, CG methods are particularly well-suited for solving large-scale systems [13].

    The CG methods are iterative schemes that generate a sequence of iterative points via

    $x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots,$ (1.3)

    where $\alpha_k$ is the step length and $d_k$ is the search direction [14]. For CG methods, the search direction is defined as:

    $d_k = \begin{cases} -H_k, & \text{if } k = 0, \\ -H_k + \beta_k s_{k-1}, & \text{if } k \ge 1, \end{cases}$ (1.4)

    where $s_{k-1} = x_k - x_{k-1}$, and $\beta_k$ is the CG parameter.

    The CG parameter βk takes different forms depending on its formulation. For example,

    $\beta_k^{CD} = -\dfrac{\|H_k\|^2}{H_{k-1}^T s_{k-1}},$ (1.5)

    which is known as the conjugate descent (CD) CG parameter [15].
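The iteration (1.3)–(1.4) is short enough to sketch directly. The snippet below is a minimal illustration, not the paper's solver: the step length is held fixed instead of being chosen by a line search, the sign convention adopted for $\beta_k^{CD}$ is one common reconstruction, and the toy monotone system $H(x) = x$ (unique zero at the origin) stands in for a real application.

```python
import numpy as np

def beta_cd(Hk, Hk_prev, s_prev):
    # CD parameter (1.5), under the sign convention assumed in this sketch.
    return -(Hk @ Hk) / (Hk_prev @ s_prev)

def cg_solve(H, x0, alpha=0.5, iters=30):
    """Iteration (1.3)-(1.4): x_{k+1} = x_k + alpha_k d_k with d_0 = -H_0
    and d_k = -H_k + beta_k s_{k-1}.  The fixed alpha is an assumption
    made purely for illustration."""
    x = np.asarray(x0, dtype=float)
    Hx = H(x)
    d = -Hx
    for _ in range(iters):
        x_new = x + alpha * d                    # (1.3)
        s = x_new - x                            # s_{k-1} = x_k - x_{k-1}
        Hx_prev, x, Hx = Hx, x_new, H(x_new)
        d = -Hx + beta_cd(Hx, Hx_prev, s) * s    # (1.4)
    return x

# Toy monotone system H(x) = x, whose unique zero is the origin.
x_star = cg_solve(lambda v: v, np.full(4, 1.0))
```

On this toy problem the iterate contracts by roughly a factor of three per step, so a few dozen iterations drive the residual far below any practical tolerance.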

    The parameter βk is considered the most critical component of any CG direction, as different parameters lead to distinct CG directions with unique convergence properties and numerical performance. While some CG parameters are computationally efficient, others may fail to satisfy the inequality below, which is an essential condition for global convergence [2]:

    $H_k^T d_k \le -\tau \|H_k\|^2, \quad \tau > 0.$ (1.6)

    The CD parameter βCDk belongs to the class of classical CG methods, exhibiting good convergence properties but weak numerical performance. To address this limitation, Yang et al. [16] hybridized the Liu-Storey (LS) and CD parameters using Wolfe's line search strategy, resulting in an efficient algorithm. Similarly, Kaelo et al. [17] hybridized the LS, Hestenes-Stiefel (HS), and Dai-Yuan (DY) CG parameters, producing an algorithm that satisfies (1.6). Using the strong Wolfe's line search strategy, the proposed algorithm achieved global convergence.

    Snezana and Djordjevic [18] proposed a new hybrid of the CD and LS methods, ensuring that the hybrid parameter satisfies (1.6). Liu et al. [19] presented a scheme that also satisfies (1.6), introducing an LS-type CG parameter for solving the system (1.1) by applying Powell's strategy to the LS-type scheme [20]. Motivated by the method in [21], Wang et al. [22] developed a three-term CG algorithm for solving (1.1). Similarly, Li Wang [23], building on the FR-type CG iterative scheme, proposed an algorithm for symmetric systems by improving the method of Zhang et al. [24]. This method satisfies (1.6) and converges globally.

    Liu and Li [25], inspired by the work of Yu et al. [26], presented a multivariate spectral algorithm for solving systems of the form (1.1). Acknowledging the weak numerical performance of the classical CD method, Zhang et al. [27] proposed a three-term CG direction satisfying (1.6) and the trust-region property. The proposed method achieved convergence through a modified Weak Wolfe-Powell (MWWP) line search strategy coupled with a projection algorithm, with numerical results outperforming many existing algorithms.

    Recent studies have proposed various CG-based approaches to tackle constrained and unconstrained nonlinear equations. Ibrahim et al. [28] introduced two classes of spectral three-term derivative-free methods that effectively solve nonlinear equations. However, despite their efficiency, these methods struggle with ill-conditioned problems, leading to slow convergence. Similarly, Sabi'u and Sirisubtawee [29] proposed an inertial Dai-Liao conjugate gradient method tailored for convex constrained monotone equations, ensuring better stability and robustness. Nonetheless, inertial methods often require careful parameter tuning to prevent instability.

    Additionally, Zheng et al. [30] developed a conjugate gradient projection method that efficiently handles equations with convex constraints. However, projection-based methods can suffer from increased computational cost, especially in high-dimensional problems [31]. Another notable contribution is by Chankong et al. [32], who formulated a class of derivative-free three-term descent Hestenes-Stiefel conjugate gradient algorithms for constrained nonlinear problems. While these algorithms demonstrate promising performance, they often encounter difficulties in non-smooth or highly nonlinear problems where descent conditions are hard to maintain. These recent advancements accentuate the increasing focus on improving CG-based methods by incorporating new techniques, such as inertial acceleration, spectral properties, and projection methods, to enhance convergence and stability.

    Originally, classical CG methods were designed for solving systems of the form:

    $\min f(x), \quad x \in \mathbb{R}^n,$ (1.7)

    where $f : \mathbb{R}^n \to \mathbb{R}$ is a nonlinear, twice continuously differentiable function. Given their desirable properties and the fact that the first-order optimality condition for (1.7), i.e., $\nabla f(x) = 0$, is equivalent to (1.1) whenever $H = \nabla f$, all methods applied to (1.7) can be extended to tackle problems of the form (1.1).

    To modify an existing conjugate gradient (CG) parameter or produce a new successful one, two factors must be considered:

    ● The new modified CG parameter must satisfy (1.6), which will ensure global convergence.

    ● It must outperform existing methods computationally by carrying out numerical comparisons with the relevant methods in the literature.

    In [33], Yu and Guan presented a modified version of the DY CG parameter as follows:

    $\beta_k^{DDY} = \beta_k^{DY} - b\,\dfrac{\|H_k\|^2}{(d_{k-1}^T y_{k-1})^2}\, d_{k-1}^T H_k.$ (2.1)

    In their effort to prove the convergence of the method, they showed that (1.6) is satisfied with $\tau = 1 - \frac{1}{4b}$, where $b > \frac{1}{4}$.

    Following the same argument, a modified CD parameter was proposed in [33] and is presented as:

    $\beta_k^{DCD} = \beta_k^{CD} - b\,\dfrac{\|H_k\|^2}{(H_{k-1}^T d_{k-1})^2}\, H_k^T d_{k-1},$ (2.2)

    and the sufficient descent inequality was proved for b>14.

    Inspired by Yu and Guan's schemes, Yu et al. presented the spectral version of Eqs (2.1) and (2.2) in [34], with CG parameters given by:

    $\beta_k^{DSDY} = \beta_k^{SDY} - b\,\dfrac{\|H_k\|^2 \delta_k}{(d_{k-1}^T y_{k-1})^2}\, d_{k-1}^T H_k,$ (2.3)

    and

    $\beta_k^{DSCD} = \beta_k^{SCD} - b\,\dfrac{\|H_k\|^2 \delta_k}{(d_{k-1}^T y_{k-1})^2}\, H_k^T d_{k-1},$ (2.4)

    where $b > 0$; the choice of $\delta_k$ was left as a topic for further research.

    The limitation of these four schemes (2.1)–(2.4) is that they do not satisfy the sufficient descent inequality (1.6) for $b \in (0, \frac{1}{4}]$, and consequently, they cannot attain global convergence for $b \in (0, \frac{1}{4}]$. Furthermore, the value of $b > 0$ was fixed throughout.

    Motivated by the above and the fact that, to the best of our knowledge, modified CD schemes are limited in the literature, we aim to fill this gap by proposing a scheme that satisfies (1.6) and achieves global convergence for $b_k \in (0, \infty)$, where the value of $b_k > 0$ is not fixed.

    Consider the following:

    $\beta_k^{MCD} = -\dfrac{\xi \|H_k\|^2}{H_{k-1}^T s_{k-1}} - \dfrac{\xi b_k \|H_k\|^2 (H_k^T s_{k-1})}{(H_{k-1}^T s_{k-1})^2},$ (2.5)

    where $\xi > 0$ and $b_k > 0$ is a parameter to be determined.

    By (1.4) and (2.5), we have

    $d_k = -H_k - \dfrac{\xi \|H_k\|^2}{H_{k-1}^T s_{k-1}} s_{k-1} - \dfrac{\xi b_k \|H_k\|^2 (H_k^T s_{k-1})}{(H_{k-1}^T s_{k-1})^2} s_{k-1}.$ (2.6)

    By Perry's idea in [35], (2.6) can be written as

    $d_k = -Q_k H_k.$ (2.7)

    Here, Qk is called the search direction matrix, which is expressed as

    $Q_k = I + \dfrac{\xi\, s_{k-1} H_k^T}{H_{k-1}^T s_{k-1}} + \dfrac{\xi b_k \|H_k\|^2\, s_{k-1} s_{k-1}^T}{(H_{k-1}^T s_{k-1})^2}.$ (2.8)

    Since $Q_k$ is non-symmetric, it shall be symmetrized as follows:

    $A_k = I + \dfrac{\xi (s_{k-1} H_k^T + H_k s_{k-1}^T)}{H_{k-1}^T s_{k-1}} + \dfrac{\xi b_k \|H_k\|^2\, s_{k-1} s_{k-1}^T}{(H_{k-1}^T s_{k-1})^2}.$ (2.9)

    Then (2.7) becomes

    $d_k = -A_k H_k,$ (2.10)

    where Ak is the symmetric version of Qk in (2.7).

    Theorem 2.1. The matrix $A_k$ defined in (2.9) has eigenvalues $\lambda_k^+$, $\lambda_k^-$, and $1$ with multiplicity $n-2$, where

    $\lambda_k^+ = \frac{1}{2}\left[(2 + 2\xi p_k + \xi b_k q_k) + \sqrt{(\xi b_k q_k + 2\xi p_k)^2 + 4\xi^2 (q_k - p_k^2)}\right]$ (2.11)

    and

    $\lambda_k^- = \frac{1}{2}\left[(2 + 2\xi p_k + \xi b_k q_k) - \sqrt{(\xi b_k q_k + 2\xi p_k)^2 + 4\xi^2 (q_k - p_k^2)}\right],$ (2.12)

    with

    $p_k = \dfrac{H_k^T s_{k-1}}{H_{k-1}^T s_{k-1}}, \qquad q_k = \dfrac{\|H_k\|^2 \|s_{k-1}\|^2}{(H_{k-1}^T s_{k-1})^2}.$ (2.13)

    Moreover, the eigenvalues are real and positive.

    Proof. The mapping value $H_k$ is nonzero unless the solution has been reached, and so is the vector $s_{k-1}$. Suppose the solution is not reached; then both $H_k$ and $s_{k-1}$ are nonzero vectors, and hence $\|H_k\| \|s_{k-1}\| > 0$. Let $V$ be the vector space spanned by $\{H_k, s_{k-1}\}$. The dimensions of $V$ and its orthogonal complement $V^{\perp}$ satisfy $\dim(V) \le 2$ and $\dim(V^{\perp}) \ge n-2$, respectively.

    Let $\{\iota_{k-1}^i\}_{i=1}^{n-2} \subset V^{\perp}$ be a set of mutually orthogonal vectors satisfying

    $H_k^T \iota_{k-1}^i = s_{k-1}^T \iota_{k-1}^i = 0, \quad \text{for } i = 1, \ldots, n-2.$ (2.14)

    Using (2.9) and (2.14), we obtain

    $A_k \iota_{k-1}^i = \iota_{k-1}^i.$ (2.15)

    The above is an eigenvector equation, and $\iota_{k-1}^i$, for $i = 1, 2, \ldots, n-2$, is an eigenvector with corresponding eigenvalue $1$, repeated $n-2$ times. Let the remaining eigenvalues be $\lambda_k^+$ and $\lambda_k^-$. Furthermore, (2.9) can also be written as

    $A_k = I + \dfrac{\xi\, s_{k-1} H_k^T}{H_{k-1}^T s_{k-1}} + \xi\left(\dfrac{b_k \|H_k\|^2 s_{k-1} + (H_{k-1}^T s_{k-1}) H_k}{(H_{k-1}^T s_{k-1})^2}\right) s_{k-1}^T.$ (2.16)

    It is clear that (2.16) is a rank-two update of the identity; therefore, from Theorem 1.2.16 of [36], the relation

    $\det(I + a_1 a_2^T + a_3 a_4^T) = (1 + a_1^T a_2)(1 + a_3^T a_4) - (a_1^T a_4)(a_2^T a_3),$

    with $a_1 = \dfrac{\xi s_{k-1}}{H_{k-1}^T s_{k-1}}$, $a_2 = H_k$, $a_3 = \dfrac{\xi\left(b_k \|H_k\|^2 s_{k-1} + (H_{k-1}^T s_{k-1}) H_k\right)}{(H_{k-1}^T s_{k-1})^2}$, and $a_4 = s_{k-1}$, can be used to obtain the determinant of $A_k$ as

    $\det(A_k) = \left(\dfrac{\xi (s_{k-1}^T H_k)}{H_{k-1}^T s_{k-1}} + 1\right)^2 + \dfrac{\xi b_k \|H_k\|^2 \|s_{k-1}\|^2}{(H_{k-1}^T s_{k-1})^2} - \dfrac{\xi^2 \|H_k\|^2 \|s_{k-1}\|^2}{(H_{k-1}^T s_{k-1})^2}.$ (2.17)

    Furthermore, it is known that the trace of (2.9) equals the sum of its eigenvalues; hence we have

    $\mathrm{trace}(A_k) = \underbrace{1 + 1 + \cdots + 1}_{(n-2)\ \text{times}} + \lambda_k^+ + \lambda_k^- = n + \dfrac{2\xi H_k^T s_{k-1}}{H_{k-1}^T s_{k-1}} + \dfrac{\xi b_k \|H_k\|^2 \|s_{k-1}\|^2}{(H_{k-1}^T s_{k-1})^2},$ (2.18)

    and consequently,

    $\lambda_k^+ + \lambda_k^- = 2 + \dfrac{2\xi H_k^T s_{k-1}}{H_{k-1}^T s_{k-1}} + \dfrac{\xi b_k \|H_k\|^2 \|s_{k-1}\|^2}{(H_{k-1}^T s_{k-1})^2}.$ (2.19)

    Then by (2.13), (2.17), and (2.19), we obtain

    $\lambda_k^2 - (2 + 2\xi p_k + \xi b_k q_k)\lambda_k + \left((\xi p_k + 1)^2 + \xi (b_k - \xi) q_k\right) = 0.$ (2.20)

    Then, using the quadratic formula, we obtain

    $\lambda_k^{\pm} = \frac{1}{2}\left[(2 + 2\xi p_k + \xi b_k q_k) \pm \sqrt{(\xi b_k q_k + 2\xi p_k)^2 + 4\xi^2 (q_k - p_k^2)}\right].$ (2.21)

    This gives (2.11) and (2.12). Next, we show that $\lambda_k^+$ and $\lambda_k^-$ are real and positive. The discriminant in (2.21) is nonnegative since $q_k \ge p_k^2$ by the Cauchy–Schwarz inequality, so both roots are real. For positivity, we need

    $2 + \dfrac{2\xi H_k^T s_{k-1}}{H_{k-1}^T s_{k-1}} + \dfrac{\xi b_k \|H_k\|^2 \|s_{k-1}\|^2}{(H_{k-1}^T s_{k-1})^2} > 0.$ (2.22)

    $\lambda_k^+$ is real and positive when

    $b_k > -\dfrac{2 (H_k^T s_{k-1})(H_{k-1}^T s_{k-1})}{\|H_k\|^2 \|s_{k-1}\|^2} - \dfrac{2 (H_{k-1}^T s_{k-1})^2}{\xi \|H_k\|^2 \|s_{k-1}\|^2}.$ (2.23)

    For $\lambda_k^-$ to be real and positive, we require

    $\lambda_k^- = \frac{1}{2}\left[(2 + 2\xi p_k + \xi b_k q_k) - \sqrt{(\xi b_k q_k + 2\xi p_k)^2 + 4\xi^2 (q_k - p_k^2)}\right] > 0.$ (2.24)

    After algebraic simplification, we obtain

    $b_k > \xi - \left(\dfrac{\sqrt{\xi}\,(H_k^T s_{k-1})}{\|H_k\| \|s_{k-1}\|} + \dfrac{H_{k-1}^T s_{k-1}}{\sqrt{\xi}\,\|H_k\| \|s_{k-1}\|}\right)^2.$ (2.25)

    Now, (2.25) suggests choosing $b_k$ as

    $b_k = \xi - \varphi\left(\dfrac{\sqrt{\xi}\,(H_k^T s_{k-1})}{\|H_k\| \|s_{k-1}\|} + \dfrac{H_{k-1}^T s_{k-1}}{\sqrt{\xi}\,\|H_k\| \|s_{k-1}\|}\right)^2,$ (2.26)

    with $\xi > 0$ and $\varphi \ge 0$.
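Theorem 2.1 can be probed numerically. The snippet below assembles $A_k$ from (2.9) for random stand-in vectors, with the scalar `c` playing the role of $H_{k-1}^T s_{k-1}$, and compares NumPy's eigenvalues against the closed forms of (2.11)–(2.13) as reconstructed in this sketch; the names and parameter values ($n = 6$, $\xi = 1$, $b_k = 2$) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, xi, b = 6, 1.0, 2.0
Hk = rng.standard_normal(n)   # stands in for H_k
s  = rng.standard_normal(n)   # stands in for s_{k-1}
c  = 1.5                      # stands in for the scalar H_{k-1}^T s_{k-1}

# A_k from (2.9)
A = (np.eye(n)
     + xi * (np.outer(s, Hk) + np.outer(Hk, s)) / c
     + xi * b * (Hk @ Hk) * np.outer(s, s) / c**2)

# p_k, q_k from (2.13) and the closed-form eigenvalues (2.11)-(2.12)
p = (Hk @ s) / c
q = (Hk @ Hk) * (s @ s) / c**2
disc = np.sqrt((xi*b*q + 2*xi*p)**2 + 4*xi**2*(q - p**2))
lam_plus  = 0.5 * ((2 + 2*xi*p + xi*b*q) + disc)
lam_minus = 0.5 * ((2 + 2*xi*p + xi*b*q) - disc)

eigs = np.sort(np.linalg.eigvalsh(A))
expected = np.sort(np.concatenate([np.ones(n - 2), [lam_minus, lam_plus]]))
```

Up to floating-point error, the spectrum consists of $1$ with multiplicity $n-2$ together with $\lambda_k^{\pm}$.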

    Here, we present the proposed direction.

    $d_k = \begin{cases} -H_k, & \text{if } k = 0, \\ -H_k - \dfrac{\xi \|H_k\|^2}{H_{k-1}^T s_{k-1}} s_{k-1} - \dfrac{\xi b_k \|H_k\|^2 (H_k^T s_{k-1})}{(H_{k-1}^T s_{k-1})^2} s_{k-1}, & \text{if } H_k^T s_{k-1} > 0 \text{ and } H_{k-1}^T s_{k-1} \ge r \|H_k\| \|s_{k-1}\|, \ r > 1, \\ -H_k + \dfrac{\|H_k\|^2}{\max\{H_{k-1}^T s_{k-1}, \ \gamma \|H_k\| \|s_{k-1}\|\}} s_{k-1}, & \gamma \in (0,1), \text{ otherwise.} \end{cases}$ (2.27)

    To improve its convergence properties, $b_k$ in (2.26) can be modified to

    $b_k = \xi - \varphi\left(\dfrac{\xi (H_k^T s_{k-1})}{U_k} + \dfrac{H_{k-1}^T s_{k-1}}{V_k}\right)^2, \qquad \xi > 0, \ \varphi \ge 0,$ (2.28)
    where $U_k = \max\{\|H_k\| \|s_{k-1}\|, \ \xi \|H_{k-1}\| \|s_{k-1}\|\}$ and $V_k = \max\{\|H_{k-1}\| \|s_{k-1}\|, \ \xi \|H_k\| \|s_{k-1}\|\}$.
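A direct transcription of the direction may help fix ideas. The helper below follows (2.27) with $b_k$ from (2.28) as reconstructed in this sketch; the function names and parameter values ($\xi$, $\varphi$, $r$, $\gamma$) are illustrative assumptions rather than the paper's choices.

```python
import numpy as np

def b_k(Hk, Hk_prev, s, xi=1.0, phi=1e-3):
    # b_k from (2.28), as reconstructed in this sketch.
    ns = np.linalg.norm(s)
    U = max(np.linalg.norm(Hk) * ns, xi * np.linalg.norm(Hk_prev) * ns)
    V = max(np.linalg.norm(Hk_prev) * ns, xi * np.linalg.norm(Hk) * ns)
    return xi - phi * (xi * (Hk @ s) / U + (Hk_prev @ s) / V) ** 2

def direction(Hk, Hk_prev=None, s=None, xi=1.0, phi=1e-3, r=2.0, gamma=0.7):
    """Search direction (2.27); the branch tests mirror the three cases."""
    if s is None:                       # k = 0
        return -Hk
    c = Hk_prev @ s                     # H_{k-1}^T s_{k-1}
    if Hk @ s > 0 and c >= r * np.linalg.norm(Hk) * np.linalg.norm(s):
        nH2 = Hk @ Hk
        return (-Hk - xi * nH2 / c * s
                - xi * b_k(Hk, Hk_prev, s, xi, phi) * nH2 * (Hk @ s) / c**2 * s)
    denom = max(c, gamma * np.linalg.norm(Hk) * np.linalg.norm(s))
    return -Hk + (Hk @ Hk) / denom * s
```

With $\varphi$ small, $b_k \ge \xi - \varphi(\xi+1)^2 > 0$, and the sufficient descent property $H_k^T d_k \le -\|H_k\|^2$ can be spot-checked case by case.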

    Before presenting the proposed algorithm, we define the projection mapping.

    Definition 2.1. Let $\psi \subseteq \mathbb{R}^n$ be a nonempty, convex, and closed set. For $x \in \mathbb{R}^n$, the projection of $x$ onto $\psi$ is given by

    $P_\psi(x) = \arg\min\{\|x - y\| : y \in \psi\}.$ (2.29)

    Equation (2.29) is called a projection mapping. The projection $P_\psi$ is nonexpansive, namely,

    $\|P_\psi(x) - P_\psi(y)\| \le \|x - y\|, \quad \forall x, y \in \mathbb{R}^n,$ (2.30)

    and consequently,

    $\|P_\psi(x) - y\| \le \|x - y\|, \quad \forall y \in \psi.$ (2.31)
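For $\psi = \mathbb{R}^n_+$, the feasible set used by most of the test problems later in the paper, the minimizer in (2.29) is simply a componentwise clamp; other sets (e.g., the simplex-type constraints of T2 and T6) would require their own projection routines. A minimal sketch, with the helper name being ours:

```python
import numpy as np

def proj_nonneg(x):
    """P_psi for psi = R^n_+ : a closed-form instance of the
    minimizer in (2.29)."""
    return np.maximum(x, 0.0)
```

The nonexpansiveness property (2.30) is easy to verify empirically for this projection.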

    Algorithm 1 (UMCD): an improved CD iterative scheme.
    Input: Given $x_0 \in \psi$, $d_0 = -H_0$, $\rho \in (0,1)$, $\sigma > 0$, $\epsilon = 10^{-6}$, and $\zeta > 0$; set $k = 0$.
    Step i: Compute $H_k$. If $\|H_k\| \le \epsilon$, stop; otherwise go to Step ii.
    Step ii: Let $\alpha_k = \zeta \rho^{m_k}$, where $m_k$ is the least nonnegative integer $m$ such that
    $-H(x_k + \alpha_k d_k)^T d_k \ge \sigma \alpha_k \|H(x_k + \alpha_k d_k)\| \|d_k\|^2.$ (2.32)
    Step iii: Set
    $z_k = x_k + \alpha_k d_k.$ (2.33)
    Step iv: If $z_k \in \psi$ and $\|H(z_k)\| \le \epsilon$, stop; else go to Step v.
    Step v: Compute $x_{k+1}$ by
    $x_{k+1} = P_\psi[x_k - \lambda_k H(z_k)],$ (2.34)
    where $\lambda_k = \dfrac{(x_k - z_k)^T H(z_k)}{\|H(z_k)\|^2}$.
    Step vi: Compute the new direction $d_{k+1}$ using (2.27) and (2.28).
    Step vii: Set $k = k + 1$ and return to Step i.
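Assembled end to end, the scheme fits in a short routine. The sketch below is a reconstruction under stated assumptions: `proj` stands in for $P_\psi$, the direction update follows (2.27)–(2.28) as reconstructed in this sketch, $\sigma$ and $\rho$ follow the experimental section, and the remaining parameter values are illustrative.

```python
import numpy as np

def umcd(H, x0, proj, xi=1.0, phi=1e-3, r=2.0, gamma=0.7,
         rho=0.9, sigma=1e-4, zeta=1.0, eps=1e-6, max_iter=500):
    """Steps i-vii of the UMCD scheme as reconstructed in this sketch."""
    nrm = np.linalg.norm
    x = np.asarray(x0, dtype=float)
    Hx = H(x)
    d = -Hx                                    # d_0 = -H_0
    for _ in range(max_iter):
        if nrm(Hx) <= eps:                     # Step i
            return x
        alpha = zeta                           # Step ii: backtracking (2.32)
        for _ in range(100):
            Hz = H(x + alpha * d)
            if -(Hz @ d) >= sigma * alpha * nrm(Hz) * nrm(d) ** 2:
                break
            alpha *= rho
        z = x + alpha * d                      # Step iii (2.33)
        Hz = H(z)
        if nrm(Hz) <= eps:                     # Step iv (z assumed near psi)
            return z
        lam = ((x - z) @ Hz) / (Hz @ Hz)       # Step v (2.34)
        x_new = proj(x - lam * Hz)
        s, H_prev = x_new - x, Hx
        x, Hx = x_new, H(x_new)
        if nrm(s) == 0.0:                      # safeguard: restart direction
            d = -Hx
            continue
        c = H_prev @ s                         # Step vi: (2.27) with (2.28)
        if Hx @ s > 0 and c >= r * nrm(Hx) * nrm(s):
            U = max(nrm(Hx) * nrm(s), xi * nrm(H_prev) * nrm(s))
            V = max(nrm(H_prev) * nrm(s), xi * nrm(Hx) * nrm(s))
            b = xi - phi * (xi * (Hx @ s) / U + c / V) ** 2
            d = (-Hx - xi * (Hx @ Hx) / c * s
                 - xi * b * (Hx @ Hx) * (Hx @ s) / c ** 2 * s)
        else:
            d = -Hx + (Hx @ Hx) / max(c, gamma * nrm(Hx) * nrm(s)) * s
    return x

# Example run: componentwise exponential system H(x) = e^x - 1 on psi = R^4_+.
sol = umcd(np.expm1, np.ones(4), lambda v: np.maximum(v, 0.0))
```

The example run solves a T5-style mapping whose unique zero is the origin; the projection keeps all iterates in the nonnegative orthant.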

    This section presents the convergence analysis of the proposed UMCD algorithm under the following assumptions.

    Assumption 3.1. The map H is Lipschitz continuous, i.e., there is a constant L>0 such that

    $\|H(x) - H(y)\| \le L \|x - y\| \quad \text{for any } x, y \in \mathbb{R}^n.$ (3.1)

    Assumption 3.2. $H$ is strongly monotone; i.e., there exists $c > 0$ such that

    $(x - y)^T (H(x) - H(y)) \ge c \|x - y\|^2 \quad \text{for all } x, y \in \mathbb{R}^n.$ (3.2)

    Assumption 3.3. The solution set $\bar{S}$ of the system (1.1) is nonempty.

    Lemma 3.1. Let $\xi > 0$, $b_k > 0$, and $\varphi \ge 0$. Then, for all $k \ge 0$, the search directions $\{d_k\}$ generated by (2.27) with (2.28) satisfy (1.6) with $\tau = 1$.

    Proof. From (2.27), it is clear that for $k = 0$, $H_0^T d_0 = -\|H_0\|^2$, which satisfies (1.6) with $\tau = 1$.

    Case Ⅰ. For $k \ge 1$, if $H_k^T s_{k-1} > 0$ and $H_{k-1}^T s_{k-1} \ge r \|H_k\| \|s_{k-1}\|$, then $d_k^T H_k = -\|H_k\|^2 - \dfrac{\xi \|H_k\|^2 (H_k^T s_{k-1})}{H_{k-1}^T s_{k-1}} - \dfrac{\xi b_k \|H_k\|^2 (H_k^T s_{k-1})^2}{(H_{k-1}^T s_{k-1})^2} \le -\|H_k\|^2.$

    Case Ⅱ. If $H_k^T s_{k-1} = 0$, then $d_k^T H_k = -\|H_k\|^2 + \dfrac{\|H_k\|^2 (H_k^T s_{k-1})}{\max\{H_{k-1}^T s_{k-1}, \ \gamma \|H_k\| \|s_{k-1}\|\}} = -\|H_k\|^2.$

    Case Ⅲ. If $H_k^T s_{k-1} < 0$, then $d_k^T H_k = -\|H_k\|^2 + \dfrac{\|H_k\|^2 (H_k^T s_{k-1})}{\max\{H_{k-1}^T s_{k-1}, \ \gamma \|H_k\| \|s_{k-1}\|\}} \le -\|H_k\|^2,$ since the second term is negative. Hence the required result follows. $\square$

    Lemma 3.2. Let the assumptions hold and let $\{x_k\}, \{z_k\}$ be the sequences generated by the UMCD algorithm. Then $\{x_k\}$ and $\{z_k\}$ are bounded, and

    $\|d_k\| \le M,$ (3.3)
    $\lim_{k \to \infty} \|x_k - z_k\| = 0,$ (3.4)
    $\lim_{k \to \infty} \|x_{k+1} - x_k\| = 0.$ (3.5)

    Proof. Let $\bar{x} \in \bar{S}$, where $\bar{S}$ is the solution set of the system (1.1); then, by the monotonicity of $H$, we obtain

    $(x_k - \bar{x})^T H(z_k) \ge (x_k - z_k)^T H(z_k).$ (3.6)

    Then, using the line search condition (2.32) and the definition of $z_k$, it follows that

    $(x_k - z_k)^T H(z_k) \ge \sigma \alpha_k^2 \|H(z_k)\| \|d_k\|^2 > 0.$ (3.7)

    Also, from (2.34), we have

    $\|x_{k+1} - \bar{x}\|^2 = \|P_\psi[x_k - \lambda_k H(z_k)] - \bar{x}\|^2 \le \|x_k - \lambda_k H(z_k) - \bar{x}\|^2 = \|x_k - \bar{x}\|^2 - 2\lambda_k (x_k - \bar{x})^T H(z_k) + \lambda_k^2 \|H(z_k)\|^2 \le \|x_k - \bar{x}\|^2 - 2\lambda_k (x_k - z_k)^T H(z_k) + \lambda_k^2 \|H(z_k)\|^2 = \|x_k - \bar{x}\|^2 - \dfrac{\left((x_k - z_k)^T H(z_k)\right)^2}{\|H(z_k)\|^2} \le \|x_k - \bar{x}\|^2.$ (3.8)

    Hence,

    $\|x_{k+1} - \bar{x}\| \le \|x_k - \bar{x}\|.$ (3.9)

    Repeating this, $\|x_k - \bar{x}\| \le \|x_0 - \bar{x}\|$ for all $k$. This means that $\{\|x_k - \bar{x}\|\}$ is decreasing and $\{x_k\}$ is bounded. Furthermore, by (3.1) and (3.9), we get

    $\|H_k\| = \|H_k - H(\bar{x})\| \le L \|x_k - \bar{x}\| \le L \|x_0 - \bar{x}\|.$ (3.10)

    Hence

    $\|H_k\| \le N_1 \quad \text{for } N_1 = L \|x_0 - \bar{x}\|.$ (3.11)

    Then from (2.28)

    $|b_k| = \left|\xi - \varphi\left(\dfrac{\xi (H_k^T s_{k-1})}{U_k} + \dfrac{H_{k-1}^T s_{k-1}}{V_k}\right)^2\right| \le |\xi| + |\varphi| \left(\dfrac{\xi \|H_k\| \|s_{k-1}\|}{U_k} + \dfrac{\|H_{k-1}\| \|s_{k-1}\|}{V_k}\right)^2 \le \xi + |\varphi| (\xi + 1)^2.$ (3.12)

    This implies

    $|b_k| \le \lambda \quad \text{for } \lambda = \xi + |\varphi| (\xi + 1)^2.$ (3.13)

    From (2.27) and using (2.28), (3.11), and (3.13), we have

    If $H_k^T s_{k-1} > 0$ and $H_{k-1}^T s_{k-1} \ge r \|H_k\| \|s_{k-1}\|$, then

    $\|d_k\| \le \|H_k\| + \dfrac{\xi \|H_k\|^2 \|s_{k-1}\|}{H_{k-1}^T s_{k-1}} + \dfrac{\xi |b_k| \|H_k\|^2 |H_k^T s_{k-1}| \|s_{k-1}\|}{(H_{k-1}^T s_{k-1})^2} \le \|H_k\| + \dfrac{\xi \|H_k\|}{r} + \dfrac{\xi \lambda \|H_k\|}{r^2} \le N_1 + \dfrac{\xi N_1}{r} + \dfrac{\xi \lambda N_1}{r^2}.$ (3.14)

    Let $M_1 = N_1 + \dfrac{\xi N_1}{r} + \dfrac{\xi \lambda N_1}{r^2}$.

    Otherwise,

    $\|d_k\| = \left\| -H_k + \dfrac{\|H_k\|^2}{\max\{H_{k-1}^T s_{k-1}, \ \gamma \|H_k\| \|s_{k-1}\|\}} s_{k-1} \right\| \le \|H_k\| + \dfrac{\|H_k\|^2 \|s_{k-1}\|}{\gamma \|H_k\| \|s_{k-1}\|} = \|H_k\| + \dfrac{\|H_k\|}{\gamma} \le N_1 + \dfrac{N_1}{\gamma}.$ (3.15)

    Let $M_2 = N_1 + \dfrac{N_1}{\gamma}$ and take $M = \max\{M_1, M_2\}$; then (3.3) of Lemma 3.2 is proved.

    Since the sequences $\{d_k\}$ and $\{x_k\}$ are bounded, Eq (2.33) shows that $\{z_k\}$ is bounded. Using the same argument as in (3.10), it follows that

    $\|H(z_k)\| \le N_2 \quad \text{for some constant } N_2 > 0.$ (3.16)

    Equation (3.8) shows that,

    $\left((x_k - z_k)^T H(z_k)\right)^2 \le \|H(z_k)\|^2 \left(\|x_k - \bar{x}\|^2 - \|x_{k+1} - \bar{x}\|^2\right).$ (3.17)

    From (2.32), we obtain

    $\sigma^2 \alpha_k^4 \|H(z_k)\|^2 \|d_k\|^4 \le \alpha_k^2 \left(H(z_k)^T d_k\right)^2.$ (3.18)

    Since $\alpha_k^2 (H(z_k)^T d_k)^2 = ((x_k - z_k)^T H(z_k))^2$, by (3.17) and (3.18) we have

    $\sigma^2 \alpha_k^4 \|H(z_k)\|^2 \|d_k\|^4 \le \|H(z_k)\|^2 \left(\|x_k - \bar{x}\|^2 - \|x_{k+1} - \bar{x}\|^2\right).$ (3.19)

    Since $\{\|x_k - \bar{x}\|\}$ converges, dividing (3.19) by $\|H(z_k)\|^2$ (which is nonzero before termination) and letting $k \to \infty$ gives

    $\sigma^2 \lim_{k \to \infty} \alpha_k^4 \|d_k\|^4 = 0,$ (3.20)

    and hence

    $\lim_{k \to \infty} \alpha_k \|d_k\| = 0.$ (3.21)

    Relations (3.21) and (2.33) show that (3.4) holds. Moreover, from the definition of $\lambda_k$ and the nonexpansiveness of $P_\psi$, we deduce that

    $\|x_{k+1} - x_k\| = \|P_\psi[x_k - \lambda_k H(z_k)] - x_k\| \le \|\lambda_k H(z_k)\| = \dfrac{|(x_k - z_k)^T H(z_k)|}{\|H(z_k)\|} \le \|x_k - z_k\|.$ (3.22)

    Together with (3.4), this proves (3.5). $\square$

    Theorem 3.1. Let Assumptions 3.1–3.3 hold and let the sequence $\{x_k\}$ be generated by the UMCD algorithm; then

    $\liminf_{k \to \infty} \|H_k\| = 0.$ (3.23)

    Proof. We argue by contradiction: suppose (3.23) is not true. Then there exists $\epsilon_0 > 0$ such that

    $\|H_k\| \ge \epsilon_0 \quad \text{for all } k \ge 0.$ (3.24)

    By (1.6) with $\tau = 1$, $\|H_k\|^2 \le -H_k^T d_k \le \|H_k\| \|d_k\|$, so there exists $\delta_1 > 0$ such that

    $\|d_k\| \ge \|H_k\| \ge \epsilon_0 =: \delta_1.$ (3.25)

    If $\alpha_k \ne \zeta$, then, by the construction of the line search, $\rho^{-1}\alpha_k$ does not satisfy (2.32); i.e.,

    $-H(x_k + \rho^{-1}\alpha_k d_k)^T d_k < \sigma \rho^{-1}\alpha_k \|H(x_k + \rho^{-1}\alpha_k d_k)\| \|d_k\|^2.$ (3.26)

    Using (3.26) and (1.6), it yields

    $\tau \|H_k\|^2 \le -H_k^T d_k = \left(H(x_k + \rho^{-1}\alpha_k d_k) - H_k\right)^T d_k - H(x_k + \rho^{-1}\alpha_k d_k)^T d_k \le L \rho^{-1}\alpha_k \|d_k\|^2 + \sigma \rho^{-1}\alpha_k \|H(x_k + \rho^{-1}\alpha_k d_k)\| \|d_k\|^2 \le (L + \sigma N_3) \rho^{-1} \alpha_k \|d_k\| \cdot \|d_k\|,$ (3.27)

    where $N_3 > 0$ bounds $\|H(x_k + \rho^{-1}\alpha_k d_k)\|$; such a bound exists because $\{x_k\}$, $\{d_k\}$, and $\{\alpha_k\}$ are bounded.

    And from (3.27) we obtain

    $\alpha_k \|d_k\| > \dfrac{\rho}{L + \sigma N_3} \cdot \dfrac{\tau \|H_k\|^2}{\|d_k\|} \ge \dfrac{\rho}{L + \sigma N_3} \cdot \dfrac{\tau \epsilon_0^2}{M}.$ (3.28)

    Equation (3.28) contradicts Eq (3.21). Hence, (3.23) is true. This completes the proof.

    This section is divided into two parts. In the first part, the numerical results of the proposed algorithm are presented. To measure the strength of the proposed algorithm (UMCD), the following algorithms were used for comparison:

    ● MDDYM (2021) algorithm proposed in [37].

    ● DTCG1 (2020) algorithm proposed in [6].

    ● ZANG (2023) algorithm proposed in [38].

    ● MCHCG (2024) algorithm proposed in [39].

    In the second part of this study, we extend the UMCD algorithm to explore its effectiveness in compressive sensing experiments. To rigorously assess the performance of our proposed algorithm, we conduct a signal recovery experiment in conjunction with the CHCG algorithm [2]. The implementation is executed using MATLAB version 8.3.0 (R2014a) on a laptop equipped with a 1.80 GHz processor (CPU) and 8 GB of RAM.

    For the UMCD algorithm, we selected the following key parameters: $\xi = 1$, $\sigma = \varphi = 10^{-4}$, and $\rho = \zeta = 0.9$. The parameters for the other algorithms are as given in their respective papers. This structured approach allows us to draw meaningful comparisons and insights regarding the performance of the proposed algorithm.

    The iterations of all five algorithms were configured to stop when either $\|H_k\| \le 10^{-6}$ or $\|H(z_k)\| \le 10^{-6}$ is reached, or when the number of iterations exceeds 2000 without meeting the stopping criterion. Moreover, the symbol '–' indicates a failure during the iterative process.

    The numerical experiments for the five algorithms were carried out using eight (8) different initial points and twelve (12) test problems (T1–T12).

    x1=(0.01,0.01,...,0.01)T, x2=(0.25,0.25,...,0.25)T, x3=(0.4,0.4,...,0.4)T, x4=(0.5,0.5,...,0.5)T, x5=(1.25,1.25,...,1.25)T, x6=(0.3,0.3,...,0.3)T, x7=(1,1,...,1)T, and x8=(0.1,0.1,...,0.1)T.
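For reproducibility, the eight starting vectors can be generated with a trivial helper (the function name is ours):

```python
import numpy as np

def initial_points(n):
    """The eight constant starting vectors x1-x8 used in the experiments."""
    vals = [0.01, 0.25, 0.4, 0.5, 1.25, 0.3, 1.0, 0.1]
    return [np.full(n, v) for v in vals]
```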

    T1 [6]

    $H_1(x) = e^{x_1} - 1,$

    $H_i(x) = e^{x_i} + x_{i-1} - 1, \quad i = 2, 3, \ldots, n,$

    where $\psi = \mathbb{R}^n_+$.

    T2 [5]

    $H_i(x) = \log(x_i + 1) - \dfrac{x_i}{n}, \quad i = 1, 2, \ldots, n,$

    where $\psi = \{x \in \mathbb{R}^n : \sum_{i=1}^n x_i \le n, \ x_i > -1, \ i = 1, 2, \ldots, n\}$.

    T3 [37]

    $H_i(x) = 2x_i - \sin|x_i|, \quad i = 1, 2, \ldots, n,$

    where $\psi = \mathbb{R}^n_+$.

    T4 [5]

    $H_i(x) = \cos x_i + x_i - 1, \quad i = 1, 2, \ldots, n,$

    where $\psi = \mathbb{R}^n_+$.

    T5 [6]

    $H_i(x) = e^{x_i} - 1, \quad i = 1, 2, \ldots, n,$

    where $\psi = \mathbb{R}^n_+$.

    T6 [37]

    $H_1(x) = 2x_1 - x_2 + e^{x_1} - 1,$

    $H_i(x) = -x_{i-1} + 2x_i - x_{i+1} + e^{x_i} - 1, \quad i = 2, 3, \ldots, n-1,$

    $H_n(x) = -x_{n-1} + 2x_n + e^{x_n} - 1,$

    where $\psi = \{x \in \mathbb{R}^n : \sum_{i=1}^n x_i \le n, \ x_i \ge 0, \ i = 1, 2, \ldots, n\}$.

    T7 [6]

    $H_1(x) = x_1 - e^{\cos(b(x_1 + x_2))},$

    $H_i(x) = x_i - e^{\cos(b(x_{i-1} + x_i + x_{i+1}))}, \quad i = 2, 3, \ldots, n-1,$

    $H_n(x) = x_n - e^{\cos(b(x_{n-1} + x_n))},$

    with $b = \dfrac{1}{n+1}$, where $\psi = \mathbb{R}^n_+$.

    T8 [37]

    $H_i(x) = x_i - \sin|x_i - 1|, \quad i = 1, 2, \ldots, n,$

    where $\psi = \{x \in \mathbb{R}^n : \sum_{i=1}^n x_i \le n, \ x_i > -1, \ i = 1, 2, \ldots, n\}$.

    T9 [40]

    $H_i(x) = e^{x_i^2} + \dfrac{3}{2}\sin(2x_i) - 1, \quad i = 1, 2, \ldots, n,$

    where $\psi = \mathbb{R}^n_+$.

    T10

    $H_1(x) = \cos x_1 - 9 + 3x_1 + 8e^{x_2},$

    $H_i(x) = \cos x_i - 9 + 3x_i + 8e^{x_{i-1}}, \quad i = 2, 3, \ldots, n,$

    where $\psi = \mathbb{R}^n_+$.

    T11 Modification of T1 in [41]

    $H_1(x) = e^{\sin x_1} - 1,$

    $H_i(x) = e^{\sin x_i} + x_{i-1} - 1, \quad i = 2, 3, \ldots, n,$

    where $\psi = \mathbb{R}^n_+$.

    T12 Modification of T3 in [41]

    $H_i(x) = 3x_i - \sin x_i, \quad i = 1, 2, \ldots, n,$

    where $\psi = \mathbb{R}^n_+$.
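Several of the test mappings are separable one-liners. The sketch below implements T3, T5, and T12 as reconstructed in this section, together with a spot check of the monotonicity condition (1.2); the helper names are ours.

```python
import numpy as np

# Three of the separable test mappings (each has its zero at the origin):
def T3(x):  return 2.0 * x - np.sin(np.abs(x))   # H_i = 2x_i - sin|x_i|
def T5(x):  return np.expm1(x)                   # H_i = e^{x_i} - 1
def T12(x): return 3.0 * x - np.sin(x)           # H_i = 3x_i - sin x_i

def is_monotone_pair(H, x, y):
    # Spot check of (1.2): (H(x) - H(y))^T (x - y) >= 0.
    return (H(x) - H(y)) @ (x - y) >= 0.0
```

Since each component function is strictly increasing in its own variable, these mappings are monotone, which the spot check confirms on random pairs.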

    Tables 1–5 display the performance of the proposed algorithm (UMCD) in comparison with recent CG algorithms, including MDDYM [37] and DTCG1 [6]. In the tables, the terms *IP*, *Iter*, and *Fval* refer to the initial point, the number of iterations, and the number of function evaluations, respectively. Additionally, *Time(s)* and $\|H_k\|$ represent the CPU time and the norm of the residual function, respectively.

    Table 1. Numerical results of the UMCD, MDDYM, and DTCG1 methods for problems 1 and 2.
    Problem | Dim | IP | UMCD (Iter, Fval, Time(s), ‖H_k‖) | MDDYM (Iter, Fval, Time(s), ‖H_k‖) | DTCG1 (Iter, Fval, Time(s), ‖H_k‖)
    P1 100 x1 15 16 0.03303 0.00E+00 15 47 0.05952 8.76E-07 1001 1050 0.29023 6.55E+04
    x2 4 5 0.00783 0.00E+00 12 37 0.01407 5.31E-07 1001 1116 0.32494 6.55E+04
    x3 4 5 0.01144 0.00E+00 14 44 0.01315 2.00E-07 37 181 0.05921 6.96E-07
    x4 2 3 0.00478 0.00E+00 14 46 0.02308 7.27E-07 18 133 0.04298 0.00E+00
    x5 4 5 0.01512 0.00E+00 16 51 0.01617 7.30E-07 29 321 0.04569 0.00E+00
    x6 4 5 0.01100 0.00E+00 13 43 0.01019 4.74E-10 29 318 0.06099 0.00E+00
    x7 4 5 0.01278 0.00E+00 11 33 0.02672 3.35E-08 1001 1028 0.29032 6.55E+04
    x8 13 14 0.02215 0.00E+00 18 58 0.01440 6.38E-07 8 46 0.01104 0.00E+00
    10,000 x1 4 5 0.05099 0.00E+00 14 43 0.12490 8.65E-07 103 1155 1.27990 0.00E+00
    x2 18 19 0.21214 0.00E+00 13 42 0.06798 1.72E-07 32 284 0.30602 0.00E+00
    x3 4 5 0.07857 0.00E+00 6 15 0.03300 4.90E-07 16 122 0.25066 0.00E+00
    x4 4 5 0.04852 0.00E+00 6 15 0.03077 1.65E-07 92 1015 1.71821 0.00E+00
    x5 9 10 0.11444 0.00E+00 11 33 0.06174 2.99E-07 25 45 0.15900 9.34E-07
    x6 5 6 0.06893 0.00E+00 10 28 0.05002 2.13E-07 92 1030 1.19462 0.00E+00
    x7 14 15 0.17102 0.00E+00 8 23 0.06200 8.54E-07 9 60 0.13710 0.00E+00
    x8 14 15 0.16226 0.00E+00 17 61 0.11029 9.75E-07 1001 1082 2.36323 6.55E+04
    100,000 x1 6 7 0.50356 0.00E+00 16 46 0.63699 4.68E-07 30 199 1.74338 0.00E+00
    x2 19 20 1.70708 0.00E+00 10 27 0.36805 1.28E-08 28 286 2.20160 0.00E+00
    x3 6 7 0.51893 0.00E+00 7 17 0.29252 1.13E-07 101 1163 8.55412 0.00E+00
    x4 4 5 0.34368 0.00E+00 6 15 0.26364 9.96E-09 26 43 0.63220 8.31E-07
    x5 11 12 0.97077 0.00E+00 10 31 0.41796 5.01E-07 29 72 0.86051 7.39E-07
    x6 11 12 1.01489 0.00E+00 8 21 0.30025 4.92E-08 27 68 0.82920 5.14E-07
    x7 18 19 1.65894 0.00E+00 8 23 0.41963 9.48E-08 29 71 0.84210 5.86E-07
    x8 15 16 1.34973 0.00E+00 11 34 0.42575 7.74E-07 9 58 0.52940 0.00E+00
    P2 100 x1 2 4 0.00584 9.83E-07 5 11 0.01489 8.95E-08 18 20 0.01423 6.20E-07
    x2 4 6 0.00556 3.31E-07 7 15 0.01311 1.74E-07 23 25 0.01436 6.10E-07
    x3 5 7 0.00835 4.29E-07 6 12 0.00810 9.99E-07 24 26 0.01319 5.57E-07
    x4 5 7 0.00640 9.76E-07 14 38 0.04578 9.56E-07 24 26 0.04415 7.58E-07
    x5 6 8 0.00936 1.29E-07 9 18 0.02229 1.10E-07 26 28 0.01594 8.44E-07
    x6 6 8 0.01050 9.05E-07 10 26 0.00969 8.72E-07 23 25 0.01364 7.66E-07
    x7 5 7 0.00931 7.25E-07 14 30 0.03262 7.75E-07 26 28 0.03901 5.62E-07
    x8 4 6 0.00662 9.39E-07 9 25 0.01071 5.40E-08 21 23 0.02760 8.49E-07
    10,000 x1 3 5 0.02577 2.11E-07 5 11 0.03185 2.71E-07 19 21 0.08529 9.65E-07
    x2 4 6 0.02683 9.71E-07 7 15 0.04860 5.14E-07 24 26 0.11387 9.45E-07
    x3 6 8 0.03915 2.67E-08 7 13 0.03646 8.67E-08 25 27 0.09513 8.62E-07
    x4 6 8 0.03951 1.19E-07 14 38 0.08905 1.68E-07 26 28 0.10267 5.86E-07
    x5 6 8 0.04087 3.76E-07 9 18 0.07042 4.65E-07 28 30 0.18724 6.52E-07
    x6 7 9 0.05484 2.58E-07 11 27 0.07024 7.88E-08 25 27 0.14486 5.93E-07
    x7 6 8 0.03782 1.59E-07 15 32 0.08824 3.92E-09 27 29 0.11938 8.68E-07
    x8 5 7 0.03146 1.62E-07 12 32 0.07698 5.56E-08 23 25 0.12050 6.59E-07
    100,000 x1 3 5 0.15491 6.43E-07 5 11 0.21010 8.52E-07 21 23 0.53615 7.62E-07
    x2 5 7 0.24336 1.95E-08 8 16 0.35311 2.72E-08 26 28 0.66623 7.46E-07
    x3 6 8 0.25928 8.34E-08 7 13 0.32370 2.70E-07 27 29 0.69745 6.80E-07
    x4 6 8 0.26963 3.73E-07 17 45 0.72980 5.08E-08 27 29 0.72102 9.25E-07
    x5 7 9 0.29765 3.60E-08 10 19 0.37575 3.93E-08 30 32 0.78366 5.14E-07
    x6 7 9 0.35906 8.12E-07 11 27 0.47475 2.47E-07 26 28 0.66078 9.36E-07
    x7 6 8 0.25959 4.99E-07 15 32 0.57791 2.94E-08 29 31 0.76046 6.85E-07
    x8 5 7 0.21840 5.11E-07 12 32 0.56030 1.73E-07 25 27 0.64154 5.20E-07

    Table 2. Numerical results of the UMCD, MDDYM, and DTCG1 methods for problems 3 and 4.
    Problem | Dim | IP | UMCD (Iter, Fval, Time(s), ‖H_k‖) | MDDYM (Iter, Fval, Time(s), ‖H_k‖) | DTCG1 (Iter, Fval, Time(s), ‖H_k‖)
    P3 100 x1 1 3 0.00405 5.21E-07 5 11 0.00649 7.68E-08 18 20 0.00967 6.03E-07
    x2 4 6 0.00493 6.00E-07 4 6 0.00522 5.22E-08 22 24 0.01377 9.29E-07
    x3 5 7 0.00535 8.96E-08 4 6 0.00741 1.27E-07 23 25 0.02699 7.28E-07
    x4 5 7 0.00573 2.23E-07 4 6 0.00531 2.71E-08 23 25 0.01227 8.92E-07
    x5 4 6 0.00651 2.15E-07 9 21 0.00871 3.02E-07 24 26 0.01291 8.51E-07
    x6 5 7 0.00544 1.86E-08 4 6 0.00515 9.44E-08 23 25 0.01735 5.54E-07
    x7 5 7 0.00836 4.54E-07 9 23 0.01367 4.88E-07 24 26 0.01546 7.63E-07
    x8 3 5 0.00551 8.29E-08 3 5 0.00485 3.80E-09 21 23 0.01426 7.52E-07
    10,000 x1 2 4 0.01347 1.65E-07 5 11 0.02473 2.43E-07 19 21 0.10561 9.54E-07
    x2 5 7 0.02757 1.93E-08 4 6 0.02067 1.65E-07 24 26 0.06887 7.35E-07
    x3 5 7 0.02884 2.83E-07 4 6 0.01880 4.02E-07 25 27 0.07398 5.75E-07
    x4 5 7 0.02524 7.06E-07 4 6 0.01674 8.56E-08 25 27 0.11716 7.05E-07
    x5 4 6 0.02281 6.79E-07 9 21 0.05192 9.55E-07 26 28 0.08110 6.73E-07
    x6 5 7 0.02662 5.89E-08 4 6 0.01662 2.99E-07 24 26 0.06970 8.76E-07
    x7 6 8 0.03420 8.57E-08 10 24 0.06219 5.62E-08 26 28 0.08495 6.03E-07
    x8 3 5 0.01816 2.62E-07 3 5 0.02704 1.20E-08 23 25 0.06769 5.95E-07
    100,000 x1 2 4 0.09912 5.21E-07 5 11 0.17670 7.68E-07 21 23 0.42251 7.54E-07
    x2 5 7 0.18389 6.11E-08 4 6 0.11846 5.22E-07 26 28 0.51949 5.81E-07
    x3 5 7 0.19643 8.96E-07 5 7 0.13611 3.34E-08 26 28 0.65490 9.10E-07
    x4 6 8 0.20836 1.35E-07 4 6 0.14656 2.71E-07 27 29 0.58802 5.58E-07
    x5 5 7 0.19077 5.24E-09 10 22 0.35069 9.62E-08 28 30 0.61661 5.32E-07
    x6 5 7 0.18573 1.86E-07 4 6 0.11910 9.44E-07 26 28 0.52935 6.93E-07
    x7 6 8 0.22130 2.71E-07 10 24 0.32169 1.78E-07 27 29 0.54962 9.53E-07
    x8 3 5 0.12949 8.29E-07 3 5 0.12552 3.80E-08 24 26 0.48600 9.40E-07
    P4 100 x1 2 4 0.00421 6.79E-07 5 11 0.01502 8.51E-08 18 20 0.01046 6.09E-07
    x2 6 8 0.00636 7.19E-07 10 26 0.01235 3.96E-07 23 25 0.01253 6.12E-07
    x3 5 7 0.00568 5.71E-07 8 16 0.01254 1.54E-07 24 26 0.01194 5.79E-07
    x4 5 7 0.00797 2.11E-08 14 32 0.01171 9.25E-08 24 26 0.01520 8.15E-07
    x5 5 7 0.00859 8.54E-07 13 29 0.01011 3.14E-08 27 29 0.01864 7.33E-07
    x6 5 7 0.00707 2.46E-07 6 12 0.00468 1.16E-07 23 25 0.01120 7.76E-07
    x7 6 8 0.00707 2.00E-08 13 30 0.01238 5.36E-08 26 28 0.01676 7.95E-07
    x8 4 6 0.00606 9.67E-07 10 26 0.01546 6.07E-08 21 23 0.01189 8.35E-07
    10,000 x1 3 5 0.01823 2.05E-07 5 11 0.02307 2.69E-07 19 21 0.05790 9.63E-07
    x2 7 9 0.04195 2.07E-07 11 27 0.05787 9.78E-09 24 26 0.07274 9.67E-07
    x3 6 8 0.03112 1.03E-07 8 16 0.08280 4.87E-07 25 27 0.07898 9.16E-07
    x4 5 7 0.02690 6.67E-08 14 32 0.06476 2.93E-07 26 28 0.07790 6.44E-07
    x5 6 8 0.03229 2.15E-07 13 29 0.07800 9.93E-08 29 31 0.08593 5.79E-07
    x6 5 7 0.02941 7.78E-07 6 12 0.02703 3.67E-07 25 27 0.07716 6.13E-07
    x7 6 8 0.03457 6.32E-08 13 30 0.08342 1.70E-07 28 30 0.08366 6.28E-07
    x8 5 7 0.02739 1.69E-07 10 26 0.09130 1.92E-07 23 25 0.07009 6.60E-07
    100,000 x1 3 5 0.13002 6.49E-07 5 11 0.17798 8.51E-07 21 23 0.46367 7.62E-07
    x2 7 9 0.28717 6.55E-07 11 27 0.40900 3.09E-08 26 28 0.52167 7.65E-07
    x3 6 8 0.22485 3.27E-07 9 17 0.27010 2.59E-08 27 29 0.55878 7.24E-07
    x4 5 7 0.19709 2.11E-07 14 32 0.43011 9.25E-07 28 30 0.57757 5.09E-07
    x5 6 8 0.23647 6.81E-07 13 29 0.43131 3.14E-07 30 32 0.62163 9.16E-07
    x6 6 8 0.21179 4.71E-08 7 13 0.22683 5.03E-08 26 28 0.55034 9.70E-07
    x7 6 8 0.22048 2.00E-07 13 30 0.46398 5.36E-07 29 31 0.59930 9.94E-07
    x8 5 7 0.19480 5.33E-07 10 26 0.38605 6.07E-07 25 27 0.51780 5.22E-07

    Table 3.  Numerical results of the UMCD, MDDYM & DTCG1 methods for Problems 5 and 6.
    Problem Dim IP UMCD MDDYM DTCG1
    Iter Fval Time(s) Hk Iter Fval Time(s) Hk Iter Fval Time(s) Hk
    P5 100 x1 3 5 0.00521 5.77E-08 3 5 0.00726 1.13E-08 18 20 0.00915 5.97E-07
    x2 4 6 0.00448 6.42E-07 5 9 0.01017 1.50E-08 22 24 0.00982 7.30E-07
    x3 4 6 0.00692 9.05E-09 8 18 0.01116 3.65E-08 22 24 0.01166 9.94E-07
    x4 5 7 0.00742 1.53E-07 21 62 0.01553 5.63E-07 23 25 0.01444 5.54E-07
    x5 5 7 0.00746 1.75E-07 9 22 0.02207 1.03E-08 18 20 0.01136 5.80E-07
    x6 5 7 0.00686 1.15E-07 7 15 0.00694 1.46E-08 22 24 0.00949 8.31E-07
    x7 5 7 0.00876 5.88E-07 10 21 0.00613 4.14E-08 22 24 0.01248 9.21E-07
    x8 5 7 0.00641 4.86E-08 2 4 0.00849 2.99E-08 21 23 0.01139 6.82E-07
    10,000 x1 3 5 0.02278 1.82E-07 3 5 0.01208 3.57E-08 19 21 0.05288 9.44E-07
    x2 5 7 0.02484 2.31E-08 5 9 0.03458 4.76E-08 24 26 0.06366 5.77E-07
    x3 4 6 0.01991 2.86E-08 8 18 0.03641 1.16E-07 24 26 0.06583 7.86E-07
    x4 5 7 0.03262 4.84E-07 22 65 0.09893 9.08E-07 24 26 0.08210 8.76E-07
    x5 5 7 0.03566 5.55E-07 9 22 0.04446 3.27E-08 19 21 0.09265 9.17E-07
    x6 5 7 0.02447 3.64E-07 7 15 0.02887 4.62E-08 24 26 0.08414 6.57E-07
    x7 6 8 0.04114 4.21E-08 10 21 0.03511 1.31E-07 24 26 0.06352 7.28E-07
    x8 5 7 0.02399 1.54E-07 2 4 0.02380 9.45E-08 23 25 0.06055 5.39E-07
    100,000 x1 3 5 0.10503 5.77E-07 3 5 0.09026 1.13E-07 21 23 0.41393 7.46E-07
    x2 5 7 0.16283 7.31E-08 5 9 0.12978 1.50E-07 25 27 0.42275 9.13E-07
    x3 4 6 0.14198 9.05E-08 8 18 0.24298 3.65E-07 26 28 0.47482 6.21E-07
    x4 6 8 0.18711 9.98E-08 23 67 0.68963 2.67E-08 26 28 0.41348 6.93E-07
    x5 6 8 0.21443 1.30E-07 9 22 0.42998 1.03E-07 21 23 0.34320 7.25E-07
    x6 6 8 0.17874 7.04E-08 7 15 0.25166 1.46E-07 26 28 0.42702 5.20E-07
    x7 6 8 0.23317 1.33E-07 10 21 0.23967 4.14E-07 26 28 0.47965 5.76E-07
    x8 5 7 0.15034 4.86E-07 2 4 0.05786 2.99E-07 24 26 0.42801 8.52E-07
    P6 100 x1 3 4 0.00839 0.00E+00 46 338 0.08394 0.00E+00 5 17 0.01222 0.00E+00
    x2 3 4 0.00855 0.00E+00 1000 1000 10.00000 NAN 4 27 0.01917 0.00E+00
    x3 3 4 0.00985 0.00E+00 1000 1000 1.00000 NAN 4 28 0.00795 0.00E+00
    x4 3 4 0.01189 0.00E+00 152 980 0.17760 9.03E-07 4 30 0.02202 0.00E+00
    x5 2 3 0.00952 0.00E+00 7 56 0.01307 0.00E+00 2 15 0.00794 0.00E+00
    x6 3 4 0.00860 0.00E+00 34 239 0.07584 0.00E+00 4 27 0.01482 0.00E+00
    x7 2 3 0.00995 0.00E+00 140 908 0.20564 9.46E-07 3 26 0.01125 0.00E+00
    x8 3 4 0.01060 0.00E+00 33 245 0.06110 0.00E+00 4 27 0.01564 0.00E+00
    10,000 x1 3 4 0.05055 0.00E+00 10 73 0.14858 0.00E+00 4 27 0.08237 0.00E+00
    x2 3 4 0.05251 0.00E+00 8 57 0.11160 0.00E+00 3 15 0.09650 0.00E+00
    x3 3 4 0.05212 0.00E+00 9 65 0.14487 0.00E+00 3 15 0.03633 0.00E+00
    x4 2 3 0.03333 0.00E+00 11 76 0.13786 0.00E+00 4 27 0.06110 0.00E+00
    x5 2 3 0.04027 0.00E+00 4 35 0.07081 0.00E+00 2 14 0.04709 0.00E+00
    x6 2 3 0.03202 0.00E+00 8 62 0.14194 0.00E+00 3 15 0.07708 0.00E+00
    x7 2 3 0.04137 0.00E+00 8 66 0.13983 0.00E+00 2 14 0.02688 0.00E+00
    x8 3 4 0.05385 0.00E+00 14 94 0.18096 0.00E+00 3 15 0.05561 0.00E+00
    100,000 x1 3 4 0.42789 0.00E+00 11 76 1.30552 0.00E+00 3 15 0.34265 0.00E+00
    x2 3 4 0.43296 0.00E+00 5 39 0.65111 0.00E+00 3 15 0.27402 0.00E+00
    x3 2 3 0.24661 0.00E+00 4 29 0.51169 0.00E+00 3 15 0.27954 0.00E+00
    x4 2 3 0.25837 0.00E+00 10 80 1.24010 0.00E+00 3 15 0.26327 0.00E+00
    x5 2 3 0.45543 0.00E+00 3 24 0.36586 0.00E+00 2 14 0.23537 0.00E+00
    x6 2 3 0.30170 0.00E+00 4 29 0.44399 0.00E+00 3 15 0.31806 0.00E+00
    x7 2 3 0.35130 0.00E+00 6 48 0.82400 0.00E+00 2 14 0.22764 0.00E+00
    x8 3 4 0.42526 0.00E+00 9 73 1.14171 0.00E+00 3 15 0.26653 0.00E+00

    Table 4.  Numerical results of the UMCD, MDDYM & DTCG1 methods for Problems 7 and 8.
    Problem Dim IP UMCD MDDYM DTCG1
    Iter Fval Time(s) Hk Iter Fval Time(s) Hk Iter Fval Time(s) Hk
    P7 100 x1 2 3 0.00983 0.00E+00 10 40 0.01654 9.18E-07 3 21 0.01528 9.21E-08
    x2 4 5 0.01344 0.00E+00 8 31 0.01130 6.62E-07 4 25 0.01546 2.26E-07
    x3 3 4 0.00996 0.00E+00 10 37 0.02569 8.05E-08 4 24 0.00975 4.24E-07
    x4 2 3 0.01092 0.00E+00 9 32 0.01675 8.46E-07 4 23 0.00892 1.88E-07
    x5 3 4 0.01234 0.00E+00 12 39 0.01611 4.17E-07 5 26 0.00956 8.28E-08
    x6 2 3 0.01022 0.00E+00 8 30 0.01357 8.53E-07 4 24 0.01195 1.77E-08
    x7 2 3 0.01010 0.00E+00 11 38 0.02002 1.56E-07 4 21 0.01527 8.27E-07
    x8 2 3 0.00768 0.00E+00 20 84 0.02336 7.22E-07 4 26 0.01213 6.94E-08
    10,000 x1 2 3 0.06824 0.00E+00 11 43 0.14888 1.47E-07 3 21 0.12112 2.72E-07
    x2 4 5 0.10523 0.00E+00 9 34 0.15687 4.27E-07 4 25 0.05985 6.66E-07
    x3 3 4 0.07267 0.00E+00 10 37 0.10337 2.21E-07 5 29 0.07142 1.98E-08
    x4 2 3 0.04905 0.00E+00 11 40 0.10614 4.79E-07 4 23 0.09871 5.47E-07
    x5 3 4 0.07045 0.00E+00 13 42 0.12128 9.73E-08 5 26 0.06463 2.39E-07
    x6 2 3 0.05264 0.00E+00 6 23 0.09211 9.73E-07 4 24 0.07148 4.95E-08
    x7 2 3 0.04642 0.00E+00 11 38 0.13869 5.50E-07 5 26 0.06866 3.71E-08
    x8 2 3 0.06676 0.00E+00 19 79 0.18702 1.47E-07 4 26 0.10791 2.04E-07
    100,000 x1 2 3 0.40501 0.00E+00 11 43 0.90189 4.68E-07 3 21 0.36597 8.55E-07
    x2 4 5 0.81463 0.00E+00 10 38 0.82053 6.24E-07 5 30 0.53294 3.31E-08
    x3 3 4 0.58979 0.00E+00 10 37 0.74230 6.87E-07 5 29 0.59631 6.21E-08
    x4 2 3 0.40452 0.00E+00 12 43 0.94642 4.38E-08 5 28 0.47195 2.72E-08
    x5 3 4 0.54303 0.00E+00 13 42 0.94394 3.09E-07 5 26 0.46243 7.49E-07
    x6 2 3 0.37559 0.00E+00 12 43 0.93140 1.86E-07 4 24 0.53569 1.55E-07
    x7 2 3 0.36851 0.00E+00 12 41 0.94784 6.84E-07 5 26 0.48445 1.16E-07
    x8 2 3 0.36777 0.00E+00 19 79 1.60547 5.84E-07 4 26 0.44721 6.40E-07
    P8 100 x1 5 7 0.01136 9.18E-10 9 25 0.01476 3.16E-07 6 8 0.02509 3.03E-07
    x2 5 7 0.01256 2.65E-08 8 21 0.02121 2.11E-07 6 8 0.00575 9.77E-08
    x3 5 7 0.01537 1.47E-07 7 19 0.01729 7.34E-07 5 7 0.00825 4.28E-07
    x4 3 5 0.00977 2.96E-07 6 17 0.01466 4.67E-07 4 6 0.00492 6.75E-07
    x5 5 7 0.01473 2.63E-07 15 48 0.02332 4.57E-07 6 8 0.00507 5.95E-07
    x6 4 6 0.01137 9.69E-07 8 21 0.01054 1.83E-07 6 8 0.00733 7.04E-08
    x7 5 7 0.01011 2.87E-07 6 15 0.00987 3.03E-07 5 7 0.00516 6.75E-07
    x8 5 7 0.01265 1.20E-07 7 19 0.00618 7.72E-08 6 8 0.00804 2.09E-07
    10,000 x1 5 7 0.06854 2.90E-09 9 25 0.08002 9.99E-07 6 8 0.02902 9.58E-07
    x2 5 7 0.06297 8.37E-08 8 21 0.04202 6.66E-07 6 8 0.02848 3.09E-07
    x3 5 7 0.07992 4.64E-07 8 21 0.03989 2.11E-07 6 8 0.03310 8.64E-08
    x4 3 5 0.04594 9.38E-07 7 19 0.04275 2.07E-07 5 7 0.02787 1.36E-07
    x5 5 7 0.06013 8.32E-07 16 50 0.11420 2.29E-07 7 9 0.03078 1.20E-07
    x6 5 7 0.06760 2.76E-07 8 21 0.04725 5.79E-07 6 8 0.02689 2.22E-07
    x7 5 7 0.06589 9.06E-07 6 15 0.03220 9.58E-07 6 8 0.04191 1.36E-07
    x8 5 7 0.06528 3.80E-07 7 19 0.04509 2.44E-07 6 8 0.02953 6.62E-07
    100,000 x1 5 7 0.47364 9.18E-09 10 27 0.45492 1.76E-07 7 9 0.22843 1.93E-07
    x2 5 7 0.46010 2.65E-07 9 23 0.40843 1.38E-07 6 8 0.27425 9.77E-07
    x3 6 8 0.58311 8.89E-08 8 21 0.35741 6.68E-07 6 8 0.23392 2.73E-07
    x4 4 6 0.38535 5.46E-08 7 19 0.31560 6.55E-07 5 7 0.18196 4.31E-07
    x5 6 8 0.52697 1.25E-07 16 50 0.76155 7.23E-07 7 9 0.27878 3.80E-07
    x6 5 7 0.48573 8.73E-07 9 23 0.38114 1.55E-07 6 8 0.20102 7.04E-07
    x7 6 8 0.57955 7.26E-08 7 17 0.30548 7.91E-09 6 8 0.22101 4.31E-07
    x8 6 8 0.53726 4.92E-08 7 19 0.27165 7.72E-07 7 9 0.22957 1.34E-07

    Table 5.  Numerical results of the UMCD, MDDYM & DTCG1 methods for Problems 9 and 10.
    Problem Dim IP UMCD MDDYM DTCG1
    Iter Fval Time(s) Hk Iter Fval Time(s) Hk Iter Fval Time(s) Hk
    P9 100 x1 1 2 0.00576 0.00E+00 3 13 0.01774 2.29E-08 4 51 0.02559 4.25E-08
    x2 1 2 0.00585 0.00E+00 11 55 0.01757 9.95E-07 5 62 0.01398 4.79E-08
    x3 1 2 0.00608 0.00E+00 2 6 0.01028 0.00E+00 5 63 0.02335 9.88E-08
    x4 1 2 0.00350 0.00E+00 1 2 0.00265 0.00E+00 5 63 0.01456 9.45E-08
    x5 1 2 0.00404 0.00E+00 9 42 0.02898 6.14E-07 1 6 0.00465 0.00E+00
    x6 1 2 0.00617 0.00E+00 6 28 0.00710 1.17E-07 5 62 0.01665 3.89E-08
    x7 1 2 0.00327 0.00E+00 1 3 0.00285 0.00E+00 1 3 0.00293 0.00E+00
    x8 1 2 0.00471 0.00E+00 2 10 0.00989 1.96E-07 5 62 0.02565 4.05E-08
    10,000 x1 1 2 0.01833 0.00E+00 3 13 0.03811 7.25E-08 4 51 0.06791 1.34E-07
    x2 1 2 0.02078 0.00E+00 12 58 0.10703 4.37E-08 5 62 0.13277 1.52E-07
    x3 1 2 0.02853 0.00E+00 2 6 0.01617 0.00E+00 5 63 0.08501 3.13E-07
    x4 1 2 0.00851 0.00E+00 1 2 0.01422 0.00E+00 5 63 0.16091 2.99E-07
    x5 1 2 0.01172 0.00E+00 10 45 0.12989 3.06E-08 1 6 0.01353 0.00E+00
    x6 1 2 0.02217 0.00E+00 6 28 0.06078 3.69E-07 5 62 0.08562 1.23E-07
    x7 1 2 0.01247 0.00E+00 1 3 0.01702 0.00E+00 1 3 0.01811 0.00E+00
    x8 1 2 0.02245 0.00E+00 2 10 0.01802 6.18E-07 5 62 0.15127 1.28E-07
    100,000 x1 1 2 0.13106 0.00E+00 3 13 0.16577 2.29E-07 4 51 0.44893 4.25E-07
    x2 1 2 0.15448 0.00E+00 12 58 0.67073 1.38E-07 5 62 0.54013 4.79E-07
    x3 1 2 0.15167 0.00E+00 2 6 0.11161 0.00E+00 5 63 0.54475 9.88E-07
    x4 1 2 0.04368 0.00E+00 1 2 0.04261 0.00E+00 5 63 0.66749 9.45E-07
    x5 1 2 0.07560 0.00E+00 10 45 0.57314 9.68E-08 1 6 0.14915 0.00E+00
    x6 1 2 0.16841 0.00E+00 7 31 0.37625 3.90E-09 5 62 0.57907 3.89E-07
    x7 1 2 0.07196 0.00E+00 1 3 0.06191 0.00E+00 1 3 0.08504 0.00E+00
    x8 1 2 0.13712 0.00E+00 3 13 0.21464 7.38E-08 5 62 0.57875 4.05E-07
    P10 100 x1 1 2 0.00755 0.00E+00 3 17 0.00671 3.90E-08 1 13 0.00857 0.00E+00
    x2 1 2 0.00509 0.00E+00 13 77 0.01799 3.98E-08 1 13 0.00462 0.00E+00
    x3 1 2 0.00464 0.00E+00 5 27 0.01040 4.93E-07 1 13 0.00612 0.00E+00
    x4 1 2 0.00671 0.00E+00 14 83 0.05777 1.15E-08 1 13 0.00899 0.00E+00
    x5 1 2 0.00667 0.00E+00 12 66 0.03522 2.49E-07 1 13 0.01943 0.00E+00
    x6 1 2 0.00667 0.00E+00 9 51 0.01460 3.71E-08 1 13 0.01309 0.00E+00
    x7 1 2 0.00474 0.00E+00 13 71 0.04436 2.83E-07 1 13 0.00668 0.00E+00
    x8 1 2 0.00468 0.00E+00 4 21 0.00680 1.48E-07 1 13 0.00825 0.00E+00
    10,000 x1 1 2 0.02513 0.00E+00 3 17 0.03592 1.23E-07 1 13 0.04248 0.00E+00
    x2 1 2 0.02792 0.00E+00 13 77 0.20699 1.26E-07 1 13 0.02220 0.00E+00
    x3 1 2 0.02721 0.00E+00 6 31 0.06834 1.87E-08 1 13 0.04022 0.00E+00
    x4 1 2 0.02680 0.00E+00 14 83 0.15190 3.63E-08 1 13 0.04513 0.00E+00
    x5 1 2 0.02727 0.00E+00 12 66 0.12581 7.88E-07 1 13 0.02469 0.00E+00
    x6 1 2 0.03042 0.00E+00 9 51 0.09754 1.17E-07 1 13 0.04831 0.00E+00
    x7 1 2 0.02655 0.00E+00 13 71 0.13567 8.94E-07 1 13 0.04894 0.00E+00
    x8 1 2 0.02428 0.00E+00 4 21 0.04477 4.67E-07 1 13 0.05580 0.00E+00
    100,000 x1 1 2 0.17512 0.00E+00 3 17 0.27594 3.90E-07 1 13 0.16893 0.00E+00
    x2 1 2 0.19268 0.00E+00 13 77 1.08727 3.98E-07 1 13 0.19849 0.00E+00
    x3 1 2 0.21737 0.00E+00 6 31 0.53243 5.92E-08 1 13 0.21012 0.00E+00
    x4 1 2 0.20273 0.00E+00 14 83 1.17916 1.15E-07 1 13 0.21843 0.00E+00
    x5 1 2 0.19913 0.00E+00 13 70 1.19117 2.46E-08 1 13 0.19528 0.00E+00
    x6 1 2 0.19263 0.00E+00 9 51 0.74924 3.71E-07 1 13 0.18907 0.00E+00
    x7 1 2 0.19943 0.00E+00 14 75 1.10069 1.27E-07 1 13 0.21608 0.00E+00
    x8 1 2 0.18627 0.00E+00 5 25 0.37949 4.78E-08 1 13 0.16808 0.00E+00


    To ensure fairness, the algorithms were run with the same initial starting points and dimensions. As observed from the tables, the UMCD algorithm solves all of the test functions with the fewest iterations, the least CPU time, and the fewest function evaluations compared with the DTCG1 and MDDYM algorithms. Based on these results, the UMCD algorithm outperformed both MDDYM and DTCG1.

    Table 6 presents the numerical comparison of the proposed algorithm with the existing MCHCG and ZANG algorithms on Problems 11 and 12. The table shows a significant performance advantage of the proposed algorithm over the MCHCG and ZANG algorithms. In view of the above, the proposed method is considered the optimal choice owing to its superior number of iterations and overall CPU-time efficiency.

    Table 6.  Numerical results of the UMCD, MCHCG & ZANG methods for Problems 11 and 12.
    Problem Dim IP UMCD MCHCG ZANG
    Iter Fval Time(s) Hk Iter Fval Time(s) Hk Iter Fval Time(s) Hk
    11 1000 x1 15 16 0.03816 0.00E+00 6 7 0.02527 1.50E-07 10 12 0.02170 6.42E-07
    x2 9 10 0.03703 0.00E+00 6 8 0.02020 3.29E-07 13 14 0.00902 7.17E-07
    x3 4 5 0.00993 0.00E+00 6 8 0.02159 1.43E-07 13 14 0.01132 9.79E-07
    x4 4 5 0.01202 0.00E+00 6 8 0.02249 6.13E-07 13 15 0.01018 4.40E-07
    x5 4 5 0.01421 0.00E+00 6 8 0.02213 2.25E-07 13 15 0.01131 8.72E-07
    x6 4 5 0.01246 0.00E+00 6 8 0.02175 2.78E-07 13 14 0.01177 8.16E-07
    x7 2 3 0.00902 0.00E+00 6 8 0.02330 7.48E-07 13 15 0.00834 6.47E-07
    x8 4 5 0.01425 0.00E+00 6 8 0.02475 1.15E-07 12 14 0.01199 4.69E-07
    10,000 x1 4 5 0.06147 0.00E+00 6 7 0.11607 2.81E-07 11 13 0.04875 5.18E-07
    x2 14 15 0.18633 0.00E+00 6 8 0.13784 9.77E-07 13 15 0.05500 7.83E-07
    x3 9 10 0.11854 0.00E+00 6 8 0.13576 3.88E-07 14 15 0.06108 7.60E-07
    x4 4 5 0.07767 0.00E+00 8 9 0.15722 1.02E-07 14 15 0.05532 8.69E-07
    x5 4 5 0.05527 0.00E+00 8 9 0.14186 1.25E-07 14 16 0.05873 7.08E-07
    x6 13 14 0.17925 0.00E+00 6 8 0.12612 8.07E-07 13 15 0.05752 8.97E-07
    x7 2 3 0.03364 0.00E+00 7 9 0.15516 3.37E-07 14 16 0.05924 5.33E-07
    x8 6 7 0.09767 0.00E+00 6 8 0.14821 9.00E-08 13 14 0.05091 9.12E-07
    100,000 x1 6 7 0.60125 0.00E+00 6 7 0.79950 7.90E-07 12 14 0.33064 4.44E-07
    x2 6 7 0.56274 0.00E+00 8 9 1.16167 2.30E-07 14 16 0.37496 6.69E-07
    x3 18 19 1.77732 0.00E+00 7 9 1.10606 9.02E-08 14 16 0.38147 9.34E-07
    x4 15 16 1.46303 0.00E+00 5 7 0.83329 1.90E-07 15 16 0.39235 7.43E-07
    x5 8 9 0.74268 9.80E-07 7 9 1.14606 3.13E-07 15 17 0.40514 6.08E-07
    x6 6 7 0.61239 0.00E+00 8 9 1.13741 1.89E-07 14 16 0.38782 7.66E-07
    x7 2 3 0.24444 0.00E+00 9 10 1.40988 2.37E-07 15 17 0.41061 4.59E-07
    x8 5 6 0.40187 0.00E+00 6 8 0.96455 1.73E-07 14 15 0.36014 7.78E-07
    12 1000 x1 3 5 0.00895 1.51E-07 5 7 0.01557 9.09E-08 10 12 0.00909 5.96E-07
    x2 5 7 0.02239 7.73E-07 6 8 0.01930 1.37E-07 13 14 0.01035 7.64E-07
    x3 6 8 0.01628 1.05E-07 6 8 0.01651 1.46E-07 13 15 0.01111 4.72E-07
    x4 5 7 0.02212 6.27E-07 6 8 0.02404 1.15E-07 13 15 0.01170 5.77E-07
    x5 6 8 0.02581 2.08E-07 6 8 0.01971 9.05E-07 14 15 0.01159 7.14E-07
    x6 5 7 0.03400 4.33E-07 6 8 0.01731 1.48E-07 13 14 0.01135 9.11E-07
    x7 5 7 0.01919 9.13E-11 6 8 0.01682 8.10E-07 13 15 0.01143 9.57E-07
    x8 4 6 0.00910 5.58E-08 5 7 0.01524 8.43E-07 12 14 0.01086 4.46E-07
    10,000 x1 3 5 0.04222 4.77E-07 5 7 0.10146 2.87E-07 11 13 0.04298 5.16E-07
    x2 6 8 0.07704 1.33E-07 6 8 0.11393 4.33E-07 13 15 0.05166 9.55E-07
    x3 6 8 0.08050 3.31E-07 6 8 0.11742 4.61E-07 14 16 0.05176 4.09E-07
    x4 6 8 0.07692 1.53E-07 6 8 0.11293 3.62E-07 14 16 0.05321 5.00E-07
    x5 6 8 0.08210 6.59E-07 7 9 0.13174 2.17E-07 14 16 0.05245 8.92E-07
    x6 6 8 0.07382 5.77E-08 6 8 0.12074 4.66E-07 14 15 0.04957 7.89E-07
    x7 5 7 0.06370 2.89E-10 7 9 0.13564 1.93E-07 14 16 0.05402 8.30E-07
    x8 4 6 0.07024 1.76E-07 6 8 0.11668 2.09E-07 13 14 0.04749 9.79E-07
    100,000 x1 4 6 0.37268 4.97E-08 5 7 0.77854 8.72E-07 12 14 0.30186 4.47E-07
    x2 6 8 0.57496 4.20E-07 7 9 1.01415 1.06E-07 14 16 0.34227 8.27E-07
    x3 7 9 0.66245 3.98E-08 7 9 1.02851 1.11E-07 15 16 0.35078 8.97E-07
    x4 6 8 0.57558 4.83E-07 7 9 1.02459 8.73E-08 15 17 0.36453 4.33E-07
    x5 7 9 0.66371 6.60E-08 7 9 1.03310 6.82E-07 15 17 0.36946 7.73E-07
    x6 6 8 0.55160 1.83E-07 7 9 1.00286 1.13E-07 14 16 0.35139 9.86E-07
    x7 5 7 0.48683 9.13E-10 7 9 1.00712 6.10E-07 15 17 0.36532 7.19E-07
    x8 4 6 0.38062 5.58E-07 6 8 0.85922 6.62E-07 14 15 0.32884 8.48E-07


    Furthermore, using the Dolan and Moré performance profile [42], Figures 1–3 were plotted using the data from Tables 1–5. Each figure illustrates the percentage P(τ) of problems on which a given method performs within a factor τ of the best method. Figure 1 compares the performance of the proposed algorithm (UMCD) with that of the MDDYM and DTCG1 algorithms based on the number of iterations. Figures 2 and 3 compare the three algorithms based on CPU time and function evaluations, respectively. The graphs clearly show that the proposed algorithm (UMCD) outperforms the other two algorithms across all three performance metrics. Moreover, Figures 4–6 were plotted using the data from Table 6, and all of them demonstrate the efficiency of the proposed algorithm over the compared algorithms. While the performance of the MIDSL algorithm is acceptable, the UMCD algorithm is the best choice in terms of overall performance.
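    The profile P(τ) is straightforward to compute from the raw data in the tables. The sketch below (illustrative only, not the code behind Figures 1–6) builds a Dolan–Moré profile from a matrix of per-problem costs; an entry of np.inf would mark a solver failure on that problem:

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile.

    T: (n_problems, n_solvers) array of a cost metric (iterations,
       CPU time, or function evaluations); np.inf marks a failure.
    Returns P with P[i, s] = fraction of problems that solver s
    solves within a factor taus[i] of the best solver.
    """
    T = np.asarray(T, dtype=float)
    best = T.min(axis=1, keepdims=True)   # best cost on each problem
    ratios = T / best                     # performance ratios r_{p,s} >= 1
    # P(tau) = |{p : r_{p,s} <= tau}| / n_problems, for each solver s
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# Hypothetical iteration counts: 3 problems x 2 solvers
T = [[5, 10],
     [4, 4],
     [8, 2]]
P = performance_profile(T, taus=[1.0, 2.0, 4.0])
```

The curve for each solver starts at its fraction of wins (τ = 1) and rises toward its overall success rate as τ grows, which is exactly what Figures 1–6 display.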

    Figure 1.  Performance profile for the number of iterations.
    Figure 2.  Performance profile for the function evaluations.
    Figure 3.  Performance profile for the CPU Time (in seconds).
    Figure 4.  Performance profile of Problems 11 & 12 for the iter.
    Figure 5.  Performance profile of Problems 11 & 12 for the Fval.
    Figure 6.  Performance profile of Problems 11 & 12 for the Time(s).

    Among the applications of the proposed algorithm is signal and image recovery in compressive sensing (CS). Compressed sensing, or compressive sensing, is concerned with the reconstruction of sparse signals from incoherent measurements. It rests on the observation that many signals arising in real applications have a sparse representation in a suitable transform domain: a few coefficients carry most of the information, while the remaining ones are insignificant or zero. CS replaces the traditional sampling technique, reducing the number of samples needed for faithful signal sampling and reconstruction. The technique depends mainly on mathematical algorithms, especially derivative-free algorithms that require little storage; CG algorithms are therefore a good alternative in this respect. CS is applied in biological engineering, medical sciences, and various other fields of science and engineering [43,44]. The sparse signal is recovered from the linear equation

    ϕ = Nx,   (4.1)

    where x ∈ ℝ^n is the signal to recover, ϕ ∈ ℝ^m is the observation, and N ∈ ℝ^{m×n} (m ≪ n) is a linear operator. The popular approach to finding the sparse solution is to minimize

    min_x (1/2)‖ϕ − Nx‖₂² + τ‖x‖₁,   (4.2)

    where τ is a positive regularization parameter. Equation (4.2) is non-smooth because of the ℓ₁-norm term, so methods that require the derivative of the objective cannot be applied to it directly. The gradient projection for sparse reconstruction (GPSR) method proposed by Figueiredo et al. [45] is among the best approaches for solving (4.2). In [45], problem (4.2) is written using the splitting

    x = u − v,  u ≥ 0,  v ≥ 0,  u, v ∈ ℝ^n,   (4.3)

    where u_i = (x_i)₊ and v_i = (−x_i)₊ for i = 1, 2, ..., n, and (·)₊ denotes the positive-part operator, (x)₊ = max{0, x}. By the definition of the ℓ₁-norm, ‖x‖₁ = e_n^T u + e_n^T v, with e_n = (1, 1, ..., 1)^T ∈ ℝ^n, and problem (4.2) becomes

    min_{u,v} (1/2)‖ϕ − N(u − v)‖₂² + τ e_n^T u + τ e_n^T v,  subject to u, v ≥ 0.   (4.4)
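    The variable-splitting step above is easy to sanity-check numerically. The sketch below (with an arbitrary vector x) verifies that x = u − v and that the ℓ₁-norm splits as ‖x‖₁ = e_n^T u + e_n^T v when u_i = (x_i)₊ and v_i = (−x_i)₊:

```python
import numpy as np

x = np.array([1.5, -2.0, 0.0, 3.0, -0.5])

u = np.maximum(x, 0.0)    # u_i = (x_i)_+ keeps the positive part
v = np.maximum(-x, 0.0)   # v_i = (-x_i)_+ keeps the magnitude of the negative part

recombined = u - v             # should reproduce x exactly
l1_split = u.sum() + v.sum()   # e_n^T u + e_n^T v
```

Since u and v are componentwise nonnegative, the non-smooth ℓ₁ term turns into the linear term τ e_n^T u + τ e_n^T v of (4.4).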

    Figueiredo et al. [45] show how to express problem (4.4) as the bound-constrained quadratic program

    min_z (1/2) z^T H z + c^T z,  subject to z ≥ 0,   (4.5)

    where z = (u; v), b = N^T ϕ, c = τ e_{2n} + (−b; b), and H = [N^T N, −N^T N; −N^T N, N^T N]. The matrix H is positive semi-definite. Problem (4.5) can be transformed into a linear variational inequality, written as the nonlinear equation

    F(z) = min{z, Hz + c} = 0,  with the minimum taken componentwise.   (4.6)
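    For concreteness, the data H, c of (4.5) and the componentwise residual mapping of (4.6) can be assembled as follows. This is only a sketch on a small random instance, following the GPSR formulation; the sizes m, n, the operator N, and the parameter τ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, tau = 8, 20, 0.1
N = rng.standard_normal((m, n))   # linear operator (m << n in practice)
phi = rng.standard_normal(m)      # observed measurement

b = N.T @ phi
G = N.T @ N
H = np.block([[G, -G], [-G, G]])                    # Hessian of the quadratic in (4.5)
c = tau * np.ones(2 * n) + np.concatenate([-b, b])  # linear term of (4.5)

def F(z):
    """Componentwise residual min{z, Hz + c}; F(z) = 0 at a solution of (4.5)."""
    return np.minimum(z, H @ z + c)

# H = K G K^T with K = [I; -I], hence positive semi-definite
min_eig = np.linalg.eigvalsh(H).min()
```

Because F is Lipschitz continuous and monotone, derivative-free projection-type CG algorithms such as UMCD can be applied to the root-finding problem F(z) = 0.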

    The mapping F in (4.6) is vector-valued; it was shown to be Lipschitz continuous and monotone in Lemma 3 of [46] and Lemma 2.2 of [47]. Hence, our proposed UMCD algorithm is a good candidate for solving it. A numerical comparison with the CHCG algorithm is carried out on signal reconstruction and recovery. The signal size is taken as n = 2^12 with k = 2^10 observations. The quality of the restoration process is assessed by the mean squared error (MSE), measured by

    MSE = (1/n)‖x̂ − x̃‖²,   (4.7)

    where x̂ refers to the original signal and x̃ is the recovered signal. The experiment starts by initializing x₀ = N^T ϕ, conducting ten (10) trials, and recording the data of interest for each trial. The iteration is stopped when |f_k − f_{k−1}|/|f_{k−1}| < 10^{−5}, where f(x) = (1/2)‖Nx − ϕ‖₂² + τ‖x‖₁ is the merit function. The recorded quantities are the MSE, the number of iterations (ITER), and the CPU time (TIME(s)).
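    The quantities driving the experiment, namely the error measure (4.7), the merit function f, and the relative-decrease stopping rule, can be written down directly; the helper names below are illustrative:

```python
import numpy as np

def mse(x_hat, x_tilde):
    # MSE = (1/n) * ||x_hat - x_tilde||^2, as in (4.7)
    d = np.asarray(x_hat, float) - np.asarray(x_tilde, float)
    return (d @ d) / d.size

def merit(x, N, phi, tau):
    # f(x) = 0.5 * ||Nx - phi||_2^2 + tau * ||x||_1
    r = N @ x - phi
    return 0.5 * (r @ r) + tau * np.abs(x).sum()

def should_stop(f_k, f_prev, tol=1e-5):
    # relative-decrease test: |f_k - f_{k-1}| / |f_{k-1}| < tol
    return abs(f_k - f_prev) / abs(f_prev) < tol
```

Each trial records the final MSE together with the iteration count and CPU time, which is exactly the layout of Table 7.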

    As reported in Figures 7 and 8 and Table 7, both algorithms recover the signals with a reasonable degree of accuracy. However, considering the number of iterations (ITER) and CPU time (TIME(s)) obtained by the two algorithms, UMCD and CHCG, as reported in Table 7, the UMCD algorithm outperformed the CHCG algorithm.

    Figure 7.  From top to bottom: the original signal, the measurement, and the signals recovered by the UMCD and CHCG algorithms, respectively.
    Figure 8.  Comparison of the UMCD and CHCG algorithms in signal recovery.
    Table 7.  Ten recorded trials of UMCD and CHCG signal recovery with ℓ₁-norm regularization.
    UMCD CHCG
    MSE ITER TIME(s) MSE ITER TIME(s)
    2.13E-04 91 2.76 2.30E-05 137 2.06
    8.16E-06 92 2.52 1.96E-05 141 2.06
    8.11E-06 89 2.61 2.38E-05 121 3.55
    2.09E-04 91 2.52 2.38E-05 135 2.98
    4.62E-05 84 2.34 1.62E-05 140 3.09
    7.70E-06 96 2.88 1.85E-05 125 3.77
    2.72E-05 87 2.72 4.31E-05 135 3.16
    1.09E-05 84 2.53 1.94E-05 147 3.17
    0.74E-05 75 2.25 2.10E-04 108 0.41
    1.19E-05 72 2.53 2.71E-05 122 4.53
    AVERAGE 3.62E-05 92.8 2.56 4.24E-05 132 2.87


    This paper presented "An improved convex constrained conjugate gradient descent method for nonlinear monotone equations with signal recovery applications". The proposed scheme satisfies the descent condition, established through an eigenvalue analysis of the search-direction matrix. Unlike the methods presented by Yu and Guan [33] and Yu et al. [34], which proved global convergence only for the interval C ∈ (1/4, ∞), the proposed method extends the interval to C ∈ (0, ∞), and global convergence is proved. Furthermore, the numerical performance of the proposed algorithm, in comparison with the MDDYM (2021) [37], DTCG1 [6], ZANG [38], and MCHCG [39] algorithms, confirms its efficiency. Being derivative-free, the proposed algorithm is applied to compressive sensing for signal recovery problems. To measure its capacity on signal recovery problems, the experiment is run alongside the CHCG algorithm, a well-known algorithm for signal recovery. The recorded results indicate the efficiency of the proposed algorithm.

    H. A. conceptualized the study, developed the methodology, conducted the theoretical analysis, and drafted the initial manuscript. A. K. A. and M. Y. W. supervised the research, validated the results, refined the proofs, and critically reviewed the mathematical framework. A. S. H. and K. A. refined the proofs, implemented the algorithm, conducted the computational experiments, and analyzed the data. I. A. R. M. contributed to the literature review, manuscript drafting, and funding acquisition. S. M. I. revised the manuscript and contributed to result interpretation and editing. Y. B. M. provided insights into signal recovery applications and refined the results. E. M. N. verified computational accuracy and assisted with proofreading, formatting, and ensuring compliance with journal requirements. All authors read and approved the final manuscript.

    The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors are grateful to the Research Management Centre (RMC), Universiti Utara Malaysia, through the KHAS grant (SO code 21777), and to the Information Systems and Technology Department of the Kuwait Technical College, Kuwait, for their financial support.

    This research is sponsored by the Institutional Based Research (IBR) of the Tertiary Education Trust Fund (TETFund), Sule Lamido University Kafin Hausa, Nigeria.

    The authors declare there is no conflict of interest.



    [39] S. Bouzebda, B. Nemouchi, Central limit theorems for conditional empirical and conditional U-processes of stationary mixing sequences, Math. Meth. Stat., 28 (2019), 169–207. https://doi.org/10.3103/S1066530719030013 doi: 10.3103/S1066530719030013
    [40] S. Bouzebda, B. Nemouchi, Weak-convergence of empirical conditional processes and conditional U-processes involving functional mixing data, Stat. Inference Stoch. Process., 26 (2023), 33–88. https://doi.org/10.1007/s11203-022-09276-6 doi: 10.1007/s11203-022-09276-6
    [41] S. Bouzebda, A. Nezzal, Uniform in number of neighbors consistency and weak convergence of kNN empirical conditional processes and kNN conditional U-processes involving functional mixing data, AIMS Math., 9 (2024), 4427–4550. https://doi.org/10.3934/math.2024218 doi: 10.3934/math.2024218
    [42] S. Bouzebda, I. Soukarieh, Non-parametric conditional U-processes for locally stationary functional random fields under stochastic sampling design, Mathematics, 11 (2023), 16. https://doi.org/10.3390/math11010016 doi: 10.3390/math11010016
    [43] S. Bouzebda, I. Soukarieh, Limit theorems for a class of processes generalizing the U-empirical process, Stochastics, 96 (2024), 799–845. https://doi.org/10.1080/17442508.2024.2320402 doi: 10.1080/17442508.2024.2320402
    [44] S. Bouzebda, N. Taachouche, On the variable bandwidth kernel estimation of conditional U-statistics at optimal rates in sup-norm, Phys. A Stat. Mechan. Appl., 625 (2023), 129000. https://doi.org/10.1016/j.physa.2023.129000 doi: 10.1016/j.physa.2023.129000
    [45] S. Bouzebda, N. Taachouche, Rates of the strong uniform consistency for the kernel-type regression function estimators with general kernels on manifolds, Math. Meth. Stat., 32 (2023), 27–80. https://doi.org/10.3103/S1066530723010027 doi: 10.3103/S1066530723010027
    [46] S. Bouzebda, N. Taachouche, Rates of the strong uniform consistency for conditional U-statistics estimators with general kernels on manifolds, Math. Meth. Stat., in press, 2023.
    [47] S. Bouzebda, S. Didi, L. El Hajj, Multivariate wavelet density and regression estimators for stationary and ergodic continuous time processes: asymptotic results, Math. Meth. Stat., 24 (2015), 163–199. https://doi.org/10.3103/S1066530715030011 doi: 10.3103/S1066530715030011
    [48] S. Bouzebda, I. Elhattab, B. Nemouchi, On the uniform-in-bandwidth consistency of the general conditional U-statistics based on the copula representation, J. Nonparametr. Stat., 33 (2021), 321–358. https://doi.org/10.1080/10485252.2021.1937621 doi: 10.1080/10485252.2021.1937621
    [49] S. Bouzebda, A. Laksaci, M. Mohammedi, The k-nearest neighbors method in single index regression model for functional quasi-associated time series data, Rev. Mat. Complut., 36 (2023), 361–391. https://doi.org/10.1007/s13163-022-00436-z doi: 10.1007/s13163-022-00436-z
    [50] Q. Cao, Z. C. Guo, Y. Ying, Generalization bounds for metric and similarity learning, Mach. Learn., 102 (2016), 115–132. https://doi.org/10.1007/s10994-015-5499-7 doi: 10.1007/s10994-015-5499-7
    [51] A. Carbonez, L. Györfi, E. C. van der Meulen, Partitioning-estimates of a regression function under random censoring, Stat. Risk Model., 13 (1995), 21–37. https://doi.org/10.1524/strm.1995.13.1.21 doi: 10.1524/strm.1995.13.1.21
    [52] D. Chen, P. Hall, H. G. Müller, Single and multiple index functional regression models with nonparametric link, Ann. Statist., 39 (2011), 1720–1747. https://doi.org/10.1214/11-AOS882 doi: 10.1214/11-AOS882
    [53] Y. Chen, S. Datta, Adjustments of multi-sample U-statistics to right censored data and confounding covariates, Comput. Statist. Data Anal., 135 (2019), 1–14. https://doi.org/10.1016/j.csda.2019.01.012 doi: 10.1016/j.csda.2019.01.012
    [54] S. Clémençon, I. Colin, A. Bellet, Scaling-up empirical risk minimization: optimization of incomplete U-statistics, J. Mach. Learn. Res., 17 (2016), 1–36.
    [55] G. B. Cybis, M. Valk, S. R. C. Lopes, Clustering and classification problems in genetics through U-statistics, J. Stat. Comput. Simul., 88 (2018), 1882–1902. https://doi.org/10.1080/00949655.2017.1374387 doi: 10.1080/00949655.2017.1374387
    [56] R. Dahlhaus, On the Kullback-Leibler information divergence of locally stationary processes, Stochastic Process. Appl., 62 (1996), 139–168. https://doi.org/10.1016/0304-4149(95)00090-9 doi: 10.1016/0304-4149(95)00090-9
    [57] R. Dahlhaus, Fitting time series models to non-stationary processes, Ann. Statist., 25 (1997), 1–37. https://doi.org/10.1214/aos/1034276620 doi: 10.1214/aos/1034276620
    [58] R. Dahlhaus, W. Polonik, Nonparametric quasi-maximum likelihood estimation for Gaussian locally stationary processes, Ann. Statist., 34 (2006), 2790–2824. https://doi.org/10.1214/009053606000000867 doi: 10.1214/009053606000000867
    [59] R. Dahlhaus, W. Polonik, Empirical spectral processes for locally stationary time series, Bernoulli, 15 (2009), 1–39. https://doi.org/10.3150/08-BEJ137 doi: 10.3150/08-BEJ137
    [60] S. Datta, D. Bandyopadhyay, G. A. Satten, Inverse probability of censoring weighted U-statistics for right-censored data with an application to testing hypotheses, Scand. J. Stat., 37 (2010), 680–700. https://doi.org/10.1111/j.1467-9469.2010.00697.x doi: 10.1111/j.1467-9469.2010.00697.x
    [61] J. A. Davydov, Convergence of distributions generated by stationary stochastic processes, Theory Probab. Appl., 13 (1968), 691–696. https://doi.org/10.1137/1113086 doi: 10.1137/1113086
    [62] J. A. Davydov, Mixing conditions for Markov chains, Theory Probab. Appl., 18 (1973), 312–328. https://doi.org/10.1137/1118033 doi: 10.1137/1118033
    [63] V. H. de la Peña, Decoupling and Khintchine's inequalities for U-statistics, Ann. Probab., 20 (1992), 1877–1892.
    [64] V. H. de la Peña, E. Giné, Decoupling, In: Probability and its Applications, New York: Springer, 1999.
    [65] P. Deheuvels, One bootstrap suffices to generate sharp uniform bounds in functional estimation, Kybernetika, 47 (2011), 855–865.
    [66] M. Denker, G. Keller, On U-statistics and v. Mises' statistics for weakly dependent processes, Z. Wahrsch. Verw. Gebiete, 64 (1983), 505–522. https://doi.org/10.1007/BF00534953 doi: 10.1007/BF00534953
    [67] L. Devroye, G. Lugosi, Combinatorial Methods in Density Estimation, New York: Springer, 2001.
    [68] S. Didi, S. Bouzebda, Wavelet density and regression estimators for continuous time functional stationary and ergodic processes, Mathematics, 10 (2022), 4356. https://doi.org/10.3390/math10224356 doi: 10.3390/math10224356
    [69] S. Didi, A. Al Harby, S. Bouzebda, Wavelet density and regression estimators for functional stationary and ergodic data: discrete time, Mathematics, 10 (2022), 3433. https://doi.org/10.3390/math10193433 doi: 10.3390/math10193433
    [70] R. M. Dudley, The sizes of compact subsets of Hilbert space and continuity of Gaussian processes, J. Funct. Anal., 1 (1967), 290–330.
    [71] R. M. Dudley, An extended Wichura theorem, definitions of Donsker class, and weighted empirical distributions, In: Probability in Banach Spaces V. Lecture Notes in Mathematics, Springer, 1153 (1985), 141–178. https://doi.org/10.1007/BFb0074949
    [72] R. M. Dudley, Uniform central limit theorems, In: Cambridge Studies in Advanced Mathematics, 2 Eds., New York: Cambridge University Press, 2014.
    [73] E. Eberlein, Weak convergence of partial sums of absolutely regular sequences, Statist. Probab. Lett., 2 (1984), 291–293. https://doi.org/10.1016/0167-7152(84)90067-1 doi: 10.1016/0167-7152(84)90067-1
    [74] P. P. B. Eggermont, V. N. LaRiccia, Maximum Penalized Likelihood Estimation, New York: Springer, 2001.
    [75] L. Faivishevsky, J. Goldberger, ICA based on a smooth estimation of the differential entropy, In: Advances in Neural Information Processing Systems, Inc: Curran Associates, 2008.
    [76] S. Feng, P. Tian, Y. Hu, G. Li, Estimation in functional single-index varying coefficient model, J. Statist. Plann. Inference, 214 (2021), 62–75. https://doi.org/10.1016/j.jspi.2021.01.003 doi: 10.1016/j.jspi.2021.01.003
    [77] F. Ferraty, P. Vieu, Nonparametric models for functional data, with application in regression, time-series prediction and curve discrimination, In: The International Conference on Recent Trends and Directions in Nonparametric Statistics, J. Nonparametr. Stat., 16 (2004), 111–125. https://doi.org/10.1080/10485250310001622686 doi: 10.1080/10485250310001622686
    [78] F. Ferraty, P. Vieu, Nonparametric Functional Data Analysis, New York: Springer, 2006.
    [79] F. Ferraty, A. Peuch, P. Vieu, Modèle à indice fonctionnel simple, C. R. Math. Acad. Sci. Paris, 336 (2003), 1025–1028. https://doi.org/10.1016/S1631-073X(03)00239-5 doi: 10.1016/S1631-073X(03)00239-5
    [80] F. Ferraty, A. Laksaci, P. Vieu, Estimating some characteristics of the conditional distribution in nonparametric functional models, Stat. Infer. Stoch. Process., 9 (2006), 47–76. https://doi.org/10.1007/s11203-004-3561-3 doi: 10.1007/s11203-004-3561-3
    [81] F. Ferraty, A. Mas, P. Vieu, Nonparametric regression on functional data: inference and practical aspects, Aust. N.Z. J. Stat., 49 (2007), 267–286. https://doi.org/10.1111/j.1467-842X.2007.00480.x doi: 10.1111/j.1467-842X.2007.00480.x
    [82] F. Ferraty, A. Laksaci, A. Tadj, P. Vieu, Rate of uniform consistency for nonparametric estimates with functional variables, J. Statist. Plann. Inference, 140 (2010), 335–352. https://doi.org/10.1016/j.jspi.2009.07.019 doi: 10.1016/j.jspi.2009.07.019
    [83] F. Ferraty, N. Kudraszow, P. Vieu, Nonparametric estimation of a surrogate density function in infinite-dimensional spaces, J. Nonparametr. Stat., 24 (2012), 447–464. https://doi.org/10.1080/10485252.2012.671943 doi: 10.1080/10485252.2012.671943
    [84] A. Földes, L. Rejtő, A LIL type result for the product limit estimator, Z. Wahrsch. Verw. Gebiete, 56 (1981), 75–86. https://doi.org/10.1007/BF00531975 doi: 10.1007/BF00531975
    [85] E. W. Frees, Infinite order U-statistics, Scand. J. Statist., 16 (1989), 29–45.
    [86] K. A. Fu, An application of U-statistics to nonparametric functional data analysis, Commun. Stat. Theory Meth., 41 (2012), 1532–1542. https://doi.org/10.1080/03610926.2010.526747 doi: 10.1080/03610926.2010.526747
    [87] T. Gasser, P. Hall, B. Presnell, Nonparametric estimation of the mode of a distribution of random curves, J. R. Stat. Soc. Ser. B Stat. Methodol., 60 (1998), 681–691. https://doi.org/10.1111/1467-9868.00148 doi: 10.1111/1467-9868.00148
    [88] S. Ghosal, A. Sen, A. W. van der Vaart, Testing monotonicity of regression, Ann. Statist., 28 (2000), 1054–1082.
    [89] E. Giné, J. Zinn, Some limit theorems for empirical processes, Ann. Probab., 12 (1984), 929–998.
    [90] A. Goia, P. Vieu, An introduction to recent advances in high/infinite dimensional statistics, J. Multivar. Anal., 146 (2016), 1–6. https://doi.org/10.1016/j.jmva.2015.12.001 doi: 10.1016/j.jmva.2015.12.001
    [91] L. Gu, L. Yang, Oracally efficient estimation for single-index link function with simultaneous confidence band, Electron. J. Stat., 9 (2015), 1540–1561. https://doi.org/10.1214/15-EJS1051 doi: 10.1214/15-EJS1051
    [92] P. Hall, Asymptotic properties of integrated square error and cross-validation for kernel estimation of a regression function, Z. Wahrsch. Verw. Gebiete, 67 (1984), 175–196. https://doi.org/10.1007/BF00535267 doi: 10.1007/BF00535267
    [93] P. R. Halmos, The theory of unbiased estimation, Ann. Math. Statist., 17 (1946), 34–43. https://doi.org/10.1214/aoms/1177731020
    [94] F. Han, T. Qian, On inference validity of weighted U-statistics under data heterogeneity, Electron. J. Statist., 12 (2018), 2637–2708. https://doi.org/10.1214/18-EJS1462 doi: 10.1214/18-EJS1462
    [95] W. Härdle, Applied nonparametric regression, In: Econometric Society Monographs, Cambridge: Cambridge University Press, 1990.
    [96] W. Härdle, J. S. Marron, Optimal bandwidth selection in nonparametric regression function estimation, Ann. Statist., 13 (1985), 1465–1481.
    [97] M. Harel, M. L. Puri, Conditional U-statistics for dependent random variables, J. Multivar. Anal., 57 (1996), 84–100. https://doi.org/10.1006/jmva.1996.0023 doi: 10.1006/jmva.1996.0023
    [98] C. Heilig, D. Nolan, Limit theorems for the infinite-degree U-process, Statist. Sinica, 11 (2001), 289–302.
    [99] W. Hoeffding, A class of statistics with asymptotically normal distribution, Ann. Math. Stat., 19 (1948), 293–325.
    [100] J. Hoffmann-Jørgensen, Stochastic processes on Polish spaces, In: Various Publications Series, Aarhus: Aarhus Universitet, Matematisk Institut, 1991.
    [101] M. Hollander, F. Proschan, Testing whether new is better than used, Ann. Math. Statist., 43 (1972), 1136–1146. https://doi.org/10.1214/aoms/1177692466 doi: 10.1214/aoms/1177692466
    [102] L. Horváth, P. Kokoszka, Inference for Functional Data with Applications, New York: Springer, 2012.
    [103] I. A. Ibragimov, V. N. Solev, A certain condition for the regularity of Gaussian stationary sequence, Zap. Naučn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI), 12 (1969), 113–125.
    [104] S. Jadhav, S. Ma, An association test for functional data based on Kendall's tau, J. Multivar. Anal., 184 (2021), 104740. https://doi.org/10.1016/j.jmva.2021.104740 doi: 10.1016/j.jmva.2021.104740
    [105] C. Jentsch, S. Subba Rao, A test for second order stationarity of a multivariate time series, J. Econometrics, 185 (2015), 124–161. https://doi.org/10.1016/j.jeconom.2014.09.010 doi: 10.1016/j.jeconom.2014.09.010
    [106] Z. Jiang, Z. Huang, J. Zhang, Functional single-index composite quantile regression, Metrika, 86 (2023), 595–603. https://doi.org/10.1007/s00184-022-00887-w doi: 10.1007/s00184-022-00887-w
    [107] R. Jin, S. Wang, Y. Zhou, Regularized distance metric learning: theory and algorithm, In: Advances in Neural Information Processing Systems, Inc: Curran Associates, 2009.
    [108] E. L. Kaplan, P. Meier, Nonparametric estimation from incomplete observations, J. Amer. Statist. Assoc., 53 (1958), 457–481.
    [109] L. Kara-Zaitri, A. Laksaci, M. Rachdi, P. Vieu, Uniform in bandwidth consistency for various kernel estimators involving functional data, J. Nonparametr. Stat., 29 (2017), 85–107. https://doi.org/10.1080/10485252.2016.1254780 doi: 10.1080/10485252.2016.1254780
    [110] M. G. Kendall, A new measure of rank correlation, Biometrika, 30 (1938), 81–93.
    [111] M. Kohler, K. Máthé, M. Pintér, Prediction from randomly right censored data, J. Multivar. Anal., 80 (2002), 73–100. https://doi.org/10.1006/jmva.2000.1973 doi: 10.1006/jmva.2000.1973
    [112] A. N. Kolmogorov, V. M. Tihomirov, ε-entropy and ε-capacity of sets in function spaces, Uspehi. Mat. Nauk., 14 (1959), 3–86.
    [113] V. S. Koroljuk, Y. V. Borovskich, Theory of U-statistics, In: Mathematics and its Applications, Dordrecht: Kluwer Academic Publishers Group, 1994.
    [114] M. R. Kosorok, Introduction to Empirical Processes and Semiparametric Inference, New York: Springer, 2008.
    [115] J. P. Kreiss, E. Paparoditis, Bootstrapping locally stationary processes, J. R. Stat. Soc. Ser. B. Stat. Methodol., 77 (2015), 267–290. https://doi.org/10.1111/rssb.12068 doi: 10.1111/rssb.12068
    [116] D. Kurisu, Nonparametric regression for locally stationary functional time series, Electron. J. Statist., 16 (2022), 3973–3995. https://doi.org/10.1214/22-EJS2041 doi: 10.1214/22-EJS2041
    [117] A. J. Lee, U-statistics, In: Statistics: Textbooks and Monographs, New York: Marcel Dekker, 1990.
    [118] S. Lee, O. Linton, Y. J. Whang, Testing for stochastic monotonicity, Econometrica, 77 (2009), 585–602. https://doi.org/10.3982/ECTA7145 doi: 10.3982/ECTA7145
    [119] A. Leucht, Degenerate U- and V-statistics under weak dependence: asymptotic theory and bootstrap consistency, Bernoulli, 18 (2012), 552–585. https://doi.org/10.3150/11-BEJ354 doi: 10.3150/11-BEJ354
    [120] A. Leucht, M. H. Neumann, Degenerate U- and V-statistics under ergodicity: asymptotics, bootstrap and applications in statistics, Ann. Inst. Stat. Math., 65 (2013), 349–386. https://doi.org/10.1007/s10463-012-0374-9 doi: 10.1007/s10463-012-0374-9
    [121] J. Li, C. Huang, Z. Hongtu, A functional varying-coefficient single-index model for functional response data, J. Amer. Stat. Assoc., 112 (2017), 1169–1181. https://doi.org/10.1080/01621459.2016.1195742 doi: 10.1080/01621459.2016.1195742
    [122] W. V. Li, Q. M. Shao, Gaussian processes: inequalities, small-ball probabilities and applications, Handbook Stat., 19 (2001), 533–597. https://doi.org/10.1016/S0169-7161(01)19019-X doi: 10.1016/S0169-7161(01)19019-X
    [123] H. Liang, X. Liu, R. Li, C. L. Tsai, Estimation and testing for partially linear single-index models, Ann. Stat., 38 (2010), 3811–3836. https://doi.org/10.1214/10-AOS835 doi: 10.1214/10-AOS835
    [124] E. Liebscher, Strong convergence of sums of α-mixing random variables with applications to density estimation, Stochast. Process. Appl., 65 (1996), 69–80. https://doi.org/10.1016/S0304-4149(96)00096-8 doi: 10.1016/S0304-4149(96)00096-8
    [125] F. Lim, V. M. Stojanovic, On U-statistics and compressed sensing I: non-asymptotic average-case analysis, IEEE T. Signal Process., 61 (2013), 2473–2485. https://doi.org/10.1109/TSP.2013.2247598 doi: 10.1109/TSP.2013.2247598
    [126] N. Ling, P. Vieu, Nonparametric modelling for functional data: selected survey and tracks for future, Statistics, 52 (2018), 934–949. https://doi.org/10.1080/02331888.2018.1487120 doi: 10.1080/02331888.2018.1487120
    [127] N. Ling, L. Cheng, P. Vieu, Single functional index model under responses MAR and dependent observations, In: Functional and High-Dimensional Statistics and Related Fields. IWFOS 2020. Contributions to Statistics. Springer, Cham., 2020.
    [128] N. Ling, L. Cheng, P. Vieu, H. Ding, Missing responses at random in functional single index model for time series data, Stat. Papers, 63 (2022), 665–692. https://doi.org/10.1007/s00362-021-01251-2 doi: 10.1007/s00362-021-01251-2
    [129] Q. Liu, J. Lee, M. Jordan, A kernelized stein discrepancy for goodness-of-fit tests, In: Proceedings of The 33rd International Conference on Machine Learning, PMLR, 48 (2016), 276–284.
    [130] B. Maillot, V. Viallon, Uniform limit laws of the logarithm for nonparametric estimators of the regression function in presence of censored data, Math. Meth. Stat., 18 (2009), 159–184. https://doi.org/10.3103/S1066530709020045 doi: 10.3103/S1066530709020045
    [131] T. Masak, S. Sarkar, V. M. Panaretos, Principal separable component analysis via the partial inner product, preprint paper, 2020.
    [132] D. M. Mason, Proving consistency of non-standard kernel estimators, Stat. Inference Stoch. Process., 15 (2012), 151–176. https://doi.org/10.1007/s11203-012-9068-4 doi: 10.1007/s11203-012-9068-4
    [133] E. Masry, Nonparametric regression estimation for dependent functional data: asymptotic normality, Stochast. Process. Appl., 115 (2005), 155–177. https://doi.org/10.1016/j.spa.2004.07.006 doi: 10.1016/j.spa.2004.07.006
    [134] U. Mayer, H. Zähle, Z. Zhou, Functional weak limit theorem for a local empirical process of non-stationary time series and its application, Bernoulli, 26 (2020), 1891–1911. https://doi.org/10.3150/19-BEJ1174 doi: 10.3150/19-BEJ1174
    [135] E. Mayer-Wolf, O. Zeitouni, The probability of small Gaussian ellipsoids and associated conditional moments, Ann. Probab., 21 (1993), 14–24.
    [136] M. Mohammedi, S. Bouzebda, A. Laksaci, The consistency and asymptotic normality of the kernel type expectile regression estimator for functional data, J. Multivar. Anal., 181 (2021), 104673. https://doi.org/10.1016/j.jmva.2020.104673 doi: 10.1016/j.jmva.2020.104673
    [137] M. Mohammedi, S. Bouzebda, A. Laksaci, O. Bouanani, Asymptotic normality of the k-NN single index regression estimator for functional weak dependence data, Commun. Stat. Theory Meth., 53 (2024), 3143–3168. https://doi.org/10.1080/03610926.2022.2150823 doi: 10.1080/03610926.2022.2150823
    [138] J. S. Morris, Functional regression, Annu. Rev. Stat. Appl., 2 (2015), 321–359. https://doi.org/10.1146/annurev-statistics-010814-020413
    [139] E. A. Nadaraja, On a regression estimate, Teor. Verojatnost. Primenen., 9 (1964), 157–159.
    [140] E. A. Nadaraya, Nonparametric estimation of probability densities and regression curves, In: Mathematics and its Applications (Soviet Series), Dordrecht: Kluwer Academic Publishers Group, 1989.
    [141] G. P. Nason, R. von Sachs, G. Kroisandt, Wavelet processes and adaptive estimation of the evolutionary wavelet spectrum, J. R. Stat. Soc. Ser. B Stat. Methodol., 62 (2000), 271–292. https://doi.org/10.1111/1467-9868.00231 doi: 10.1111/1467-9868.00231
    [142] M. H. Neumann, R. von Sachs, Wavelet thresholding in anisotropic function classes and application to adaptive estimation of evolutionary spectra, Ann. Statist., 25 (1997), 38–76. https://doi.org/10.1214/aos/1034276621 doi: 10.1214/aos/1034276621
    [143] Y. Nie, L. Wang, J. Cao, Estimating functional single index models with compact support, Environmetrics, 34 (2023), e2784. https://doi.org/10.1002/env.2784 doi: 10.1002/env.2784
    [144] D. Nolan, D. Pollard, U-processes: rates of convergence, Ann. Statist., 15 (1987), 780–799.
    [145] S. Novo, G. Aneiros, P. Vieu, Automatic and location-adaptive estimation in functional single-index regression, J. Nonparametr. Stat., 31 (2019), 364–392. https://doi.org/10.1080/10485252.2019.1567726 doi: 10.1080/10485252.2019.1567726
    [146] W. Peng, T. Coleman, L. Mentch, Rates of convergence for random forests via generalized U-statistics, Electron. J. Stat., 16 (2022), 232–292. https://doi.org/10.1214/21-EJS1958 doi: 10.1214/21-EJS1958
    [147] N. Phandoidaen, S. Richter, Empirical process theory for locally stationary processes, Bernoulli, 28 (2022), 453–480. https://doi.org/10.3150/21-BEJ1351 doi: 10.3150/21-BEJ1351
    [148] B. L. S. Prakasa Rao, A. Sen, Limit distributions of conditional U-statistics, J. Theoret. Probab., 8 (1995), 261–301. https://doi.org/10.1007/BF02212880 doi: 10.1007/BF02212880
    [149] M. B. Priestley, Evolutionary spectra and non-stationary processes, J. Roy. Statist. Soc. Ser. B, 27 (1965), 204–237. https://doi.org/10.1111/j.2517-6161.1965.tb01488.x doi: 10.1111/j.2517-6161.1965.tb01488.x
    [150] M. Rachdi, P. Vieu, Nonparametric regression for functional data: automatic smoothing parameter selection, J. Statist. Plann. Inference, 137 (2007), 2784–2801. https://doi.org/10.1016/j.jspi.2006.10.001 doi: 10.1016/j.jspi.2006.10.001
    [151] J. O. Ramsay, B. W. Silverman, Applied Functional Data Analysis, New York: Springer, 2002.
    [152] G. Rempala, A. Gupta, Weak limits of U-statistics of infinite order, Random Oper. Stoch. Equ., 7 (1999), 39–52. https://doi.org/10.1515/rose.1999.7.1.39 doi: 10.1515/rose.1999.7.1.39
    [153] K. Sakiyama, M. Taniguchi, Discriminant analysis for locally stationary processes, J. Multivar. Anal., 90 (2004), 282–300. https://doi.org/10.1016/j.jmva.2003.08.002 doi: 10.1016/j.jmva.2003.08.002
    [154] A. Sen, Uniform strong consistency rates for conditional U-statistics, Sankhyā Ind. J. Stat. Ser. A, 56 (1994), 179–194.
    [155] R. J. Serfling, Approximation Theorems of Mathematical Statistics, New York: John Wiley & Sons, 1980.
    [156] H. L. Shang, Bayesian bandwidth estimation for a functional nonparametric regression model with mixed types of regressors and unknown error density, J. Nonparametr. Stat., 26 (2014), 599–615. https://doi.org/10.1080/10485252.2014.916806 doi: 10.1080/10485252.2014.916806
    [157] R. P. Sherman, The limiting distribution of the maximum rank correlation estimator, Econometrica, 61 (1993), 123–137.
    [158] R. P. Sherman, Maximal inequalities for degenerate U-processes with applications to optimization estimators, Ann. Statist., 22 (1994), 439–459. https://doi.org/10.1214/aos/1176325377 doi: 10.1214/aos/1176325377
    [159] B. W. Silverman, Distances on circles, toruses and spheres, J. Appl. Probab., 15 (1978), 136–143. https://doi.org/10.2307/3213243
    [160] B. W. Silverman, Density Estimation for Statistics and Data Analysis, London: Chapman & Hall, 1986.
    [161] R. A. Silverman, Locally stationary random processes, IRE T. Inform. Theory, 3 (1957), 182–187. https://doi.org/10.1109/TIT.1957.1057413 doi: 10.1109/TIT.1957.1057413
    [162] Y. Song, X. Chen, K. Kato, Approximating high-dimensional infinite-order U-statistics: statistical and computational guarantees, Electron. J. Stat., 13 (2019), 4794–4848. https://doi.org/10.1214/19-EJS1643 doi: 10.1214/19-EJS1643
    [163] I. Soukarieh, S. Bouzebda, Exchangeably weighted bootstraps of general Markov U-process, Mathematics, 10 (2022), 3745. https://doi.org/10.3390/math10203745 doi: 10.3390/math10203745
    [164] I. Soukarieh, S. Bouzebda, Renewal type bootstrap for increasing degree U-process of a Markov chain, J. Multivar. Anal., 195 (2023), 105143. https://doi.org/10.1016/j.jmva.2022.105143 doi: 10.1016/j.jmva.2022.105143
    [165] I. Soukarieh, S. Bouzebda, Weak convergence of the conditional U-statistics for locally stationary functional time series, Stat. Inference Stoch. Process., 17 (2024), 227–304. https://doi.org/10.1007/s11203-023-09305-y doi: 10.1007/s11203-023-09305-y
    [166] W. Stute, Conditional U-statistics, Ann. Probab., 19 (1991), 812–825.
    [167] W. Stute, Lp-convergence of conditional U-statistics, J. Multivar. Anal., 51 (1994), 71–82. https://doi.org/10.1006/jmva.1994.1050 doi: 10.1006/jmva.1994.1050
    [168] W. Stute, Universally consistent conditional U-statistics, Ann. Statist., 22 (1994), 460–473. https://doi.org/10.1214/aos/1176325378 doi: 10.1214/aos/1176325378
    [169] W. Stute, Symmetrized NN-conditional U-statistics, In: Research Developments in Probability and Statistics, 1996,231–237.
    [170] W. Stute, J. L. Wang, Multi-sample U-statistics for censored data, Scand. J. Statist., 20 (1993), 369–374.
    [171] W. Stute, L. X. Zhu, Nonparametric checks for single-index models, Ann. Statist., 33 (2005), 1048–1083. https://doi.org/10.1214/009053605000000020 doi: 10.1214/009053605000000020
    [172] K. K. Sudheesh, S. Anjana, M. Xie, U-statistics for left truncated and right censored data, Statistics, 57 (2023), 900–917. https://doi.org/10.1080/02331888.2023.2217314 doi: 10.1080/02331888.2023.2217314
    [173] Q. Tang, L. Kong, D. Ruppert, R. J. Karunamuni, Partial functional partially linear single-index models, Statist. Sinica, 31 (2021), 107–133.
    [174] W. Y. Tsai, N. P. Jewell, M. C. Wang, A note on the product-limit estimator under right censoring and left truncation, Biometrika, 74 (1987), 883–886. https://doi.org/10.1093/biomet/74.4.883 doi: 10.1093/biomet/74.4.883
    [175] A. van Delft, H. Dette, A general framework to quantify deviations from structural assumptions in the analysis of non-stationary function-valued processes, preprint paper, 2022. https://doi.org/10.48550/arXiv.2208.10158
    [176] A. van Delft, M. Eichler, Locally stationary functional time series, Electron. J. Stat., 12 (2018), 107–170. https://doi.org/10.1214/17-EJS1384 doi: 10.1214/17-EJS1384
    [177] A. van der Vaart, New Donsker classes, Ann. Probab., 24 (1996), 2128–2140. https://doi.org/10.1214/aop/1041903221
    [178] A. W. van der Vaart, J. A. Wellner, Weak Convergence and Empirical Processes, New York: Springer, 1996.
    [179] M. Vogt, Nonparametric regression for locally stationary time series, Ann. Statist., 40 (2012), 2601–2633. https://doi.org/10.1214/12-AOS1043 doi: 10.1214/12-AOS1043
    [180] V. A. Volkonskii, Y. A. Rozanov, Some limit theorems for random functions I, Theory Probab. Appl., 4 (1959), 178–197. https://doi.org/10.1137/1104015 doi: 10.1137/1104015
    [181] R. von Mises, On the asymptotic distribution of differentiable statistical functions, Ann. Math. Stat., 18 (1947), 309–348.
    [182] M. P. Wand, M. C. Jones, Kernel smoothing, In: Monographs on Statistics and Applied Probability, London: Chapman and Hall, 1995.
    [183] J. L. Wang, J. M. Chiou, H. G. Müller, Functional data analysis, Annu. Rev. Stat. Appl., 3 (2016), 257–295. https://doi.org/10.1146/annurev-statistics-041715-033624 doi: 10.1146/annurev-statistics-041715-033624
    [184] G. S. Watson, Smooth regression analysis, Sankhyā Ind. J. Stat. Ser. A, 26 (1964), 359–372.
    [185] J. Yang, Z. Zhou, Spectral inference under complex temporal dynamics, J. Amer. Statist. Assoc., 117 (2022), 133–155. https://doi.org/10.1080/01621459.2020.1764365 doi: 10.1080/01621459.2020.1764365
    [186] A. Yuan, M. Giurcanu, G. Luta, M. T. Tan, U-statistics with conditional kernels for incomplete data models, Ann. Inst. Statist. Math., 69 (2017), 271–302. https://doi.org/10.1007/s10463-015-0537-6 doi: 10.1007/s10463-015-0537-6
    [187] Y. Zhou, P. S. F. Yip, A strong representation of the product-limit estimator for left truncated and right censored data, J. Multivar. Anal., 69 (1999), 261–280. https://doi.org/10.1006/jmva.1998.1806 doi: 10.1006/jmva.1998.1806
    [188] H. Zhu, R. Zhang, Y. Liu, H. Ding, Robust estimation for a general functional single index model via quantile regression, J. Korean Stat. Soc., 51 (2022), 1041–1070. https://doi.org/10.1007/s42952-022-00174-4 doi: 10.1007/s42952-022-00174-4
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)