
Quantum Hermite-Hadamard type inequalities for generalized strongly preinvex functions

  • Received: 20 March 2021 Accepted: 07 September 2021 Published: 16 September 2021
  • MSC : 26A51, 26A33, 26D07, 26D10, 26D15

  • In the framework of quantum calculus, the quantum Hermite-Hadamard type inequalities established in recent findings provide improvements of earlier quantum Hermite-Hadamard type inequalities. We obtain new q_{\kappa_1} -integral and q_{\kappa_2} -integral identities, and then, employing these identities, we establish new quantum Hermite-Hadamard q_{\kappa_1} -integral and q_{\kappa_2} -integral type inequalities for generalized higher-order strongly preinvex and quasi-preinvex functions. The claims of our study are supported graphically, and some special cases are provided as well. Finally, we present a comprehensive application of the newly obtained key results. Our outcomes from these new generalizations can be applied to evaluate several mathematical problems relating to applications in the real world. These new results are significant for improving integrated symmetrical function approximations or functions of some symmetry degree.

    Citation: Humaira Kalsoom, Muhammad Amer Latif, Muhammad Idrees, Muhammad Arif, Zabidin Salleh. Quantum Hermite-Hadamard type inequalities for generalized strongly preinvex functions[J]. AIMS Mathematics, 2021, 6(12): 13291-13310. doi: 10.3934/math.2021769




    One of the important problems in signal processing is to recover an unknown sparse signal from a few measurements, which can be expressed as follows:

    \begin{equation} \min\limits_{x}\ \frac{1}{2}\|y-Ax\|_2^2 \ \ \ \ {\rm s.t.}\ \ \ \ \|x\|_0\leq s, \end{equation} (1.1)

    where \|x\|_0 represents the total number of nonzero entries of x\in\mathbb{R}^N , A\in\mathbb{R}^{m\times N} is the measurement matrix with m\ll N , and y is the measurement vector. This model has been widely applied in many important areas, including machine learning, compressed sensing, signal processing, pattern recognition, wireless communication, etc. Note that the \ell_0 -norm is not continuous and has a combinatorial structure. Hence, problem (1.1) is known to be NP-hard.

    Over the past decades, many efficient algorithms have been proposed for solving the model (1.1), including convex relaxation methods, greedy methods, and thresholding-based methods, to name a few. For example, basis pursuit [1,2], \ell_p algorithms [3,4], alternating projections [5] and conjugate gradient adaptive filtering [6] have been proposed by using convex optimization techniques. Greedy methods include orthogonal matching pursuit (OMP) [7,8,9], compressive sampling matching pursuit (CoSaMP) [10] and subspace pursuit (SP) [11,12]. Thresholding-based methods provide a simple way to ensure the feasibility of the iterates; these include iterative hard thresholding [13,14,15], hard thresholding pursuit [16,17,18], soft thresholding [19] and Newton-step hard thresholding [20]. Recently, Zhao [21,22] proposed a new thresholding operator called s -optimal thresholding, which connects the s -thresholding directly to the reduction of the objective function. For more information on theoretical analysis and applications of algorithms for sparse signal recovery, see [23,24,25,26].

    In this paper, we mainly focus on the hard thresholding algorithms due to their simple structure and low computational cost. One of the key steps in a hard thresholding algorithm is

    x^{n+1} = H_s\big(x^n+A^T(y-Ax^n)\big),

    where H_s(\cdot) stands for the hard thresholding operator, which keeps the s largest absolute components of a vector and zeroes out the rest. The main aim of this step is to ensure the feasibility of the iterates. However, three possible questions exist for this step. First, the objective function can increase in some iterations, i.e., \{\|y-Ax^n\|_2\} is not a decreasing sequence. Second, if x^n+A^T(y-Ax^n) is a dense vector, keeping only an s -sparse part would lose too much important information. Third, even for a non-dense vector, some useful indexes may be missed when the difference between the s largest components and the (s+j) largest (with j > 0 ) components of x^n+A^T(y-Ax^n) is very small. To the best of our knowledge, the first question can be solved by using the s -optimal thresholding operator introduced by Zhao in [21], and the second question has been discussed recently in [27] by using partial gradient technology, i.e., the full gradients A^T(y-Ax^n) are replaced by partial ones H_q(A^T(y-Ax^n)) with q\geq s .
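    For concreteness, here is a minimal NumPy sketch of the operator H_s(\cdot) (the function name hard_threshold and the use of NumPy are our own illustration, not part of the referenced algorithms):

```python
import numpy as np

def hard_threshold(z, s):
    """H_s: keep the s largest-in-magnitude entries of z, zero out the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-s:]   # indices of the s largest |z_i|
    out[idx] = z[idx]
    return out
```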

    Inspired by these works, we continue to study the aforementioned third question in this paper. Precisely, a slightly larger signal basis of size \beta , with \beta\geq s , is selected first in each iteration. This undoubtedly increases the chance of correctly identifying the support set of a signal, particularly when the s largest components are close to the \beta largest components. A \beta -sparse vector u^{n+1} is attained by solving a least-squares problem over the subspace determined by the index set U^{n+1}:=\mathcal{L}_\beta\big(x^n+A^T(y-Ax^n)\big) , where \mathcal{L}_\beta(x) stands for the index set of the \beta largest absolute entries of x . Subsequently, the hard thresholding operator H_s(\cdot) is applied to u^{n+1} to reconstruct an s -sparse vector. Finally, to further improve numerical performance, we seek in each iteration a vector that best fits the measurements over the prescribed support set by using a pursuit step, also referred to as debiasing or orthogonal projection. This leads to the Compressive Hard Thresholding Pursuit (CHTP) algorithm. Numerical experiments demonstrate that CHTP performs better when \beta is slightly greater than s .

    The notation used in this paper is standard. For a given set S\subseteq\{1,2,\ldots,N\} , let us denote by |S| the cardinality of S and by \overline{S}:=\{1,2,\ldots,N\}\setminus S the complement of S . For a fixed vector x , x_S is obtained by retaining the elements of x indexed in S and zeroing out the rest of the elements. The support of x is defined as supp(x):=\{i \ | \ x_i\neq 0\} . Given two sets S_1 and S_2 , the symmetric difference of S_1 and S_2 is denoted by S_1\Delta S_2:=(S_1\setminus S_2)\cup(S_2\setminus S_1) . Clearly,

    \|x_{S_1\setminus S_2}\|_2^2+\|x_{S_2\setminus S_1}\|_2^2 = \|x_{S_1\Delta S_2}\|_2^2, \ \ \ \ \forall x\in\mathbb{R}^N.

    The paper is organized as follows. The CHTP algorithm is proposed in Section 2. Error estimation and convergence analysis of CHTP are given in Section 3. Numerical results are reported in Section 4.

    In order to solve problem (1.1), Blumensath and Davies [13] proposed the following Iterative Hard Thresholding (IHT) algorithm:

    x^{n+1} := H_s\big(x^n+A^T(y-Ax^n)\big).

    The negative gradient is used as the search direction, and the hard thresholding operator is employed to ensure sparse feasibility. Furthermore, by combining the IHT and CoSaMP algorithms, Foucart [17] proposed an algorithm called Hard Thresholding Pursuit (HTP), which has stronger numerical performance than IHT.

    Algorithm: Hard Thresholding Pursuit (HTP).

    Input: a measurement matrix A, a measurement vector y and a sparsity level s. Perform the following steps:

    S1. Start with an s -sparse x^0\in\mathbb{R}^N , typically x^0 = 0 ;

    S2. Repeat

    \begin{array}{l} U^{n+1}:=\mathcal{L}_s\big(x^n+A^T(y-Ax^n)\big); \\ x^{n+1}:=\arg\min\limits_{z}\big\{\|y-Az\|_2: supp(z)\subseteq U^{n+1}\big\} \end{array} (2.1)

    until a stopping criterion is met.

    Output: the s-sparse vector x.
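    To make the two steps in (2.1) concrete, the following is a minimal NumPy sketch of HTP (our own illustration; the function name htp, the stopping rule and the use of np.linalg.lstsq for the least-squares step are assumptions, not taken from the paper):

```python
import numpy as np

def htp(A, y, s, max_iter=50, tol=1e-6):
    """Hard Thresholding Pursuit: alternate support selection and debiasing."""
    N = A.shape[1]
    x = np.zeros(N)
    for _ in range(max_iter):
        g = x + A.T @ (y - A @ x)              # gradient step
        U = np.argsort(np.abs(g))[-s:]         # L_s: s largest absolute entries
        x_new = np.zeros(N)
        # debiasing: least-squares fit of y over the columns indexed by U
        x_new[U] = np.linalg.lstsq(A[:, U], y, rcond=None)[0]
        done = np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x), 1.0)
        x = x_new
        if done:
            break
    return x
```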

    In Step (2.1) of HTP, we have |U^{n+1}| = s , so at most s positions are taken as the basis at each iteration in the process of restoring vectors. However, it should be noted that a selected basis of s positions may not accurately represent the full gradient information. From the viewpoint of theoretical analysis, the less gradient information the selected basis carries, the more difficult it is for HTP to achieve good numerical performance. In order to further improve the efficiency of HTP, we propose the following CHTP algorithm, which allows us to choose more elements of gradient information at each iteration. In particular, if \beta = s , then x^{n+1} = u^{n+1} in CHTP. Hence, CHTP reduces to HTP in this special case.

    Algorithm: Compressive Hard Thresholding Pursuit (CHTP).

    Input: a measurement matrix A, a measurement vector y and a sparsity level s. Perform the following steps:

    S1. Start with an s -sparse x^0\in\mathbb{R}^N , typically x^0 = 0 ;

    S2. Repeat

    \begin{array}{l} U^{n+1}:=\mathcal{L}_\beta\big(x^n+A^T(y-Ax^n)\big), \\ u^{n+1}:=\arg\min\limits_{z}\big\{\|y-Az\|_2: supp(z)\subseteq U^{n+1}\big\}, \\ T^{n+1}:=\mathcal{L}_s(u^{n+1}), \\ x^{n+1}:=\arg\min\limits_{z}\big\{\|y-Az\|_2: supp(z)\subseteq T^{n+1}\big\}, \end{array}

    where \beta\geq s , until a stopping criterion is met.

    Output: the s-sparse vector x.
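    Along the same lines, here is a minimal sketch of CHTP (again our own illustration with assumed helper names; for \beta = s the compression step T^{n+1} = \mathcal{L}_s(u^{n+1}) leaves u^{n+1} unchanged, and the sketch coincides with the HTP sketch above):

```python
import numpy as np

def chtp(A, y, s, beta, max_iter=50, tol=1e-6):
    """Compressive HTP: select beta >= s indices, debias, compress to s, debias again."""
    N = A.shape[1]
    x = np.zeros(N)
    for _ in range(max_iter):
        g = x + A.T @ (y - A @ x)
        U = np.argsort(np.abs(g))[-beta:]      # L_beta: beta largest absolute entries
        u = np.zeros(N)
        u[U] = np.linalg.lstsq(A[:, U], y, rcond=None)[0]      # first projection
        T = np.argsort(np.abs(u))[-s:]         # compress back to sparsity level s
        x_new = np.zeros(N)
        x_new[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]  # second projection
        done = np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x), 1.0)
        x = x_new
        if done:
            break
    return x
```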

    The detailed discussion on numerical experiments including special stopping criteria is given in Section 4.

    Theoretical analysis for CHTP is carried out in terms of the restricted isometry property (RIP) of a measurement matrix. First, recall that for a given matrix A and two positive integers p, q , the norm \|A\|_{p\rightarrow q} is defined as

    \|A\|_{p\rightarrow q}:=\max\limits_{\|x\|_p\leq 1}\|Ax\|_q.

    Definition 3.1. [1] Let A\in\mathbb{R}^{m\times N} be a matrix with m < N . The s -th restricted isometry constant (RIC) of A , denoted by \delta_s , is defined as

    \delta_s:=\max\limits_{S\subseteq\{1,2,\ldots,N\}, \ |S|\leq s}\big\|A_S^TA_S-I\big\|_{2\rightarrow 2},

    where A_S is the submatrix of A created by deleting the columns not in the index set S .

    The s -th restricted isometry constant can be defined equivalently as the smallest number \delta > 0 such that

    \begin{equation} (1-\delta)\|x\|_2^2\leq\|Ax\|_2^2\leq(1+\delta)\|x\|_2^2, \ \ \ \ \forall x \ {\rm satisfying} \ \|x\|_0\leq s. \end{equation} (3.1)

    In particular, if \delta_s < 1 , we say that the matrix A has the restricted isometry property (RIP) of order s .
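    Computing \delta_s exactly requires a search over all supports of size at most s and is itself intractable, but a Monte Carlo lower bound is easy to sketch (our own illustration, not part of the paper):

```python
import numpy as np

def ric_lower_bound(A, s, trials=1000, seed=0):
    """Monte Carlo lower bound on the s-th restricted isometry constant of A."""
    rng = np.random.default_rng(seed)
    N = A.shape[1]
    delta = 0.0
    for _ in range(trials):
        S = rng.choice(N, size=s, replace=False)     # random support of size s
        G = A[:, S].T @ A[:, S] - np.eye(s)
        delta = max(delta, np.linalg.norm(G, 2))     # spectral norm of A_S^T A_S - I
    return delta
```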

    The following inequalities follow from the definition of RIC and play an important role in the theoretical analysis of CHTP.

    Lemma 3.1. [23] Given a vector v and a set S\subseteq\{1,2,\ldots,N\} , one has the following statements.

    (i). \big\|\big((I-A^TA)v\big)_S\big\|_2\leq\delta_t\|v\|_2 \ \ \ {\rm if} \ \ \ |S\cup supp(v)|\leq t .

    (ii). \big\|(A^Tv)_S\big\|_2\leq\sqrt{1+\delta_t}\,\|v\|_2 \ \ \ {\rm if} \ \ \ |S|\leq t .

    The following lemma comes from [28], providing an estimate of the Lipschitz constant of the hard thresholding operator. Moreover, [28,Example 2.3] indicates that the constant (\sqrt{5}+1)/2 is tight.

    Lemma 3.2. For any vector z\in\mathbb{R}^N and for any s -sparse vector x\in\mathbb{R}^N , one has

    \|x-H_s(z)\|_2\leq\frac{\sqrt{5}+1}{2}\big\|(x-z)_{S\cup Z}\big\|_2,

    where S:=supp(x) and Z:=supp\big(H_s(z)\big) .

    The following result is inspired by [28,Lemma 3.3].

    Lemma 3.3. Let y = Ax+e . Given an s -sparse vector z and \beta\geq s , define

    V:=\mathcal{L}_\beta\big(z+A^T(y-Az)\big).

    Then,

    \|x_{S\setminus V}\|_2\leq\sqrt{2}\big(\delta_{2s+\beta}\|x_S-z\|_2+\sqrt{1+\delta_{s+\beta}}\,\|e'\|_2\big),

    where S:=\mathcal{L}_s(x) and e':=Ax_{\overline{S}}+e .

    Proof. The case of S\subseteq V holds trivially due to x_{S\setminus V} = 0 . Now, let us consider the remaining case of S\not\subseteq V . Define

    \begin{equation} \Phi_{S\setminus V}:=\big\|[z+A^T(y-Az)]_{S\setminus V}\big\|_2 \ \ \ {\rm and} \ \ \ \Phi_{V\setminus S}:=\big\|[z+A^T(y-Az)]_{V\setminus S}\big\|_2. \end{equation} (3.2)

    Since |V| = \beta\geq s = |S| ,

    \begin{equation} |S\setminus V| = |S|-|S\cap V|\leq |V|-|S\cap V| = |V\setminus S|. \end{equation} (3.3)

    Meanwhile, from the definition of V , we know that the absolute entries of the vector z+A^T(y-Az) supported on S\setminus V are not greater than those supported on V\setminus S , that is,

    \big|\big(z+A^T(y-Az)\big)_i\big|\leq\big|\big(z+A^T(y-Az)\big)_j\big|, \ \ {\rm for \ any} \ i\in S\setminus V, \ j\in V\setminus S.

    Together with (3.3), this leads to

    \begin{equation} \Phi_{S\setminus V} = \big\|\big(z+A^T(y-Az)\big)_{S\setminus V}\big\|_2\leq\big\|\big(z+A^T(y-Az)\big)_{V\setminus S}\big\|_2 = \Phi_{V\setminus S}. \end{equation} (3.4)

    Define

    \Phi_{S\Delta V}:=\big\|\big[(x_S-z)-A^T(y-Az)\big]_{S\Delta V}\big\|_2.

    Since y = Ax_S+e' with e' = Ax_{\overline{S}}+e ,

    \begin{eqnarray} \Phi_{S\Delta V} & = & \big\|\big((x_S-z)-A^T(Ax_S+e'-Az)\big)_{S\Delta V}\big\|_2 \\ & = & \big\|\big((I-A^TA)(x_S-z)-A^Te'\big)_{S\Delta V}\big\|_2 \\ &\leq& \big\|\big((I-A^TA)(x_S-z)\big)_{S\Delta V}\big\|_2+\big\|(A^Te')_{S\Delta V}\big\|_2 \\ &\leq& \delta_{2s+\beta}\|x_S-z\|_2+\sqrt{1+\delta_{s+\beta}}\,\|e'\|_2, \end{eqnarray} (3.5)

    where the last inequality follows from Lemma 3.1 and the fact that

    \big|supp(x_S-z)\cup(S\Delta V)\big|\leq\big|supp(z)\cup S\cup V\big|\leq 2s+\beta.

    Due to (S\setminus V)\cap(V\setminus S) = \emptyset , we have

    \begin{eqnarray} \Phi_{S\Delta V}^2 & = & \big\|\big((x_S-z)-A^T(y-Az)\big)_{S\Delta V}\big\|_2^2 \\ & = & \big\|\big((x_S-z)-A^T(y-Az)\big)_{S\setminus V}\big\|_2^2+\big\|\big((x_S-z)-A^T(y-Az)\big)_{V\setminus S}\big\|_2^2 \\ & = & \big\|x_{S\setminus V}-\big(z+A^T(y-Az)\big)_{S\setminus V}\big\|_2^2+\big\|\big(z+A^T(y-Az)\big)_{V\setminus S}\big\|_2^2 \\ & = & \big\|x_{S\setminus V}-\big(z+A^T(y-Az)\big)_{S\setminus V}\big\|_2^2+\Phi_{V\setminus S}^2. \end{eqnarray} (3.6)

    Let us discuss the following two cases separately.

    Case 1. \Phi_{V\setminus S} = 0 . Then, \Phi_{S\setminus V} = 0 by (3.4). It follows from (3.2) that \big(z+A^T(y-Az)\big)_{S\setminus V} = 0 . So,

    \|x_{S\setminus V}\|_2 = \big\|x_{S\setminus V}-\big(z+A^T(y-Az)\big)_{S\setminus V}\big\|_2 = \Phi_{S\Delta V}\leq\delta_{2s+\beta}\|x_S-z\|_2+\sqrt{1+\delta_{s+\beta}}\,\|e'\|_2,

    where the last inequality results from (3.5).

    Case 2. \Phi_{V\setminus S} > 0 . Let

    \begin{equation} \alpha:=\frac{\big\|x_{S\setminus V}-\big(z+A^T(y-Az)\big)_{S\setminus V}\big\|_2}{\Phi_{V\setminus S}}. \end{equation} (3.7)

    It follows from (3.6) that

    \begin{equation} \Phi_{S\Delta V} = \sqrt{\big\|x_{S\setminus V}-\big(z+A^T(y-Az)\big)_{S\setminus V}\big\|_2^2+\Phi_{V\setminus S}^2} = \sqrt{1+\alpha^2}\,\Phi_{V\setminus S}. \end{equation} (3.8)

    Combining (3.7) and (3.8) yields

    \big\|x_{S\setminus V}-\big(z+A^T(y-Az)\big)_{S\setminus V}\big\|_2 = \alpha\,\Phi_{V\setminus S} = \frac{\alpha}{\sqrt{1+\alpha^2}}\Phi_{S\Delta V}.

    Furthermore,

    \begin{eqnarray} \Phi_{S\setminus V}^2 & = & \big\|\big(z+A^T(y-Az)\big)_{S\setminus V}\big\|_2^2 = \big\|x_{S\setminus V}-\big((x_S-z)-A^T(y-Az)\big)_{S\setminus V}\big\|_2^2 \\ & = & \|x_{S\setminus V}\|_2^2-2\big\langle x_{S\setminus V}, \ x_{S\setminus V}-\big(z+A^T(y-Az)\big)_{S\setminus V}\big\rangle+\big\|x_{S\setminus V}-\big(z+A^T(y-Az)\big)_{S\setminus V}\big\|_2^2 \\ &\geq& \|x_{S\setminus V}\|_2^2-2\|x_{S\setminus V}\|_2\,\big\|x_{S\setminus V}-\big(z+A^T(y-Az)\big)_{S\setminus V}\big\|_2+\big\|x_{S\setminus V}-\big(z+A^T(y-Az)\big)_{S\setminus V}\big\|_2^2 \\ & = & \|x_{S\setminus V}\|_2^2-\frac{2\alpha}{\sqrt{1+\alpha^2}}\Phi_{S\Delta V}\|x_{S\setminus V}\|_2+\frac{\alpha^2}{1+\alpha^2}\Phi_{S\Delta V}^2. \end{eqnarray} (3.9)

    On the other hand, it follows from (3.4) and (3.8) that

    \begin{equation} \Phi_{S\setminus V}^2\leq\Phi_{V\setminus S}^2 = \frac{1}{1+\alpha^2}\Phi_{S\Delta V}^2. \end{equation} (3.10)

    Putting (3.9) and (3.10) together yields

    \|x_{S\setminus V}\|_2^2-\frac{2\alpha\,\Phi_{S\Delta V}}{\sqrt{1+\alpha^2}}\|x_{S\setminus V}\|_2+\frac{(\alpha^2-1)\Phi_{S\Delta V}^2}{1+\alpha^2}\leq 0.

    Thus,

    \begin{eqnarray} \|x_{S\setminus V}\|_2 &\leq& \frac{1}{2}\left(\frac{2\alpha\Phi_{S\Delta V}}{\sqrt{1+\alpha^2}}+\sqrt{\frac{4\alpha^2\Phi_{S\Delta V}^2}{1+\alpha^2}-\frac{4(\alpha^2-1)\Phi_{S\Delta V}^2}{1+\alpha^2}}\right) \\ & = & \frac{1+\alpha}{\sqrt{1+\alpha^2}}\Phi_{S\Delta V}\leq\sqrt{2}\,\Phi_{S\Delta V}, \end{eqnarray} (3.11)

    where the last inequality results from

    \left(\frac{1+\alpha}{\sqrt{1+\alpha^2}}\right)^2 = 1+\frac{2\alpha}{1+\alpha^2}\leq 2.

    Taking into account (3.5) and (3.11) yields

    \|x_{S\setminus V}\|_2\leq\sqrt{2}\,\Phi_{S\Delta V}\leq\sqrt{2}\big(\delta_{2s+\beta}\|x_S-z\|_2+\sqrt{1+\delta_{s+\beta}}\,\|e'\|_2\big).

    Lemma 3.4. Let v be a vector satisfying supp(v)\subseteq T and |T|\leq s . For any vector x\in\mathbb{R}^N , let y = Ax_S+e' with e' = Ax_{\overline{S}}+e and S = \mathcal{L}_s(x) . If z satisfies

    z = \arg\min\limits_{w}\big\{\|y-Aw\|_2: supp(w)\subseteq T\big\},

    then

    \|x_S-z\|_2\leq\frac{\|(x_S-v)_{\overline{T}}\|_2}{\sqrt{1-\delta_{2s}^2}}+\frac{\sqrt{1+\delta_{s}}\,\|e'\|_2}{1-\delta_{2s}}.

    Proof. According to the definition of z, we have

    \big[A^T(y-Az)\big]_T = 0.

    Hence,

    \big[A^T(Ax_S+e'-Az)\big]_T = \big[A^TA(x_S-z)\big]_T+(A^Te')_T = 0.

    By the triangle inequality and Lemma 3.1, we get

    \begin{eqnarray*} \|(x_S-z)_T\|_2 & = & \big\|(x_S-z)_T-\big[A^TA(x_S-z)\big]_T-(A^Te')_T\big\|_2 \\ &\leq& \big\|\big[(I-A^TA)(x_S-z)\big]_T\big\|_2+\|(A^Te')_T\|_2 \\ &\leq& \delta_{2s}\|x_S-z\|_2+\|(A^Te')_T\|_2, \end{eqnarray*}

    where the last inequality is due to |supp(x_S-z)\cup T|\leq 2s . This means that

    \begin{eqnarray*} \|x_S-z\|_2^2 & = & \|(x_S-z)_{\overline{T}}\|_2^2+\|(x_S-z)_T\|_2^2 \\ &\leq& \|(x_S-z)_{\overline{T}}\|_2^2+\big(\delta_{2s}\|x_S-z\|_2+\|(A^Te')_T\|_2\big)^2 \\ & = & \|(x_S-z)_{\overline{T}}\|_2^2+\delta_{2s}^2\|x_S-z\|_2^2+2\delta_{2s}\|(A^Te')_T\|_2\|x_S-z\|_2+\|(A^Te')_T\|_2^2, \end{eqnarray*}

    i.e.,

    (1-\delta_{2s}^2)\|x_S-z\|_2^2-2\delta_{2s}\|(A^Te')_T\|_2\|x_S-z\|_2-\Big(\|(x_S-z)_{\overline{T}}\|_2^2+\|(A^Te')_T\|_2^2\Big)\leq 0.

    This is a quadratic inequality on \|x_S-z\|_2 . Hence,

    \begin{eqnarray} \|x_S-z\|_2 &\leq& \frac{2\delta_{2s}\|(A^Te')_T\|_2+\sqrt{4\delta_{2s}^2\|(A^Te')_T\|_2^2+4(1-\delta_{2s}^2)\big(\|(x_S-z)_{\overline{T}}\|_2^2+\|(A^Te')_T\|_2^2\big)}}{2(1-\delta_{2s}^2)} \\ & = & \frac{\delta_{2s}\|(A^Te')_T\|_2+\sqrt{(1-\delta_{2s}^2)\|(x_S-z)_{\overline{T}}\|_2^2+\|(A^Te')_T\|_2^2}}{1-\delta_{2s}^2} \\ &\leq& \frac{\delta_{2s}\|(A^Te')_T\|_2+\sqrt{1-\delta_{2s}^2}\,\|(x_S-z)_{\overline{T}}\|_2+\|(A^Te')_T\|_2}{1-\delta_{2s}^2} \\ & = & \frac{\|(x_S-z)_{\overline{T}}\|_2}{\sqrt{1-\delta_{2s}^2}}+\frac{\|(A^Te')_T\|_2}{1-\delta_{2s}}, \end{eqnarray} (3.12)

    where the second inequality comes from the fact that \sqrt{a^2+b^2}\leq a+b for all a, b\geq 0 . By Lemma 3.1 and |T|\leq s , one has

    \begin{equation} \|(A^Te')_T\|_2\leq\sqrt{1+\delta_{s}}\,\|e'\|_2. \end{equation} (3.13)

    Note that z_{\overline{T}} = v_{\overline{T}} = 0 . It follows from (3.12) and (3.13) that

    \|x_S-z\|_2\leq\frac{\|(x_S-z)_{\overline{T}}\|_2}{\sqrt{1-\delta_{2s}^2}}+\frac{\sqrt{1+\delta_{s}}\,\|e'\|_2}{1-\delta_{2s}} = \frac{\|(x_S-v)_{\overline{T}}\|_2}{\sqrt{1-\delta_{2s}^2}}+\frac{\sqrt{1+\delta_{s}}\,\|e'\|_2}{1-\delta_{2s}}.

    This completes the proof.

    Error bounds and convergence analysis of CHTP are summarized as follows.

    Theorem 3.1. Let y=Ax+e be the measurement of the signal x and e be the measurement error. If

    \begin{equation} \delta_{2s+\beta} < \sqrt{\frac{1}{\sqrt{4+\sqrt{5}}+2}}, \end{equation} (3.14)

    then the iterative sequence \{x^n\} generated by CHTP approximates x with

    \|x^n-x_S\|_2\leq\rho^n\|x^0-x_S\|_2+\tau\frac{1-\rho^n}{1-\rho}\|Ax_{\overline{S}}+e\|_2,

    where

    \begin{equation} \rho:=\sqrt{\frac{\delta_{2s+\beta}^2\left(2+(\sqrt{5}+1)\delta_{s+\beta}^2\right)}{(1-\delta_{s+\beta}^2)(1-\delta_{2s}^2)}} < 1, \end{equation} (3.15)
    \begin{equation} \tau:=\sqrt{\frac{2+(\sqrt{5}+1)\delta_{s+\beta}^2}{(1-\delta_{s+\beta})(1-\delta_{2s}^2)}}+\frac{(\sqrt{5}+1)\sqrt{1+\delta_\beta}}{2(1-\delta_{s+\beta})\sqrt{1-\delta_{2s}^2}}+\frac{\sqrt{1+\delta_{s}}}{1-\delta_{2s}}. \end{equation} (3.16)
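    For reference, a direct numerical evaluation (ours, not stated in the paper) of the right-hand side of (3.14) gives

    \sqrt{\frac{1}{\sqrt{4+\sqrt{5}}+2}} = \sqrt{\frac{1}{2.4972+2}}\approx 0.4716,

    so Theorem 3.1 applies whenever \delta_{2s+\beta} is roughly below 0.47 .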

    Proof. For the convenience of discussion, we define t^{n+1}:=H_s(u^{n+1}) . Then,

    supp(t^{n+1})\subseteq T^{n+1} = \mathcal{L}_s(u^{n+1})\subseteq U^{n+1}.

    It is clear that (t^{n+1})_{U^{n+1}} = t^{n+1} . Hence,

    \begin{eqnarray} \big\|(x_S-t^{n+1})_{U^{n+1}}\big\|_2 & = & \big\|x_{S\cap U^{n+1}}-t^{n+1}\big\|_2 = \big\|x_{S\cap U^{n+1}}-H_s(u^{n+1})\big\|_2 \\ &\leq& \frac{\sqrt{5}+1}{2}\big\|\big(x_{S\cap U^{n+1}}-u^{n+1}\big)_{(S\cap U^{n+1})\cup T^{n+1}}\big\|_2 \\ &\leq& \frac{\sqrt{5}+1}{2}\big\|(x_S-u^{n+1})_{U^{n+1}}\big\|_2, \end{eqnarray} (3.17)

    where the first inequality comes from Lemma 3.2, and the second inequality follows from the fact that \big[(S\cap U^{n+1})\cup T^{n+1}\big]\subseteq U^{n+1} .

    Since supp(t^{n+1})\subseteq U^{n+1} and supp(u^{n+1})\subseteq U^{n+1} , we have (t^{n+1})_{\overline{U^{n+1}}} = (u^{n+1})_{\overline{U^{n+1}}} = 0 . Thus,

    \begin{eqnarray} \|x_S-t^{n+1}\|_2^2 & = & \big\|(x_S-t^{n+1})_{\overline{U^{n+1}}}\big\|_2^2+\big\|(x_S-t^{n+1})_{U^{n+1}}\big\|_2^2 \\ & = & \big\|(x_S-u^{n+1})_{\overline{U^{n+1}}}\big\|_2^2+\big\|(x_S-t^{n+1})_{U^{n+1}}\big\|_2^2 \\ &\leq& \big\|(x_S-u^{n+1})_{\overline{U^{n+1}}}\big\|_2^2+\Big(\xi\big\|(x_S-u^{n+1})_{U^{n+1}}\big\|_2\Big)^2, \end{eqnarray} (3.18)

    where \xi:=(\sqrt{5}+1)/2 , and the last step is due to (3.17).

    From Step 2 in CHTP, we know that the following orthogonality condition holds:

    \langle Au^{n+1}-y, Az\rangle = 0, \ \ \ {\rm for \ any} \ z \ {\rm with} \ supp(z)\subseteq U^{n+1},

    where y = Ax_S+e' and e' = Ax_{\overline{S}}+e . Hence, for any z satisfying supp(z)\subseteq U^{n+1} , we get

    \begin{eqnarray} &&\langle u^{n+1}-x_S, A^TAz\rangle = \langle Au^{n+1}-Ax_S, Az\rangle \\ && = \langle Au^{n+1}-(y-e'), Az\rangle = \langle Au^{n+1}-y, Az\rangle+\langle e', Az\rangle \\ && = \langle e', Az\rangle = \langle A^Te', z\rangle. \end{eqnarray} (3.19)

    Furthermore,

    \begin{eqnarray} \|(x_S-u^{n+1})_{U^{n+1}}\|_2^2 & = & \langle (u^{n+1}-x_S)_{U^{n+1}}, (u^{n+1}-x_S)_{U^{n+1}}\rangle \\ & = &\langle (u^{n+1}-x_S)_{U^{n+1}}, \big[(I-A^TA)(u^{n+1}-x_S)\big]_{U^{n+1}}\rangle \\ &&+\langle (u^{n+1}-x_S)_{U^{n+1}}, \big[A^TA(u^{n+1}-x_S)\big]_{U^{n+1}}\rangle. \end{eqnarray} (3.20)

    Lemma 3.1 and \left|supp(u^{n+1}-x_S)\cup U^{n+1}\right|\leq s+\beta lead to

    \begin{equation} \langle (u^{n+1}-x_S)_{U^{n+1}}, \big[(I-A^TA)(u^{n+1}-x_S)\big]_{U^{n+1}}\rangle\leq\delta_{s+\beta}\|x_S-u^{n+1}\|_2\|(x_S-u^{n+1})_{U^{n+1}}\|_2. \end{equation} (3.21)

    Note that

    \begin{eqnarray} \langle (u^{n+1}-x_S)_{U^{n+1}}, \big[A^TA(u^{n+1}-x_S)\big]_{U^{n+1}}\rangle & = &\langle (u^{n+1}-x_S)_{U^{n+1}}, A^TA(u^{n+1}-x_S)\rangle \\ & = & \langle u^{n+1}-x_S, A^TA(u^{n+1}-x_S)_{U^{n+1}}\rangle \\ & = & \langle A^Te', (u^{n+1}-x_S)_{U^{n+1}}\rangle \\ & = & \langle (A^Te')_{U^{n+1}}, (u^{n+1}-x_S)_{U^{n+1}}\rangle \\ &\leq & \|(A^Te')_{U^{n+1}}\|_2 \|(u^{n+1}-x_S)_{U^{n+1}}\|_2, \end{eqnarray} (3.22)

    where the third equality comes from (3.19) since supp((u^{n+1}-x_S)_{U^{n+1}})\subset U^{n+1} . Putting (3.20), (3.21) and (3.22) together yields

    \begin{equation} \|(x_S-u^{n+1})_{U^{n+1}}\|_2 \leq \delta_{s+\beta}\|x_S-u^{n+1}\|_2+\|(A^Te')_{U^{n+1}}\|_2. \end{equation} (3.23)

    Therefore,

    \begin{eqnarray*} &&\|x_S-u^{n+1}\|_2^2\\ & = & \|(x_S-u^{n+1})_{\overline{U^{n+1}}}\|_2^2+\|(x_S-u^{n+1})_{U^{n+1}}\|_2^2\\ &\leq & \|(x_S-u^{n+1})_{\overline{U^{n+1}}}\|_2^2+\delta_{s+\beta}^2\|x_S-u^{n+1}\|_2^2 +2\delta_{s+\beta}\|(A^Te')_{U^{n+1}}\|_2\|x_S-u^{n+1}\|_2+\|(A^Te')_{U^{n+1}}\|_2^2, \end{eqnarray*}

    i.e.,

    \begin{array}{l} (1-\delta_{s+\beta}^2)\|x_S-u^{n+1}\|_2^2-2\delta_{s+\beta}\|(A^Te')_{U^{n+1}}\|_2\|x_S-u^{n+1}\|_2 -\\ \Big(\|(x_S-u^{n+1})_{\overline{U^{n+1}}}\|_2^2+\|(A^Te')_{U^{n+1}}\|_2^2\Big)\leq0. \end{array}

    This is a quadratic inequality on \|x_S-u^{n+1}\|_2 . So,

    \begin{eqnarray} &&\|x_S-u^{n+1}\|_2 \\ &\leq&\frac{2\delta_{s+\beta}\|(A^Te')_{U^{n+1}}\|_2+\sqrt{4(1-\delta^2_{s+\beta}) \|(x_S-u^{n+1})_{\overline{U^{n+1}}}\|_2^2+4\|(A^Te')_{U^{n+1}}\|_2^2}}{2(1-\delta_{s+\beta}^2)} \\ &\leq&\frac{\sqrt{1-\delta_{s+\beta}^2}\|(x_S-u^{n+1})_{\overline{U^{n+1}}}\|_2+(1+\delta_{s+\beta})\|(A^Te')_{U^{n+1}}\|_2}{1-\delta_{s+\beta}^2} \\ & = &\frac{\|(x_S-u^{n+1})_{\overline{U^{n+1}}}\|_2}{\sqrt{1-\delta_{s+\beta}^2}} +\frac{\|(A^Te')_{U^{n+1}}\|_2}{1-\delta_{s+\beta}}, \end{eqnarray} (3.24)

    where the second inequality comes from the fact that \sqrt{a^2+b^2}\leq a+b for all a, b\geq 0 .

    Due to supp(u^{n+1})\subseteq U^{n+1} , we obtain (u^{n+1})_{\overline{U^{n+1}}} = 0 . According to Lemma 3.3, we have

    \begin{eqnarray} \|(x_S-u^{n+1})_{\overline{U^{n+1}}}\|_2 = \|x_{S\backslash U^{n+1}}\|_2\leq\sqrt{2}\delta_{2s+\beta}\|x_S-x^n\|_2+\sqrt{2(1+\delta_{s+\beta})}\|e'\|_2. \end{eqnarray} (3.25)

    Combining (3.23) and (3.24) yields

    \begin{eqnarray} &&\|(x_S-u^{n+1})_{U^{n+1}}\|_2 \\ &\leq&\frac{\delta_{s+\beta}}{\sqrt{1-\delta_{s+\beta}^2}}\|(x_S-u^{n+1})_{\overline{U^{n+1}}}\|_2 +\frac{\delta_{s+\beta}}{1-\delta_{s+\beta}}\|(A^Te')_{U^{n+1}}\|_2+\|(A^Te')_{U^{n+1}}\|_2\\ & = &\frac{\delta_{s+\beta}}{\sqrt{1-\delta_{s+\beta}^2}}\|(x_S-u^{n+1})_{\overline{U^{n+1}}}\|_2 +\frac{1}{1-\delta_{s+\beta}}\|(A^Te')_{U^{n+1}}\|_2. \end{eqnarray} (3.26)

    Hence, it follows from (3.18) and (3.26) that

    \begin{eqnarray*} & &\|x_S-t^{n+1}\|_2^2\\ &\leq&\|(x_S-u^{n+1})_{\overline{U^{n+1}}}\|_2^2 +\left(\frac{\xi\delta_{s+\beta}}{\sqrt{1-\delta_{s+\beta}^2}}\|(x_S-u^{n+1})_{\overline{U^{n+1}}}\|_2 +\frac{\xi}{1-\delta_{s+\beta}}\|(A^Te')_{U^{n+1}}\|_2\right)^2\nonumber\\ &\leq&\left(\sqrt{\frac{1+\xi\delta_{s+\beta}^2}{1-\delta_{s+\beta}^2}}\|(x_S-u^{n+1})_{\overline{U^{n+1}}}\|_2 +\frac{\xi}{1-\delta_{s+\beta}}\|(A^Te')_{U^{n+1}}\|_2\right)^2, \end{eqnarray*}

    where the last step is due to the fact that a^2+(b+c)^2\leq (\sqrt{a^2+b^2}+c)^2 for all a, b, c\geq 0 , and \xi^2-1 = \xi since \xi = (\sqrt{5}+1)/2 . Taking the square root of both sides of this inequality and using (3.25) yields

    \begin{eqnarray} &&\|x_S-t^{n+1}\|_2 \\ &\leq& \sqrt{\frac{1+\xi\delta_{s+\beta}^2}{1-\delta_{s+\beta}^2}}\|(x_S-u^{n+1})_{\overline{U^{n+1}}}\|_2 +\frac{\xi}{1-\delta_{s+\beta}}\|(A^Te')_{U^{n+1}}\|_2 \\ &\leq&\sqrt{\frac{1+\xi\delta_{s+\beta}^2}{1-\delta_{s+\beta}^2}} \bigg(\sqrt{2}\delta_{2s+\beta}\|x_S-x^n\|_2+\\ &&\sqrt{2(1+\delta_{s+\beta})}\|e'\|_2\bigg) +\frac{\xi}{1-\delta_{s+\beta}}\|(A^Te')_{U^{n+1}}\|_2 \\ &\leq &\sqrt{\frac{2\delta_{2s+\beta}^2\left(1+\xi\delta_{s+\beta}^2\right)}{1-\delta_{s+\beta}^2}}\|x_S-x^n\|_2 +\left(\sqrt{\frac{2(1+\xi\delta_{s+\beta}^2)}{1-\delta_{s+\beta}}}+\frac{\xi\sqrt{1+\delta_\beta}}{1-\delta_{s+\beta}}\right) \|e'\|_2, \end{eqnarray} (3.27)

    where the third inequality follows from the fact that \|(A^Te')_{U^{n+1}}\|_2\leq \sqrt{1+\delta_{\beta}}\|e'\|_2 by Lemma 3.1.

    Note that supp(t^{n+1})\subseteq T^{n+1} , |T^{n+1}|\leq s , and

    x^{n+1} = \arg\min\limits_{z}\bigg\{\|y-Az\|_2:supp(z)\subseteq T^{n+1}\bigg\}.

    According to Lemma 3.4 , we have

    \begin{eqnarray*} \|x^{n+1}-x_S\|_2 &\leq&\frac{1}{\sqrt{1-\delta_{2s}^2}}\|(x_S-t^{n+1})_{\overline{T^{n+1}}}\|_2+\frac{\sqrt{1+\delta_{s}}} {1-\delta_{2s}}\|e'\|_2\\ &\leq& \frac{1}{\sqrt{1-\delta_{2s}^2}}\|x_S-t^{n+1}\|_2+\frac{\sqrt{1+\delta_{s}}} {1-\delta_{2s}}\|e'\|_2. \end{eqnarray*}

    Combining this with (3.27) , we obtain

    \begin{eqnarray*} \|x_S-x^{n+1}\|_2&\leq&\sqrt{\frac{2\delta_{2s+\beta}^2\left(1+\xi\delta_{s+\beta}^2\right)} {(1-\delta_{s+\beta}^2)(1-\delta_{2s}^2)}}\|x_S-x^n\|_2\\ &&+\left(\sqrt{\frac{2(1+\xi\delta_{s+\beta}^2)}{(1-\delta_{s+\beta})(1-\delta_{2s}^2)}} +\frac{\xi\sqrt{1+\delta_\beta}}{(1-\delta_{s+\beta})(\sqrt{1-\delta_{2s}^2})} +\frac{\sqrt{1+\delta_{s}}}{1-\delta_{2s}}\right)\|e'\|_2\\ & = &\rho\|x_S-x^n\|_2+\tau\|e'\|_2, \end{eqnarray*}

    where \rho , \tau are given in (3.15) and (3.16) , respectively. Hence,

    \begin{eqnarray*} \|x_S-x^{n+1}\|_2&\leq&\rho\|x_S-x^n\|_2+\tau\|e'\|_2\\ &\leq&\rho(\rho\|x_S-x^{n-1}\|_2+\tau\|e'\|_2)+\tau\|e'\|_2\\ &\leq&\cdots\cdots\cdots\\ &\leq&\rho^{n+1}\|x_S-x^0\|_2+\tau\frac{1-\rho^{n+1}}{1-\rho}\|e'\|_2. \end{eqnarray*}

    Since \delta_{s+\beta}\leq \delta_{2s+\beta} and \delta_{2s}\leq \delta_{2s+\beta} , it is easy to get

    \rho = \sqrt{\frac{\delta_{2s+\beta}^2\left(2+(\sqrt{5}+1)\delta_{s+\beta}^2\right)}{(1-\delta_{s+\beta}^2)(1-\delta_{2s}^2)}}\leq \sqrt{\frac{\delta_{2s+\beta}^2\left(2+(\sqrt{5}+1)\delta_{2s+\beta}^2\right)}{(1-\delta_{2s+\beta}^2)(1-\delta_{2s+\beta}^2)}}.

    To ensure \rho < 1 , it suffices to require the right side of the above inequality to be less than 1 , which is guaranteed by the following RIP bound:

    \delta_{2s+\beta} < \sqrt{\frac{1}{\sqrt{4+\sqrt{5}}+2}}.

    This completes the proof.

    The corresponding result for the noiseless case can be obtained immediately.

    Corollary 3.1. Let y = Ax be the measurement of an s -sparse signal x . If

    \begin{equation*} \delta_{2s+\beta} < \sqrt{\frac{1}{\sqrt{4+\sqrt{5}}+2}}, \end{equation*}

    then the iterative sequence \{x^n\} generated by CHTP approximates x with

    \begin{equation*} \|x^n-x\|_2\leq\rho^n\|x^0-x\|_2, \end{equation*}

    where \rho is given in (3.15).

    In the noiseless setting, the following result shows that a sparse signal can be identified by CHTP in a finite number of iterations.

    Theorem 3.2. If

    \begin{equation*} \delta_{2s+\beta} < \sqrt{\frac{1}{\sqrt{4+\sqrt{5}}+2}}, \end{equation*}

    then any s -sparse vector x\in\mathbb{R}^N can be recovered by CHTP with y = Ax in at most

    n = \left[\frac{\ln\left(\frac{\sqrt{2}\,\delta_{2s+\beta}\,\|x^0-x\|_2}{\theta}\right)}{\ln\left(\frac{1}{\rho}\right)}\right]+1

    iterations, where \rho is given by (3.15), \theta: = \min\limits_{i\in S}|x_i| and S: = supp(x) .

    Proof. It is sufficient to show that

    \begin{equation} |\big(x^n+A^TA(x-x^n)\big)_k| > |\big(x^n+A^TA(x-x^n)\big)_t|, \ \ \forall k\in S, \ t\in \bar{S}. \end{equation} (3.28)

    If (3.28) holds, we can obtain S\subseteq U^{n+1} from the definition of U^{n+1} , which leads to u^{n+1} = x . Furthermore, we can derive T^{n+1} = S and x^{n+1} = x from the CHTP algorithm directly.

    Next, we aim to prove (3.28). According to Lemma 3.1 and Corollary 3.1, we obtain

    \begin{eqnarray*} \label{26} |\big((I-A^TA)(x^n-x)\big)_k|+|\big((I-A^TA)(x^n-x)\big)_t| &\leq& \sqrt{2}\|\big((I-A^TA)(x^n-x)\big)_{\{k, t\}}\|_2\nonumber\\ &\leq& \sqrt{2}\delta_{2s+1}\|x^n-x\|_2\nonumber\\ &\leq& \sqrt{2}\delta_{2s+\beta}\|x^n-x\|_2\nonumber\\ &\leq& \sqrt{2}\delta_{2s+\beta}\rho^{n}\|x^0-x\|_2. \end{eqnarray*}

    It follows that

    \begin{eqnarray} |\big(x^n+A^TA(x-x^n)\big)_k|& = &|x_k+\big((I-A^TA)(x^n-x)\big)_k| \\ &\geq&\theta-|\big((I-A^TA)(x^n-x)\big)_k| \\ &\geq & \theta-\sqrt{2}\delta_{2s+\beta}\rho^{n}\|x^0-x\|_2+|\big((I-A^TA)(x^n-x)\big)_t| \\ & = & \theta-\sqrt{2}\delta_{2s+\beta}\rho^{n}\|x^0-x\|_2+|\big(x^n+A^TA(x-x^n)\big)_t|, \end{eqnarray} (3.29)

    where \theta = \min\limits_{i\in S}|x_i| , and the last step is due to the fact x_t = 0 for t\in \bar{S} . Taking

    n = \left[\frac{\ln\left(\frac{\sqrt{2}\,\delta_{2s+\beta}\,\|x^0-x\|_2}{\theta}\right)}{\ln\left(\frac{1}{\rho}\right)}\right]+1,

    we get \sqrt{2}\delta_{2s+\beta}\rho^{n}\|x^0-x\|_2 < \theta . Combining this with (3.29), we conclude that (3.28) holds. This completes the proof.

    Now, let us discuss the stability of CHTP. Recall first that the error of the best s -term approximation of a vector x is defined as

    \sigma_{s}(x)_{p}: = \inf \limits_{v}\{\|x-v\|_{p}: \|v\|_{0}\leq s\}.

    In particular, according to [23,Theorem 2.5], we have

    \begin{equation} \sigma_s(z)_2\leq \frac{1}{2\sqrt{s}}\|z\|_1\ \ \ \ \ \forall z. \end{equation} (3.30)
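    Since the best s -term approximation error in \ell_2 is attained by hard thresholding, the bound (3.30) is easy to verify numerically (our own sanity check, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
z, s = rng.standard_normal(100), 10
tail = np.sort(np.abs(z))[:-s]       # magnitudes of the N - s smallest entries
sigma_s = np.linalg.norm(tail)       # sigma_s(z)_2, attained by hard thresholding
assert sigma_s <= np.linalg.norm(z, 1) / (2 * np.sqrt(s))   # bound (3.30)
```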

    Theorem 3.3. Let

    \delta_{2s+\beta} < \sqrt{\frac{1}{\sqrt{4+\sqrt{5}}+2}}.

    Then, for all x\in\mathbb{R}^N and e\in \mathbb{R}^m , the iterate \{x^n\} generated by CHTP with y = Ax+e satisfies

    \|x-x^n\|_2\leq \left(\frac{1}{2\sqrt{t}}+\tau\frac{1-\rho^n}{1-\rho}\sqrt{\frac{1+\delta_t}{t}}\right) \sigma_{k}(x)_1+\tau\frac{1-\rho^n}{1-\rho}\|e\|_2 +\rho^n\|x_S-x^0\|_2,

    where \rho is given by (3.15), and

    k: = \left\{\begin{array}{ll} \frac{s}{2} \ \ \ \ \ & \ \ {\rm if} \ s \ \ {\rm is \ even}\\ \left[ \frac{s}{2}\right]+1 & \ \ {\rm if} \ s \ \ {\rm is \ odd} \end{array} \right. \ \ \ {\rm and} \ \ \ t: = s-k.

    Moreover, every cluster point x^* of the sequence \{x^n\} satisfies

    \|x-x^*\|_2\leq \left(\frac{1}{2\sqrt{t}}+\frac{\tau}{1-\rho}\sqrt{\frac{1+\delta_t}{t}}\right) \sigma_{k}(x)_1+ \frac{\tau}{1-\rho}\|e\|_2.

    Proof. Let T_1: = \mathcal{L}_k(x) , T_2: = \mathcal{L}_t(x_{\overline{T_1}}) , and

    T_3: = \mathcal{L}_t(x_{\overline{T_1\cup T_2}}), \ \cdots, \ T_{l-1}: = \mathcal{L}_t(x_{\overline{T_1\cup T_2\cup\cdots\cup T_{l-2}}}), \ \ T_l: = \mathcal{L}_r(x_{\overline{T_1\cup T_2\cup\cdots\cup T_{l-1}}}),

    where l, r satisfy (l-2)t+k+r = N and r\leq t . According to the structure of T_j , it follows from [23,Lemma 6.10] that

    \begin{equation} \|x_{T_j}\|_2\leq \frac{1}{\sqrt{t}}\|x_{T_{j-1}}\|_1, \ \ \ j = 3, \dots, l. \end{equation} (3.31)

    Taking into account Theorem 3.1, one has

    \begin{eqnarray*} \|x_S-x^n\|_2 &\leq& \rho^n\|x_S-x^0\|_2+\tau\frac{1-\rho^n}{1-\rho}\|Ax_{\overline{S}}+e\|_2 \nonumber \\ &\leq&\rho^n\|x_S-x^0\|_2+\tau\frac{1-\rho^n}{1-\rho}\Big(\|Ax_{T_3}\|_2+\|Ax_{T_4}\|_2+...+\|Ax_{T_l}\|_2+\|e\|_2\Big). \end{eqnarray*}

    It follows from (3.1) and (3.31) that

    \sum\limits_{i = 3}^l\|Ax_{T_i}\|_2\leq\sqrt{1+\delta_t}\sum\limits_{i = 3}^l\|x_{T_i}\|_2 \leq \sqrt{1+\delta_t} \frac{1}{\sqrt{t}}\sum\limits_{i = 2}^{l-1}\|x_{T_i}\|_1\leq \sqrt{1+\delta_t}\frac{1}{\sqrt{t}} \|x_{\overline{T_1}}\|_1.

    Hence,

    \begin{eqnarray} \|x_S-x^n\|_2 &\leq&\rho^n\|x_S-x^0\|_2+\tau\frac{1-\rho^n}{1-\rho} \left(\sqrt{\frac{1+\delta_t}{t}}\|x_{\overline{T_1}}\|_1+\|e\|_2\right) \\ & = &\rho^n\|x_S-x^0\|_2+\tau\frac{1-\rho^n}{1-\rho} \left(\sqrt{\frac{1+\delta_t}{t}}\sigma_{k}(x)_1+\|e\|_2\right). \end{eqnarray} (3.32)

    Using (3.30), we get

    \begin{equation} \|x_{\overline{S}}\|_2 = \sigma_t(x_{\overline{T_1}})_2 \leq \frac{1}{2\sqrt{t}}\|x_{\overline{T_1}}\|_1 = \frac{1}{2\sqrt{t}}\sigma_{k}(x)_1. \end{equation} (3.33)

    Putting (3.32) and (3.33) together implies

    \begin{eqnarray*} \|x-x^n\|_2& = &\|x_{\overline{S}}+x_{S}-x^n\|_2\\ &\leq&\|x_{\overline{S}}\|_2+\|x_S-x^n\|_2\\ &\leq&\frac{1}{2\sqrt{t}}\sigma_{k}(x)_1+\rho^n\|x_S-x^0\|_2+\tau\frac{1-\rho^n}{1-\rho} \big(\sqrt{\frac{1+\delta_t}{t}}\sigma_{k}(x)_1+\|e\|_2\big)\\ & = &\left(\frac{1}{2\sqrt{t}} +\tau\frac{1-\rho^n}{1-\rho}\sqrt{\frac{1+\delta_t}{t}}\right) \sigma_{k}(x)_1+\tau\frac{1-\rho^n}{1-\rho}\|e\|_2+\rho^n\|x_S-x^0\|_2. \end{eqnarray*}

    Since \rho < 1 , taking n\rightarrow \infty in the above inequality yields

    \begin{eqnarray*} \|x-x^*\|_2\leq \left(\frac{1}{2\sqrt{t}}+\frac{\tau}{1-\rho}\sqrt{\frac{1+\delta_t}{t}}\right) \sigma_{k}(x)_1+ \frac{\tau}{1-\rho}\|e\|_2. \end{eqnarray*}

    This completes the proof.

    In this section, numerical experiments are carried out to check the effectiveness of CHTP. All experiments were conducted on a personal computer with the processor Intel(R) Core(TM) i5-7200U, CPU @ 2.50 GHz and 4 GB memory. The s -sparse standard Gaussian vector x^*\in\mathbb{R}^N and standard Gaussian matrix A\in \mathbb{R}^{m\times N} with (m, N) = (400,800) are randomly generated, and the positions of non-zero elements of x^* are uniformly distributed. According to [23,Theorem 9.27], if 0 < \eta, \varepsilon < 1 , and

    m\geq2\eta^{-2}\big[s (1+\ln(N/s))+\ln(2\varepsilon^{-1})\big],

    then the restricted isometry constant \delta_s of \frac{1}{\sqrt{m}}A satisfies

    \delta_s\leq\left[2+\eta g(N, s) \right]\cdot\eta g(N, s),

    with probability greater than or equal to 1-\varepsilon , where

    g(N, s): = 1+[2(1+\ln(N/s))]^{-1/2}.
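    For a quick look at what this guarantee yields for concrete sizes, the two displayed formulas can be evaluated directly (our own sketch; the function name and the choice to solve the sample-size condition for the smallest admissible \eta are assumptions):

```python
import numpy as np

def gaussian_ric_bound(m, N, s, eps=0.01):
    """Bound on delta_s of A/sqrt(m) from [23, Theorem 9.27], or None if it does not apply."""
    g = 1.0 + (2.0 * (1.0 + np.log(N / s))) ** (-0.5)        # g(N, s)
    # smallest eta satisfying m >= 2*eta^{-2}*[s*(1 + ln(N/s)) + ln(2/eps)]
    eta = np.sqrt(2.0 * (s * (1.0 + np.log(N / s)) + np.log(2.0 / eps)) / m)
    if eta >= 1.0:
        return None      # the condition 0 < eta < 1 fails, so the theorem gives no bound
    return (2.0 + eta * g) * eta * g   # holds with probability >= 1 - eps
```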

    Hence, to satisfy the restricted isometry bound (3.14) with high probability, we replace (1.1) by the following model throughout the numerical experiments:

    \min\limits_{x} \Big\{\frac{1}{2}\|\widetilde{y}-\widetilde{A}x\|_2^2:\|x\|_{0}\leq s\Big\},

    where \widetilde{A}: = A/\sqrt{m} , and \widetilde{y}: = y/\sqrt{m} . Moreover, the measurement vector y is given by y = Ax^* in the noiseless setting and y = Ax^*+0.005e in the noisy setting, where e\in\mathbb{R}^m is a standard Gaussian random vector. Unless otherwise stated, all algorithms adopt x^0 = 0 as the initial point and the condition

    \|x^n-x^*\|_2/\|x^*\|_2\leq10^{-3}

    as the successful reconstruction criterion. OMP is performed with s iterations, while the other algorithms are performed with at most 50 iterations. For each sparsity level s , 100 random problem instances are tested to investigate the recovery abilities of all algorithms, as shown in Figures 1–3.

    Figure 1.  Successful recovery of CHTP with different \beta values.
    Figure 2.  Comparison of successful recovery performances for different algorithms.
    Figure 3.  Comparison of the average number of iterations and recovery runtime for different algorithms.
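    To make the setup reproducible, here is a minimal sketch of one random trial as described above (our own reconstruction; chtp refers to the sketch given after the CHTP algorithm, and the helper name run_instance is an assumption):

```python
import numpy as np

def run_instance(s, m=400, N=800, noisy=False, seed=None):
    """One random trial of the experimental setup; returns True on successful recovery."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, N))
    x_star = np.zeros(N)
    support = rng.choice(N, size=s, replace=False)      # uniformly placed nonzeros
    x_star[support] = rng.standard_normal(s)
    y = A @ x_star + (0.005 * rng.standard_normal(m) if noisy else 0.0)
    A_t, y_t = A / np.sqrt(m), y / np.sqrt(m)           # normalized model
    x = chtp(A_t, y_t, s, beta=int(round(1.1 * s)))
    return np.linalg.norm(x - x_star) / np.linalg.norm(x_star) <= 1e-3
```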

    The first experiment is performed to illustrate the effect of the parameter \beta involved in CHTP on the reconstruction ability. We pick different values of \beta from \{s, [1.1s], [1.2s], [1.3s], 2s\} and s\in \{120, 123, \ldots, 240\} , where [\cdot] denotes the round function. Numerical results with accurate measurements and inaccurate measurements are shown in Figures 1(a) and 1(b), respectively. The results shown in Figure 1(b) are similar to those in Figure 1(a), which indicates that CHTP is stable under weak noise. It is also observed that the recovery ability of CHTP is sensitive to the parameter \beta , and it becomes worse with the increase of \beta for \beta\leq [1.3s] . In particular, CHTP reduces to HTP as \beta = s . Thus, CHTP with \beta = [1.1s] , [1.2s] and [1.3s] performs better than HTP in both noiseless and noisy settings. In addition, it should be noted that CHTP with \beta = 2s performs better than HTP as s\leq 190 , while the success rate of the former suddenly drops to zero as the sparsity level s approaches m/2 . This phenomenon is similar to the performance of SP in Figure 2. The simulations indicate that the performance of the hard thresholding operator can be improved by introducing a compressive step first.

    To investigate the effectiveness of CHTP, we compare it with four other mainstream algorithms: HTP, OMP, CoSaMP and SP. The corresponding results are displayed in Figures 2 and 3. Based on the above discussion of the performance of CHTP, the parameter \beta in this experiment is set to [1.1s] . The sparsity level s ranges from 120 to 258 with stepsize 3 in Figure 2(a), and from 1 to 298 with stepsize 3 in Figure 2(b). The comparison of recovery abilities for the different algorithms is displayed in Figure 2(a) with accurate measurements and in Figure 2(b) with inaccurate measurements; Figure 2(c) is an enlarged image of Figure 2(b). Figure 2 indicates that CHTP is competitive with HTP, OMP and SP in both noiseless and noisy scenarios. Furthermore, the recovery ability of CHTP remarkably exceeds that of CoSaMP. Figure 2(b) also indicates that, in the presence of noise, the sparse signal reconstruction abilities of all algorithms are significantly weakened when the sparsity level s\leq 10 .

    The next question we examine is the average number of iterations and the average recovery runtime of the different algorithms with accurate measurements. Since the number of iterations of OMP is fixed at s , we compare all algorithms except OMP in Figure 3, where s\leq 120 is considered to ensure that the success rates of all algorithms are 100%. Figure 3 shows that the larger s is, the more iterations and the more computational time are required by all algorithms. From Figure 3(b), we see that the average recovery runtimes of all algorithms are close to each other for s < 50 . Although CHTP needs more iterations than the other algorithms in Figure 3(a), it consumes less time than CoSaMP and SP for s\geq 50 in Figure 3(b). From this point of view, CHTP is competitive with the other algorithms mentioned above.

    In [21,27], the authors point out that the hard thresholding operator is independent of the objective function, and that this inherent drawback may cause the objective function to increase during the iterative process. Furthermore, they suggest that the hard thresholding operator should be applied to a compressible vector to overcome this drawback. Motivated by this idea, we proposed the CHTP algorithm in this paper, which reduces to the standard HTP as \beta = s . To minimize the negative effect of the hard thresholding operator on the objective function, the orthogonal projection is used twice in CHTP. The convergence analysis of CHTP was established by utilizing the restricted isometry property of the sensing matrix. Numerical experiments indicated that CHTP with \beta = [1.1s] performs better than in the other cases. Simulations showed that CHTP is competitive with other popular algorithms such as HTP, OMP and SP, both in sparse signal reconstruction ability and in average recovery runtime.

    The second author's work is supported by the National Natural Science Foundation of China (11771255), Young Innovation Teams of Shandong Province (2019KJI013) and Shandong Province Natural Science Foundation (ZR2021MA066). The fourth author's work is supported by the Natural Science Foundation of Henan Province (222300420520) and Key Scientific Research Projects of Higher Education of Henan Province (22A110020).

    All authors declare no conflicts of interest in this paper.



    [1] D. O. Jackson, T. Fukuda, O. Dunn, On a q-definite integrals, Quarterly J. Pure Appl. Math., 41 (1910), 193–203.
    [2] T. Ernst, A comprehensive treatment of q-calculus, Basel: Springer, 2012.
    [3] H. Gauchman, Integral inequalities in q-calculus, Comput. Math. Appl., 47 (2004), 281–300. doi: 10.1016/S0898-1221(04)90025-9
    [4] V. Kac, P. Cheung, Quantum calculus, New York: Springer, 2002.
    [5] J. Tariboon, S. K. Ntouyas, Quantum integral inequalities on finite intervals, J. Inequal. Appl., 2014 (2014), 121. doi: 10.1186/1029-242X-2014-121
    [6] J. Tariboon, S. K. Ntouyas, Quantum calculus on finite intervals and applications to impulsive difference equations, Adv. Differ. Equ., 2013 (2013), 282. doi: 10.1186/1687-1847-2013-282
    [7] M. A. Noor, K. I. Noor, M. U. Awan, Some quantum integral inequalities via preinvex functions, Appl. Math. Comput., 269 (2015), 242–251.
    [8] W. Sudsutad, S. K. Ntouyas, J. Tariboon, Quantum integral inequalities for convex functions, J. Math. Inequal., 9 (2015), 781–793.
    [9] Y. Zhang, T. S. Du, H. Wang, Y. J. Shen, Different types of quantum integral inequalities via (\alpha, m)-convexity, J. Inequal. Appl., 2018 (2018), 1–24. doi: 10.1186/s13660-017-1594-6
    [10] N. Alp, M. Z. Sarıkaya, M. Kunt, İ. İşcan, q-Hermite-Hadamard inequalities and quantum estimates for midpoint type inequalities via convex and quasi-convex functions, J. King Saud Univ. Sci., 30 (2018), 193–203. doi: 10.1016/j.jksus.2016.09.007
    [11] H. Kalsoom, S. Rashid, M. Idrees, Y. M. Chu, D. Baleanu, Two-variable quantum integral inequalities of simpson-type based on higher-order generalized strongly preinvex and quasi-preinvex functions, Symmetry, 12 (2020), 1–20.
    [12] Y. Deng, H. Kalsoom, S. Wu, Some new quantum Hermite-Hadamard-type estimates within a class of generalized (s, m) -preinvex functions, Symmetry, 11 (2019), 1283. doi: 10.3390/sym11101283
    [13] H. Kalsoom, J. Wu, S. Hussain, M. A. Latif, Simpson's type inequalities for co-ordinated convex functions on quantum calculus, Symmetry, 11 (2019), 768. doi: 10.3390/sym11060768
    [14] H. Kalsoom, M. Idrees, D. Baleanu, Y. M. Chu, New estimates of q_1q_2 -Ostrowski-type inequalities within a class of n-polynomial preinvexity of functions, J. Funct. Spaces, 2020 (2020), 1–13.
    [15] X. You, H. Kara, H. Budak, H. Kalsoom, Quantum inequalities of Hermite-Hadamard type for r-convex functions, J. Math., 2021 (2021), 1–14.
    [16] H. Chu, H. Kalsoom, S. Rashid, M. Idrees, F. Safdar, Y. M. Chu, et al., Quantum analogs of Ostrowski-type inequalities for Raina's function correlated with coordinated generalized \Phi -convex functions, Symmetry, 12 (2020), 308. doi: 10.3390/sym12020308
    [17] T. S. Du, C. Y. Luo, B. Yu, Certain quantum estimates on the parameterized integral inequalities and their applications, J. Math. Inequal., 15 (2021), 201–228.
    [18] S. Bermudo, P. Kórus, J. N. Valdés, On q-Hermite-Hadamard inequalities for general convex functions, Acta Math. Hung., 162 (2020), 364–374. doi: 10.1007/s10474-020-01025-6
    [19] H. M. Srivastava, Operators of basic (or q-)calculus and fractional q-calculus and their applications in geometric function theory of complex analysis, Iran. J. Sci. Technol., Trans. A: Sci., 44 (2020), 327–344. doi: 10.1007/s40995-019-00815-0
    [20] J. Hadamard, Étude sur les propriétés des fonctions entières et en particulier d'une fonction considérée par Riemann, J. Math. Pures Appl., 58 (1893), 171–215.
    [21] P. O. Mohammed, New generalized Riemann-Liouville fractional integral inequalities for convex functions, J. Math. Inequal., 15 (2021), 511–519.
    [22] H. M. Srivastava, Z. H. Zhang, Y. D. Wu, Some further refinements and extensions of the Hermite-Hadamard and Jensen inequalities in several variables, Math. Comput. Model., 54 (2011), 2709–2717. doi: 10.1016/j.mcm.2011.06.057
    [23] M. A. Alqudah, A. Kashuri, P. O. Mohammed, T. Abdeljawad, M. Raees, M. Anwar, et al., Hermite-Hadamard integral inequalities on coordinated convex functions in quantum calculus, Adv. Differ. Equ., 2021 (2021), 1–29. doi: 10.1186/s13662-020-03162-2
    [24] H. Kalsoom, S. Hussain, S. Rashid, Hermite-Hadamard type integral inequalities for functions whose mixed partial derivatives are co-ordinated preinvex, Punjab Univ. J. Math., 52 (2020), 63–76.
    [25] P. O. Mohammed, C. S. Ryoo, A. Kashuri, Y. S. Hamed, K. M. Abualnaja, Some Hermite-Hadamard and Opial dynamic inequalities on time scales, J. Inequal. Appl., 2021 (2021), 1–11. doi: 10.1186/s13660-020-02526-2
    [26] P. O. Mohammed, New integral inequalities for preinvex functions via generalized beta function, J. Interdiscip. Math., 22 (2019), 539–549. doi: 10.1080/09720502.2019.1643552
    [27] H. Kalsoom, S. Hussain, Some Hermite-Hadamard type integral inequalities whose n-times differentiable functions are s-logarithmically convex functions, Punjab Univ. J. Math., 2019 (2019), 65–75.
    [28] A. Fernandez, P. Mohammed, Hermite-Hadamard inequalities in fractional calculus defined using Mittag-Leffler kernels, Math. Methods Appl. Sci., 44 (2021), 8414–8431. doi: 10.1002/mma.6188
    [29] T. Weir, B. Mond, Preinvex functions in multiple objective optimization, J. Math. Anal. Appl., 136 (1988), 29–38. doi: 10.1016/0022-247X(88)90113-8
    [30] B. T. Polyak, Existence theorems and convergence of minimizing sequences in extremum problems with restrictions, Soviet Math. Dokl., 7 (1966), 72–75.
    [31] D. L. Zu, P. Marcotte, Co-coercivity and its role in the convergence of iterative schemes for solving variational inequalities, SIAM J. Optim., 6 (1996), 714–726. doi: 10.1137/S1052623494250415
    [32] K. Nikodem, Z. S. Pales, Characterizations of inner product spaces by strongly convex functions, Banach J. Math. Anal., 5 (2011), 83–87. doi: 10.15352/bjma/1313362982
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
