Research article

Modified nonmonotonic projection Barzilai-Borwein gradient method for nonnegative matrix factorization

  • Received: 24 November 2023 Revised: 09 April 2024 Accepted: 23 April 2024 Published: 15 July 2024
  • MSC : 15A23, 65F30

• In this paper, an active set recognition technique is suggested, and a modified nonmonotonic line search rule is then presented to enhance the efficiency of the nonmonotonic line search, in which we introduce a new parameter formula that attempts to control the degree of nonmonotonicity of the line search and thus improve the chance of discovering the global minimum. Using the modified line search and the active set recognition technique, a globally convergent gradient method for nonnegative matrix factorization (NMF) based on an alternating nonnegative least squares framework is proposed. We use a Barzilai-Borwein step size together with larger step-size strategies to speed up the convergence. Finally, a large number of numerical experiments were carried out on synthetic and image datasets, and the results show that the presented method is effective in terms of computational speed and solution quality.

    Citation: Xiaoping Xu, Jinxuan Liu, Wenbo Li, Yuhan Xu, Fuxiao Li. Modified nonmonotonic projection Barzilai-Borwein gradient method for nonnegative matrix factorization[J]. AIMS Mathematics, 2024, 9(8): 22067-22090. doi: 10.3934/math.20241073




Order statistics (OSs) play an important role in nonparametric statistics. Under the assumption of a large sample size, related investigations mainly focus on the asymptotic distributions of certain functions of these OSs. Among these studies, an elegant one provided by Bahadur in 1966 (see [1]) is the central limit theorem on OSs. As revealed there, when the population is absolutely continuous, a sequence of suitably normalized OSs usually has an asymptotic standard normal distribution, which is useful for constructing a confidence interval for a given quantile of the population. The study of moment convergence of such sequences is comparably significant: for instance, if we use a sample quantile as an asymptotically unbiased estimator of the corresponding population quantile, then the convergence of the second moment of the sequence is needed in order to approximate the mean square error of the estimate.

However, the analysis of moment convergence of OSs is usually very difficult; the reason, as interpreted by Thomas and Sreekumar in [2], may lie in the fact that the moments of OSs are themselves usually very difficult to obtain.

For a random sequence, it is well known that convergence in distribution does not necessarily guarantee the convergence of the corresponding moments; usually, this obstacle can be overcome by additionally requiring uniform integrability of the sequence. For instance, [3] deals with some extreme OSs under certain populations: in that article, Wang et al. discussed the uniform integrability of sequences of normalized extreme OSs and derived equivalent moment expressions.

In the following theorem, we discuss moment convergence for some central OSs rather than extreme ones.

Theorem 1. For a population $X$ distributed according to a continuous probability density function (pdf) $f(x)$, let $p\in(0,1)$ and let $x_p$ be the $p$-quantile of $X$ satisfying $f(x_p)>0$. Let $(X_1,\ldots,X_n)$ be a random sample arising from $X$ and $X_{i:n}$ be the $i$th OS. If the cumulative distribution function (cdf) $F(x)$ of $X$ has an inverse function $G(x)$ satisfying

$$|G(x)|\le B\,x^{-q}(1-x)^{-q} \qquad (1.1)$$

for some constants $B>0$, $q\ge 0$ and all $x\in(0,1)$, then for arbitrary $\delta>0$ we have

$$\lim_{n\to\infty}EX_{i:n}^{\delta}=x_p^{\delta},$$

provided $\lim_{n\to+\infty}i/n=p$, or equivalently $i/n=p+o(1)$.

Remark 1. Now we use the symbol $\lfloor z\rfloor$ for the integer part of a positive number $z$ and $m_{n,p}$ for the $p$-quantile of a random sample $(X_1,\ldots,X_n)$, namely, $m_{n,p}=(X_{pn:n}+X_{pn+1:n})/2$ if $pn$ is an integer and $m_{n,p}=X_{\lfloor pn\rfloor+1:n}$ otherwise. As both limiting conclusions $\lim_{n\to\infty}EX_{\lfloor pn\rfloor:n}^{\delta}=x_p^{\delta}$ and $\lim_{n\to\infty}EX_{\lfloor pn\rfloor+1:n}^{\delta}=x_p^{\delta}$ hold under the conditions of Theorem 1, and $m_{n,p}^{\delta}$ is always squeezed between $X_{\lfloor pn\rfloor:n}^{\delta}$ and $X_{\lfloor pn\rfloor+1:n}^{\delta}$, the Sandwich Theorem gives $\lim_{n\to\infty}Em_{n,p}^{\delta}=x_p^{\delta}$.
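For concreteness, here is a minimal NumPy sketch (ours, not part of the paper) of the sample $p$-quantile $m_{n,p}$ just defined; the function name and the illustrative data are arbitrary:

```python
import numpy as np

def sample_quantile(x, p):
    """Sample p-quantile m_{n,p} of Remark 1: the average of the (pn)-th and
    (pn+1)-th order statistics when pn is an integer, and the
    (floor(pn)+1)-th order statistic otherwise."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    k = p * n
    if float(k).is_integer():
        k = int(k)
        return 0.5 * (x[k - 1] + x[k])   # order statistics are 1-indexed
    return x[int(np.floor(k))]           # the (floor(pn)+1)-th OS, 0-indexed

# Example: the sample median of standard normal data approaches x_{0.5} = 0.
rng = np.random.default_rng(0)
print(sample_quantile(rng.standard_normal(10001), 0.5))
```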

Remark 2. For a continuous function $H(x)$, $x\in(0,1)$: if

$$\lim_{x\to 0^{+}}H(x)=\lim_{x\to 1^{-}}H(x)=0,$$

then there is a constant $C>0$ such that the inequality $|H(x)|\le C$ holds for all $x\in(0,1)$. For that reason, condition (1.1) can be replaced by the statement that there exists some constant $V\ge 0$ such that

$$\lim_{x\to 0^{+}}G(x)\,x^{V}(1-x)^{V}=\lim_{x\to 1^{-}}G(x)\,x^{V}(1-x)^{V}=0.$$

Remark 3. As the conclusion concerns moment convergence of OSs, one may think that the moment of the population $X$ in Theorem 1 should exist. That is a misunderstanding: the existence of the population moment is actually unnecessary. We can verify this with a population following the well-known Cauchy distribution $X\sim f(x)=\frac{1}{\pi(1+x^{2})}$, $x\in(-\infty,+\infty)$; in this case the moment $EX$ of the population does not exist, whereas the conditions required in Theorem 1 are satisfied. Even for some populations without any moment of positive order, the conclusion of Theorem 1 still holds. For instance, if $f(x)=\frac{1}{x(\ln x)^{2}}I_{[e,\infty)}(x)$ (where the symbol $I_A(x)$ or $I_A$ stands for the indicator function of a set $A$), then we have

$$G(x)=e^{\frac{1}{1-x}}\,I_{(0,1)}(x),$$

which leads to

$$\lim_{x\to 0^{+}}G(x)\,x(1-x)=\lim_{x\to 1^{-}}G(x)\,x(1-x)=0,$$

and therefore condition (1.1) holds; thus Theorem 1 is applicable. That contradicts the statement in the final part of paper [4], which claims that, for $X\sim f(x)=\frac{1}{x(\ln x)^{2}}I_{[e,\infty)}(x)$, no OS has a moment of any positive order.

According to Theorem 1, we know that the OS $X_{i:n}$ of interest is an asymptotically unbiased estimator of the corresponding population quantile $x_p$. We now explore the order of the mean error of this estimate and derive the following.

Theorem 2. Let $(X_1,\ldots,X_n)$ be a random sample from a population $X$ possessing a continuous pdf $f(x)$. Let $p\in(0,1)$, let $x_p$ be the $p$-quantile of $X$ satisfying $f(x_p)>0$, and let $X_{i:n}$ be the $i$th OS. If the cdf $F(x)$ of $X$ has an inverse function $G(x)$ whose third derivative $G'''(x)$ is continuous in $(0,1)$ and there is a constant $U\ge 0$ such that

$$\lim_{x\to 0^{+}}\big(G'''(x)\,x^{U}(1-x)^{U}\big)=\lim_{x\to 1^{-}}\big(G'''(x)\,x^{U}(1-x)^{U}\big)=0, \qquad (1.2)$$

then under the assumption $i/n=p+O(n^{-1})$, i.e., $\frac{i/n-p}{1/n}$ remains bounded as $n\to\infty$, the following proposition stands:

$$|E(X_{i:n}-x_p)|=O(1/n). \qquad (1.3)$$

Remark 4. Obviously, $|E(m_{n,p}-x_p)|=O(1/n)$ under the conditions of Theorem 2.

For i.i.d. random variables (RVs) $X_1,\ldots,X_n$ with common expectation $\mu$ and common finite standard deviation $\sigma>0$, the famous Lévy-Lindeberg central limit theorem reveals that the sequence of normalized sums

$$\left\{\frac{\sum_{i=1}^{n}X_i-n\mu}{\sqrt{n}\,\sigma},\ n\ge 1\right\}$$

converges in distribution to the standard normal distribution $N(0,1^{2})$, which we denote as

$$\frac{\sum_{i=1}^{n}X_i-n\mu}{\sqrt{n}\,\sigma}\xrightarrow{D}N(0,1^{2}).$$

In 1964, Bengt presented his work [5] showing that if it is further assumed that $E|X_1|^{k}<+\infty$ for some specific positive $k$, then the $m$-th moment convergence conclusion

$$E\left(\frac{\sum_{i=1}^{n}X_i-n\mu}{\sqrt{n}\,\sigma}\right)^{m}\to EZ^{m},\qquad n\to+\infty, \qquad (1.4)$$

holds for any positive $m$ satisfying $m\le k$. Here and throughout our paper, we denote by $Z$ a RV with the standard normal distribution $N(0,1^{2})$.
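For later reference (a standard fact we add for the reader's convenience, not a claim taken from [5]), the moments of $Z$ are

$$EZ^{m}=\begin{cases}0, & m\ \text{odd},\\ (m-1)!!=1\cdot 3\cdots(m-1), & m\ \text{even},\end{cases}\qquad\text{so that } EZ^{2}=1,\ EZ^{4}=3,\ EZ^{6}=15,$$

the last value being the one used in Remark 6 below.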

Let $f(x)$ be a continuous pdf of a population $X$ and let $x_r$ be the $r$-quantile of $X$ satisfying $f(x_r)>0$. In the spirit of the Lévy-Lindeberg central limit theorem, Bahadur showed in [1] (1966) that, for the OS $X_{i:n}$, the following convergence conclusion holds:

$$\frac{f(x_r)(X_{i:n}-x_r)}{\sqrt{r(1-r)/n}}\xrightarrow{D}N(0,1^{2}),$$

provided $i/n\to r$ as $n\to\infty$.

Later, in 1967, Peter studied moment convergence on a similar topic. He obtained in [6] that, for some $\varepsilon>0$, $r\in(0,1)$ and $p_n=i/n$, if the limit condition

$$\lim_{x\to\infty}x^{\varepsilon}\big[1-F(x)+F(-x)\big]=0$$

holds, then the conclusion

$$E\Big(X_{i:n}-x_{\frac{i}{n+1}}\Big)^{k}=\left[\frac{\sqrt{p_n(1-p_n)/n}}{f(x_{p_n})}\right]^{k}\int_{-\infty}^{\infty}\frac{x^{k}}{\sqrt{2\pi}}e^{-x^{2}/2}\,dx+o(n^{-k/2})$$

holds for positive integer $k$, provided $rn\le i\le(1-r)n$ as $n\to+\infty$.

In addition to the references mentioned above dealing with moment convergence of OSs, some more desirable conclusions on this topic were provided by Reiss in [7] in 1989, from which we excerpt the one of interest as follows.

Theorem 3. Let $f(x)$ and $F(x)$ be, respectively, the pdf and cdf of a population $X$. Let $p\in(0,1)$ and let $x_p$ be the $p$-quantile of $X$ satisfying $f(x_p)>0$. Assume that on a neighborhood of $x_p$ the cdf $F(x)$ has $m+1$ bounded derivatives. If a positive integer $i$ satisfies $i/n=p+O(n^{-1})$, $E|X_{s:j}|<\infty$ holds for some positive integer $j$ and some $s\in\{1,\ldots,j\}$, and a measurable function $h(x)$ meets the requirement $|h(x)|\le|x|^{k}$ for some positive integer $k$, then

$$E\,h\!\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)=\int h(x)\,d\!\left(\Phi(x)+\varphi(x)\sum_{i=1}^{m-1}n^{-i/2}S_{i,n}(x)\right)+O(n^{-m/2}). \qquad (1.5)$$

Here $\varphi(x)$ and $\Phi(x)$ are, respectively, the pdf and cdf of the standard normal distribution, while $S_{i,n}(x)$ is a polynomial in $x$ of degree not more than $3i-1$ with coefficients uniformly bounded over $n$; in particular,

$$S_{1,n}(x)=\left[\frac{2q-1}{3\sqrt{p(1-p)}}+\frac{\sqrt{p(1-p)}\,f'(x_p)}{2(f(x_p))^{2}}\right]x^{2}+\frac{np-i+1-p}{\sqrt{p(1-p)}}+\frac{2(2p-1)}{3\sqrt{p(1-p)}}.$$

Remark 5. By putting $h(x)=x^{2}$ and $m=2$, we derive under the conditions of Theorem 3 that, as $n\to+\infty$,

$$E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)^{2}=\int x^{2}\,d\!\left(\Phi(x)+\varphi(x)\,n^{-1/2}S_{1,n}(x)\right)+O(n^{-1})\to 1.$$

Therefore, we see that the sequence

$$\left\{E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)^{2},\ n\ge N_0\right\}$$

is uniformly bounded over $n\ge N_0$. Here $N_0$ is a positive integer such that the moment $EX_{i:n}^{2}$ exists whenever $n\ge N_0$. In accordance with the inequality $|E\xi|\le\sqrt{E\xi^{2}}$, valid whenever the moment $E\xi^{2}$ exists, the sequence

$$\left\{E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right),\ n\ge N_0\right\}$$

is also uniformly bounded, say by a number $L$, over $n\in\{N_0,N_0+1,\ldots\}$. Now that

$$\left|E\,\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right|\le L,\qquad n\ge N_0,$$

we have

$$|E(X_{i:n}-x_p)|\le L\,\big[\sqrt{p(1-p)}/f(x_p)\big]\,n^{-1/2},\qquad n\ge N_0. \qquad (1.6)$$

Under the conditions of Theorem 2, when we estimate a population quantile $x_p$ by an OS $X_{i:n}$, the estimate is usually not unbiased; comparing the two conclusions (1.3) and (1.6), the result (1.3) of Theorem 2 is the more accurate one.

Remark 6. For a random sample $(Y_1,Y_2,\ldots,Y_n)$ from a uniformly distributed population $Y\sim U[0,1]$, we write $Y_{i:n}$ for the $i$th OS. Obviously, the conditions of Theorem 3 are fulfilled for any positive integer $m\ge 2$. That yields

$$E\left(\frac{n^{1/2}(Y_{i:n}-p)}{\sqrt{p(1-p)}}\right)^{2}=\int x^{2}\,d\!\left(\Phi(x)+\varphi(x)\,n^{-1/2}S_{1,n}(x)\right)+O(n^{-1})=1+O(n^{-1/2}),$$

and

$$E\left(\frac{n^{1/2}(Y_{i:n}-p)}{\sqrt{p(1-p)}}\right)^{6}=\int x^{6}\,d\!\left(\Phi(x)+\varphi(x)\sum_{i=1}^{5}n^{-i/2}S_{i,n}(x)\right)+O(n^{-3})=\int x^{6}\varphi(x)\,dx+\sum_{i=1}^{5}\alpha_i(n)\,n^{-i/2}+O(n^{-3})=15+\sum_{i=1}^{5}\alpha_i(n)\,n^{-i/2}+O(n^{-3}),$$

where, for each $i=1,2,\ldots,5$, $\alpha_i(n)$ is uniformly bounded over $n$.

From the above analysis, we conclude that, under the assumption $i/n=p+O(n^{-1})$,

$$E(Y_{i:n}-p)^{2}\sim p(1-p)\,n^{-1}\qquad\text{and}\qquad E(Y_{i:n}-p)^{6}\sim 15\,p^{3}(1-p)^{3}\,n^{-3}. \qquad (1.7)$$
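As a quick illustration (ours), the first relation in (1.7) can be checked by simulation; the sample sizes, the value of $p$, and the seed below are arbitrary choices:

```python
import numpy as np

# Monte Carlo check of E(Y_{i:n} - p)^2 ~ p(1-p)/n for Y ~ U[0,1].
rng = np.random.default_rng(1)
p, reps = 0.3, 20_000
for n in (50, 200, 800):
    i = int(np.ceil(p * n))                    # i/n -> p
    samples = rng.random((reps, n))
    y_in = np.sort(samples, axis=1)[:, i - 1]  # i-th order statistic
    print(n, n * np.mean((y_in - p) ** 2), "vs p(1-p) =", p * (1 - p))
```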

Based on Theorems 1 and 3, we now give some alternative conditions to those of Theorem 3 in order to extend its range of applications to situations where the population $X$ in Theorem 3 has no finite moment of any positive order. We obtain:

Theorem 4. Let $(X_1,\ldots,X_n)$ be a random sample derived from a population $X$ with a continuous pdf $f(x)$. Let $p\in(0,1)$ and let $x_p$ be the $p$-quantile of $X$ satisfying $f(x_p)>0$ on a neighborhood of $x_p$, and let the following three conditions hold:

(i) The cdf $F(x)$ of $X$ has an inverse function $G(x)$ satisfying

$$|G(x)|\le B\,x^{-Q}(1-x)^{-Q} \qquad (1.8)$$

for some constants $B>0$, $Q\ge 0$ and all $x\in(0,1)$.

(ii) $F(x)$ has $m+1$ bounded derivatives, where $m$ is a positive integer.

(iii) $i/n=p+O(n^{-1})$ and $a_{i:n}=x_p+O(n^{-1})$ as $n\to+\infty$.

Then the following limiting result holds as $n\to+\infty$:

$$E\left(\frac{f(x_p)(X_{i:n}-a_{i:n})}{\sqrt{p(1-p)/n}}\right)^{m}=EZ^{m}+O(n^{-1/2}). \qquad (1.9)$$

Remark 7. For the mean $\bar{X}_n$ of a random sample $(X_1,\ldots,X_n)$ from a population $X$ whose moment $EX^{m}$ exists, according to conclusion (1.4) we see that

$$E(\bar{X}_n-\mu)^{m}=\Big(\frac{\sigma}{\sqrt{n}}\Big)^{m}EZ^{m}+o(n^{-m/2}),$$

which indicates that the $m$th central moment of the sample mean, $E(\bar{X}_n-\mu)^{m}$, is usually of order $O(n^{-m/2})$.

Here, under the conditions of Theorem 4, if $EX_{i:n}=x_p+O(n^{-1})$ (we will verify in a later section that, by Theorem 2, this assertion holds for almost all continuous populations we may encounter), then by Eq (1.9) the central moment $E(X_{i:n}-EX_{i:n})^{m}$ is also of order $O(n^{-m/2})$. Moreover, by putting $a_{i:n}=x_p$, we derive under the assumptions of Theorem 4 that

$$E\left(\frac{f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)/n}}\right)^{m}=EZ^{m}+O(n^{-1/2}).$$

Similar to Remark 1, we can also show by the Sandwich Theorem that

$$E\left(\frac{f(x_p)(m_{n,p}-x_p)}{\sqrt{p(1-p)/n}}\right)^{m}=EZ^{m}+O(n^{-1/2}), \qquad (1.10)$$

indicating that if we use the sample $p$-quantile $m_{n,p}$ to estimate $x_p$, the corresponding population $p$-quantile, then $E(m_{n,p}-x_p)^{m}=O(n^{-m/2})$.

For estimating a parameter of a population without an expectation, estimators based on functions of sample moments are always futile because of uncontrollable fluctuation. Alternatively, estimators obtained from functions of OSs are usually workable. To find a desirable estimator of that kind, approximating moment expressions of OSs is therefore significant. For instance, let a population $X$ be distributed according to the pdf

$$f(x,\theta_1,\theta_2)=\frac{\theta_2}{\pi\big[\theta_2^{2}+(x-\theta_1)^{2}\big]},\qquad -\infty<x<+\infty, \qquad (1.11)$$

where the constants satisfy $\theta_2>0$ and $\theta_1$ is unknown. Here $x_{0.56}=0.19076\,\theta_2+\theta_1$ and $x_{0.56}+x_{0.44}=2x_{0.5}=2\theta_1$. To estimate $x_{0.5}=\theta_1$, we now compare the estimators $m_{n,0.5}$ and $(m_{n,0.56}+m_{n,0.44})/2$. For large sample sizes, we deduce from conclusion (1.10) that

$$\begin{aligned}
E\left(\frac{m_{n,0.56}+m_{n,0.44}}{2}-\theta_1\right)^{2}
&=E\left(\frac{(m_{n,0.56}-x_{0.56})+(m_{n,0.44}-x_{0.44})}{2}\right)^{2}\\
&\le\frac{E(m_{n,0.56}-x_{0.56})^{2}+E(m_{n,0.44}-x_{0.44})^{2}}{2}\\
&=\frac{0.44\times 0.56}{(f(x_{0.56}))^{2}}\,n^{-1}+O(n^{-3/2})=0.2554\,\pi\theta_2\,n^{-1}+O(n^{-3/2}),
\end{aligned}$$

whereas

$$E(m_{n,0.5}-\theta_1)^{2}=0.785\,\pi\theta_2\,n^{-1}+O(n^{-3/2}).$$

Obviously, both estimators $m_{n,0.5}$ and $(m_{n,0.56}+m_{n,0.44})/2$ are unbiased for $\theta_1$. For large $n$, the main part $0.2554\,\pi\theta_2\,n^{-1}$ of the mean square error (MSE) $E\big[(m_{n,0.56}+m_{n,0.44})/2-\theta_1\big]^{2}$ is even less than one-third of $0.785\,\pi\theta_2\,n^{-1}$, the main part of the MSE $E(m_{n,0.5}-\theta_1)^{2}$. That is the fundamental reason why Sen obtained in [8] the conclusion that the so-called optimum mid-range $(m_{n,0.56}+m_{n,0.44})/2$ is more effective than the sample median $m_{n,0.5}$ in estimating $\theta_1$.
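A small simulation (ours, with arbitrarily chosen sample size, replication count, and seed) lets one compare the empirical MSEs of the two estimators for Cauchy data with $\theta_1=0$ and $\theta_2=1$:

```python
import numpy as np

# Empirical MSE comparison: sample median vs. the (0.44, 0.56) mid-range,
# for Cauchy data with theta1 = 0 (location) and theta2 = 1 (scale).
rng = np.random.default_rng(2)
n, reps, theta1 = 1001, 10_000, 0.0
xs = np.sort(rng.standard_cauchy((reps, n)), axis=1)

def m(q):
    # sample q-quantile as in Remark 1 (here q*n is not an integer)
    return xs[:, int(np.floor(q * n))]

median = m(0.5)
midrange = 0.5 * (m(0.44) + m(0.56))
print("MSE of the median   :", np.mean((median - theta1) ** 2))
print("MSE of the mid-range:", np.mean((midrange - theta1) ** 2))
```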

By a statistical comparison of the scores presented in Table 1 below, which represent 30 returns of closing prices of the German Stock Index (DAX), Mahdizadeh and Zamanzade reasonably applied the previously mentioned Cauchy distribution (1.11) as a stock market return distribution, with $\theta_1$ and $\theta_2$ estimated as $\hat{\theta}_1=0.0009629174$ and $\hat{\theta}_2=0.003635871$, respectively (see [9]).

    Table 1.  Scores for 30 returns of closing prices of DAX.
    0.0011848 -0.0057591 -0.0051393 -0.0051781 0.0020043 0.0017787
    0.0026787 -0.0066238 -0.0047866 -0.0052497 0.0004985 0.0068006
    0.0016206 0.0007411 -0.0005060 0.0020992 -0.0056005 0.0110844
    -0.0009192 0.0019014 -0.0042364 0.0146814 -0.0002242 0.0024545
    -0.0003083 -0.0917876 0.0149552 0.0520705 0.0117482 0.0087458


Now we utilize $(m_{n,0.56}+m_{n,0.44})/2$ as a quick estimator of $\theta_1$ and obtain the value 0.00105955, which is roughly close to the estimate 0.0009629174 reported in [9].
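The value 0.00105955 can be recomputed directly from Table 1 with a few lines of Python (ours), using the quantile convention of Remark 1:

```python
import numpy as np

# The 30 DAX returns from Table 1.
dax = [ 0.0011848, -0.0057591, -0.0051393, -0.0051781,  0.0020043,  0.0017787,
        0.0026787, -0.0066238, -0.0047866, -0.0052497,  0.0004985,  0.0068006,
        0.0016206,  0.0007411, -0.0005060,  0.0020992, -0.0056005,  0.0110844,
       -0.0009192,  0.0019014, -0.0042364,  0.0146814, -0.0002242,  0.0024545,
       -0.0003083, -0.0917876,  0.0149552,  0.0520705,  0.0117482,  0.0087458]

xs = np.sort(dax)                       # order statistics X_{1:30}, ..., X_{30:30}
m_056 = xs[int(np.floor(0.56 * 30))]    # 0.56*30 = 16.8 -> the 17th OS
m_044 = xs[int(np.floor(0.44 * 30))]    # 0.44*30 = 13.2 -> the 14th OS
print((m_056 + m_044) / 2)              # approximately 0.00105955
```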

Since there remain many estimation problems (see [10] for a reference) dealing with populations that have no expectation, as the above analysis suggests, further study of moment convergence for OSs may be promising.

Lemma 1. (see [11] and [12]) For a random sequence $\{\xi_1,\xi_2,\ldots\}$ converging in distribution to a RV $\xi$, written $\xi_n\xrightarrow{D}\xi$, if $d>0$ is a constant and the following uniform integrability holds,

$$\lim_{s\to\infty}\sup_{n}E|\xi_n|^{d}\,I_{\{|\xi_n|^{d}\ge s\}}=0,$$

then $\lim_{n\to\infty}E|\xi_n|^{d}=E|\xi|^{d}$ and, accordingly, $\lim_{n\to\infty}E\xi_n^{d}=E\xi^{d}$.

Remark 8. As discarding a finite number of terms from $\{\xi_1,\xi_2,\ldots\}$ does not affect the conclusion $\lim_{n\to\infty}E|\xi_n|^{d}=E|\xi|^{d}$, the above condition $\lim_{s\to+\infty}\sup_{n}E|\xi_n|^{d}I_{\{|\xi_n|^{d}\ge s\}}=0$ can be replaced by $\lim_{s\to+\infty}\sup_{n\ge M}E|\xi_n|^{d}I_{\{|\xi_n|^{d}\ge s\}}=0$ for any positive constant $M>0$.

Lemma 2. For $p\in(0,1)$ and a random sample $(\xi_1,\xi_2,\ldots,\xi_n)$ from a population possessing a continuous pdf $f(x)$, if the $p$-quantile $x_p$ of the population satisfies $f(x_p)>0$, then for the $i$th OS $\xi_{i:n}$, where $i/n=p+o(1)$, we have $\xi_{i:n}\xrightarrow{D}x_p$.

Proof. Obviously, the sequence $\left\{\frac{f(x_p)(\xi_{i:n}-x_p)}{\sqrt{p(1-p)/n}},\ n=1,2,\ldots\right\}$ has an asymptotic standard normal distribution $N(0,1^{2})$; thus the statistic $\xi_{i:n}$ converges to $x_p$ in probability. That leads to the conclusion $\xi_{i:n}\xrightarrow{D}x_p$ because, for a sequence of RVs, convergence in probability to a constant is equivalent to convergence in distribution to that constant.

Some clarifications before presenting the proof:

● Under the assumption $i/n=p+o(1)$ as $n\to\infty$, it would be better to think of $i$ as a function of $n$ and to use a symbol such as $a_n$ instead of $i$. Nevertheless, for simplicity, we make no such adjustment.

● Throughout our paper, $C_1,C_2,\ldots$ are suitable positive constants.

As $\frac{i}{n}\to p\in(0,1)$ when $n\to\infty$, we only need to care about large numbers $n$, $i$ and $n-i$.

Let an integer $K>\delta q$ be given and let $M>0$ be a number such that if $n\ge M$, then the inequalities $i-1-\delta q>0$, $n-i-\delta q>0$, $n-i-K>0$ and $\frac{i+K}{n}<v=\frac{1+p}{2}$ all hold simultaneously. Here the existence of such a $v$ in the last inequality is ensured by the fact that $\frac{i+K}{n}\to p$ as $n\to\infty$.

According to Lemmas 1 and 2 as well as Remark 8, to prove Theorem 1 we only need to show that

$$\lim_{s\to+\infty}\sup_{n\ge M}E|X_{i:n}^{\delta}|\,I_{\{|X_{i:n}^{\delta}|\ge s^{\delta}\}}=0. \qquad (3.1)$$

That is,

$$\lim_{s\to+\infty}\sup_{n\ge M}\int_{|u|\ge s}|u|^{\delta}\,\frac{n!}{(i-1)!(n-i)!}F^{i-1}(u)f(u)\big[1-F(u)\big]^{n-i}\,du=0.$$

To show that equation, it suffices to prove, respectively,

$$\lim_{s\to+\infty}\sup_{n\ge M}\int_{s}^{+\infty}|u|^{\delta}\,\frac{n!}{(i-1)!(n-i)!}F^{i-1}(u)f(u)\big[1-F(u)\big]^{n-i}\,du=0$$

and

$$\lim_{s\to+\infty}\sup_{n\ge M}\int_{-\infty}^{-s}|u|^{\delta}\,\frac{n!}{(i-1)!(n-i)!}F^{i-1}(u)f(u)\big[1-F(u)\big]^{n-i}\,du=0.$$

Equivalently, by putting $x=F(u)$, we need to prove, respectively,

$$\lim_{t\to 1^{-}}\sup_{n\ge M}\int_{t}^{1}|G^{\delta}(x)|\,\frac{n!}{(i-1)!(n-i)!}x^{i-1}(1-x)^{n-i}\,dx=0 \qquad (3.2)$$

as well as

$$\lim_{t\to 0^{+}}\sup_{n\ge M}\int_{0}^{t}|G^{\delta}(x)|\,\frac{n!}{(i-1)!(n-i)!}x^{i-1}(1-x)^{n-i}\,dx=0.$$

As both proofs are similar in fashion, we choose to prove Eq (3.2) only. Actually, according to the given condition $|G(x)|\le Bx^{-q}(1-x)^{-q}$, we see that

$$\begin{aligned}
&\lim_{t\to 1^{-}}\sup_{n\ge M}\int_{t}^{1}|G^{\delta}(x)|\,\frac{n!}{(i-1)!(n-i)!}x^{i-1}(1-x)^{n-i}\,dx\\
&\quad\le B^{\delta}\lim_{t\to 1^{-}}\sup_{n\ge M}\int_{t}^{1}\frac{n!}{(i-1)!(n-i)!}x^{i-1-\delta q}(1-x)^{n-i-\delta q}\,dx\\
&\quad\le B^{\delta}\lim_{t\to 1^{-}}\sup_{n\ge M}\int_{t}^{1}\frac{n!}{(i-1)!(n-i)!}(1-x)^{n-i-\delta q}\,dx\\
&\quad\le B^{\delta}\lim_{t\to 1^{-}}\sup_{n\ge M}\frac{n!}{(i-1)!(n-i)!}(1-t)^{n-i-K}(1-t)^{K+1-\delta q}\\
&\quad\le B^{\delta}\lim_{t\to 1^{-}}\sup_{n\ge M}\frac{n!\,(1-t)^{n-i-K}}{(i-1)!(n-i)!}\\
&\quad\le C_{1}\lim_{x\to 0^{+}}\sup_{n\ge M}\frac{n!\times n}{i!\,(n-i)!}\,x^{n-i-K}. \qquad (3.3)
\end{aligned}$$

Here the positive number $C_1>0$ exists because $n/i=1/p+o(1)$, where $p\in(0,1)$.

Now, applying Stirling's formula $n!=\sqrt{2\pi n}\,(n/e)^{n}e^{\frac{\theta}{12n}}$ with $\theta\in(0,1)$ (see [13]), we have

$$\begin{aligned}
&\lim_{x\to 0^{+}}\sup_{n\ge M}\frac{n!\times n}{i!\,(n-i)!}\,x^{n-i-K}\\
&\quad\le C_{2}\lim_{x\to 0^{+}}\sup_{n\ge M}\frac{\sqrt{2\pi n}\,(n/e)^{n}\times n}{\sqrt{2\pi i}\,(i/e)^{i}\,\sqrt{2\pi(n-i)}\,\big((n-i)/e\big)^{n-i}}\,x^{n-i-K}\\
&\quad\le C_{3}\lim_{x\to 0^{+}}\sup_{n\ge M}\frac{n^{n}\times n}{i^{i}(n-i)^{n-i}}\cdot\frac{n}{i}\,x^{n-i-K}\\
&\quad= C_{3}\lim_{x\to 0^{+}}\sup_{n\ge M}\frac{1}{\big(\frac{i}{n}\big)^{i}\big(1-\frac{i}{n}\big)^{n-i}}\cdot\frac{n\cdot n}{i}\,x^{n-i-K}\\
&\quad= C_{3}\lim_{x\to 0^{+}}\sup_{n\ge M}\frac{1}{\Big[\big(\frac{i}{n}\big)^{\frac{i}{n}}\big(1-\frac{i}{n}\big)^{1-\frac{i}{n}}\Big]^{n}}\cdot\frac{n\cdot n}{i}\,x^{n-i-K}. \qquad (3.4)
\end{aligned}$$
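As a numerical aside (ours), the accuracy of Stirling's formula for moderate $n$ is easy to see:

```python
import math

# Ratio n! / (sqrt(2*pi*n) * (n/e)**n); by Stirling's formula it equals
# exp(theta/(12 n)) for some theta in (0, 1), so it tends to 1 from above.
for n in (5, 10, 50, 100):
    stirling = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    print(n, math.factorial(n) / stirling)
```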

Noting that

$$\Big(\frac{i}{n}\Big)^{\frac{i}{n}}\Big(1-\frac{i}{n}\Big)^{1-\frac{i}{n}}\to p^{p}(1-p)^{1-p}$$

as $n\to\infty$, we see that there exists a positive constant, say $Q>0$, such that

$$\Big(\frac{i}{n}\Big)^{\frac{i}{n}}\Big(1-\frac{i}{n}\Big)^{1-\frac{i}{n}}\ge Q\,p^{p}(1-p)^{1-p}$$

for all $n$. Consequently,

$$\begin{aligned}
&\lim_{x\to 0^{+}}\sup_{n\ge M}\frac{1}{\Big[\big(\frac{i}{n}\big)^{\frac{i}{n}}\big(1-\frac{i}{n}\big)^{1-\frac{i}{n}}\Big]^{n}}\cdot\frac{n\cdot n}{i}\,x^{n-i-K}\\
&\quad\le\lim_{x\to 0^{+}}\sup_{n\ge M}\frac{1}{\big[Q\,p^{p}(1-p)^{1-p}\big]^{n}}\cdot\frac{n\cdot n}{i}\,x^{n-i-K}\\
&\quad\le C_{4}\lim_{x\to 0^{+}}\sup_{n\ge M}\frac{1}{\big[Q\,p^{p}(1-p)^{1-p}\big]^{n}}\,n\,x^{n-i-K}. \qquad (3.5)
\end{aligned}$$

Due to the assumption $\frac{i+K}{n}<v=\frac{1+p}{2}<1$ for $n\ge M$, we derive

$$\begin{aligned}
&\lim_{x\to 0^{+}}\sup_{n\ge M}\frac{1}{\big[Q\,p^{p}(1-p)^{1-p}\big]^{n}}\,n\,x^{n-i-K}\\
&\quad\le\lim_{x\to 0^{+}}\sup_{n\ge M}\frac{1}{\big[Q\,p^{p}(1-p)^{1-p}\big]^{n}}\,n\,x^{n-vn}\\
&\quad=\lim_{x\to 0^{+}}\sup_{n\ge M}\left[\frac{x^{1-v}}{Q\,p^{p}(1-p)^{1-p}}\right]^{n}n\\
&\quad\le\lim_{u\to 0^{+}}\sup_{n\ge 1}u^{n}\,n. \qquad (3.6)
\end{aligned}$$

Finally, by the fact that if $u>0$ is sufficiently small, then the first term of the sequence $\{u^{n}n,\ n\ge 1\}$ is the maximum, we can confirm that

$$\lim_{u\to 0^{+}}\sup_{n\ge 1}u^{n}n=\lim_{u\to 0^{+}}u=0. \qquad (3.7)$$

Combining the five conclusions numbered from (3.3) to (3.7), we obtain Eq (3.2).

    Here we would like to assume U>1 (or we may use U+2 instead of U).

By the reasoning of Remark 2 and according to condition (1.2), we see that there is a constant $A>0$ satisfying

$$\big|G'''(x)\,x^{U}(1-x)^{U}\big|\le A. \qquad (3.8)$$

Now define $Y=F(X)$ and $Y_{i:n}=F(X_{i:n})$, or equivalently $X=G(Y)$ and $X_{i:n}=G(Y_{i:n})$; then $Y\sim U[0,1]$ and $G(p)=x_p$. Obviously, the conclusions of Remark 6 are applicable here.

By the Taylor expansion formula we have

$$G(Y_{i:n})=G(p)+G'(p)(Y_{i:n}-p)+\frac{G''(p)}{2!}(Y_{i:n}-p)^{2}+\frac{1}{3!}G'''(\xi)(Y_{i:n}-p)^{3},$$

where

$$\xi\in\big(\min(Y_{i:n},p),\,\max(Y_{i:n},p)\big).$$

Noting that almost surely $0<\min(Y_{i:n},p)<\xi<\max(Y_{i:n},p)<1$, we obtain

$$\begin{aligned}
&\left|EG(Y_{i:n})-G(p)-G'(p)E(Y_{i:n}-p)-\frac{G''(p)}{2}E(Y_{i:n}-p)^{2}\right|
=\left|E\!\left[\frac{G'''(\xi)}{3!}(Y_{i:n}-p)^{3}\right]\right|\\
&\quad\le\frac{1}{6}\left|E\!\left[A\,\xi^{-U}(1-\xi)^{-U}(Y_{i:n}-p)^{3}\right]\right|
\le\frac{1}{6}\left|E\!\left\{A\,[p(1-p)]^{-U}\,Y_{i:n}^{-U}(1-Y_{i:n})^{-U}(Y_{i:n}-p)^{3}\right\}\right|\\
&\quad\le\frac{1}{6}\left|E\!\left\{A\,[p(1-p)]^{-U}(Y_{i:n}-p)^{3}\right\}\right|
\le\frac{1}{6}A\,[p(1-p)]^{-U}\sqrt{E(Y_{i:n}-p)^{6}}=O(n^{-3/2}) \qquad (3.9)
\end{aligned}$$

by Eq (3.8). Here the last step is in accordance with (1.7).

Now we can draw the conclusion that

$$EG(Y_{i:n})-G(p)-G'(p)E(Y_{i:n}-p)-\frac{1}{2}G''(p)E(Y_{i:n}-p)^{2}=o(n^{-1}). \qquad (3.10)$$

That is,

$$EX_{i:n}-x_p-G'(p)\Big(\frac{i}{n+1}-p\Big)-\frac{1}{2}G''(p)E(Y_{i:n}-p)^{2}=o(n^{-1}), \qquad (3.11)$$

provided $i/n=p+O(n^{-1})$.

Still according to conclusion (1.7), we have

$$E(Y_{i:n}-p)^{2}=O(n^{-1}).$$

Finally, as $i/n=p+O(n^{-1})$ also guarantees $i/(n+1)-p=O(n^{-1})$, we can complete the proof of $E(X_{i:n}-x_p)=O(n^{-1})$, or equivalently

$$|E(X_{i:n}-x_p)|=O(n^{-1}),$$

by the assertion of (3.11).
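As a rough numerical illustration of the $O(1/n)$ bias rate (ours, not part of the proof), take the sample median of the $\mathrm{Exp}(1)$ distribution, whose $0.5$-quantile is $\ln 2$; the scaled bias $n\,E(m_{n,0.5}-\ln 2)$ should remain roughly stable as $n$ grows:

```python
import numpy as np

# Bias of the sample median for Exp(1); the population median is ln 2.
rng = np.random.default_rng(3)
reps = 100_000
for n in (25, 51, 101):                  # odd n: the median is a single OS
    x = rng.exponential(size=(reps, n))
    bias = np.median(x, axis=1).mean() - np.log(2)
    print(n, n * bias)                   # roughly constant in n
```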

As $EZ=0$, the proposition holds when $m=1$; now we only consider the case $m\ge 2$. By Theorem 1, we see $EX_{i:n}^{2}\to x_p^{2}$; therefore $E|X_{s:j}|<\infty$ holds for some integer $j$ and some $s\in\{1,\ldots,j\}$, and Theorem 3 is applicable here when we put $h(x)=x^{m}$. We derive

$$\begin{aligned}
E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)^{m}
&=\int x^{m}\,d\!\left(\Phi(x)+\varphi(x)\sum_{i=1}^{m-1}n^{-i/2}S_{i,n}(x)\right)+O(n^{-m/2})\\
&=EZ^{m}+\sum_{i=1}^{m-1}\left(n^{-i/2}\int x^{m}\,d\big(\varphi(x)S_{i,n}(x)\big)\right)+O(n^{-m/2}). \qquad (3.12)
\end{aligned}$$

Moreover, for a given positive integer $m\ge 2$, as the coefficients of the polynomial $S_{i,n}(x)$ are uniformly bounded over $n$ and $\varphi'(x)=-x\varphi(x)$, the sequence of integrals

$$\left\{\int x^{m}\,d\big(\varphi(x)S_{i,n}(x)\big),\ n=1,2,\ldots\right\}$$

is also uniformly bounded over $n$. That indicates that

$$E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)^{m}=EZ^{m}+O(n^{-1/2}) \qquad (3.13)$$

according to conclusion (3.12).

As a consequence, we can conclude that, for an explicitly given $m\ge 2$, the sequence

$$\left\{E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)^{m},\ n=1,2,\ldots\right\} \qquad (3.14)$$

is uniformly bounded over $n$. Moreover, due to the inequality

$$\left|E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)\right|\le\sqrt{E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)^{2}},$$

we see that the sequence

$$\left\{E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right),\ n=1,2,\ldots\right\}$$

is also uniformly bounded over $n$.

Now that $a_{i:n}=x_p+O(n^{-1})$, we complete the proof by the following reasoning:

$$\begin{aligned}
E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-a_{i:n})}{\sqrt{p(1-p)}}\right)^{m}
&=E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}+\frac{n^{1/2}f(x_p)(x_p-a_{i:n})}{\sqrt{p(1-p)}}\right)^{m}\\
&=\sum_{u=0}^{m}\left[\binom{m}{u}\left(\frac{n^{1/2}f(x_p)(x_p-a_{i:n})}{\sqrt{p(1-p)}}\right)^{m-u}E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)^{u}\right]\\
&=\sum_{u=2}^{m}\left[\binom{m}{u}\left(\frac{n^{1/2}f(x_p)(x_p-a_{i:n})}{\sqrt{p(1-p)}}\right)^{m-u}E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)^{u}\right]+O(n^{-1/2})\\
&=\sum_{u=2}^{m}\left[\binom{m}{u}\left(\frac{n^{1/2}f(x_p)(x_p-a_{i:n})}{\sqrt{p(1-p)}}\right)^{m-u}\big(EZ^{u}+O(n^{-1/2})\big)\right]+O(n^{-1/2})\\
&=EZ^{m}+O(n^{-1/2}). \qquad (3.15)
\end{aligned}$$

Now we consider the applicability of the theorems obtained so far. As the other conditions can be verified trivially or similarly, here we mainly focus on the verification of condition (1.2).

Example 1: Let the population $X$ have a Cauchy distribution with pdf $f(y)=\frac{1}{\pi(1+y^{2})}$, $-\infty<y<+\infty$; correspondingly, the inverse function of the cdf of $X$ can be figured out to be

$$G(x)=-\frac{1}{\tan(\pi x)},\qquad 0<x<1,$$

satisfying

$$\lim_{x\to 0^{+}}G'''(x)\,x^{5}(1-x)^{5}=\lim_{x\to 1^{-}}G'''(x)\,x^{5}(1-x)^{5}=0.$$

Example 2: For $X\sim f(x)=\frac{1}{x(\ln x)^{2}}I_{[e,\infty)}(x)$, we have

$$G(x)=e^{\frac{1}{1-x}}\,I_{(0,1)}(x),$$

and

$$\lim_{x\to 0^{+}}G'''(x)\,x(1-x)=\lim_{x\to 1^{-}}G'''(x)\,x(1-x)=0.$$

Example 3: For $X\sim N(0,1^{2})$, we have $f(y)=\frac{1}{\sqrt{2\pi}}e^{-\frac{y^{2}}{2}}$, $f'(y)=-y\,f(y)$, and $y=G(x)\iff x=F(y)=\int_{-\infty}^{y}\frac{1}{\sqrt{2\pi}}e^{-\frac{t^{2}}{2}}\,dt$; therefore, as $x\to 0^{+}$, we have

$$\frac{(G(x))^{2}}{-\ln\big(x(1-x)\big)}\sim\frac{(G(x))^{2}}{-\ln x}=\frac{y^{2}}{-\ln\big(F(y)\big)}\sim\frac{-2yF(y)}{f(y)}\sim\frac{-2\big(yF(y)\big)'}{\big(f(y)\big)'}=\frac{-2\big[F(y)+yf(y)\big]}{-yf(y)}=2\left[\frac{F(y)}{yf(y)}+1\right].$$

Noting that, as $x=F(y)\to 0^{+}$ or equivalently $y\to-\infty$,

$$\frac{F(y)}{yf(y)}\sim\frac{f(y)}{f(y)-y^{2}f(y)}=\frac{1}{1-y^{2}}\to 0, \qquad (4.1)$$

we have, as $x\to 0^{+}$,

$$\frac{(G(x))^{2}}{-\ln\big(x(1-x)\big)}\to 2.$$

In the same fashion, we can show, as $x\to 1^{-}$, that

$$\frac{(G(x))^{2}}{-\ln\big(x(1-x)\big)}\to 2.$$

In conclusion, for $x\to 0^{+}$ as well as for $x\to 1^{-}$,

$$(G(x))^{2}\sim-2\ln\big(x(1-x)\big). \qquad (4.2)$$

Accordingly, there exists a positive $M>0$ such that, for all $x\in(0,1)$,

$$(G(x))^{2}\le M\,\big|\ln\big(x(1-x)\big)\big|=-M\ln\big(x(1-x)\big). \qquad (4.3)$$

For $x$ sufficiently close to $0$ or to $1$, we get

$$|G'''(x)|=\left|\frac{-f''(y)f(y)+3(f'(y))^{2}}{(f(y))^{5}}\right|=\left|\frac{-(y^{2}-1)(f(y))^{2}+3(yf(y))^{2}}{(f(y))^{5}}\right|=\frac{2y^{2}+1}{(f(y))^{3}}=\frac{2(G(x))^{2}+1}{(f(G(x)))^{3}}\le\frac{-4\ln\big(x(1-x)\big)}{(f(G(x)))^{3}}. \qquad (4.4)$$

Here the last inequality holds, for $x$ close enough to $0$ or to $1$, in accordance with Eq (4.2).
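The identity $G'''(x)=\big(3f'(y)^{2}-f(y)f''(y)\big)/f(y)^{5}$ with $y=G(x)$, used in the first step of (4.4), can be spot-checked symbolically (our own check; we use the standard Cauchy distribution because its $G$ has a closed form):

```python
import sympy as sp

# Spot-check G'''(x) = (3 f'(y)^2 - f(y) f''(y)) / f(y)^5 with y = G(x)
# on the standard Cauchy distribution, whose inverse cdf is explicit.
x, y = sp.symbols('x y')
G = sp.tan(sp.pi * (x - sp.Rational(1, 2)))      # inverse cdf of the standard Cauchy
f = 1 / (sp.pi * (1 + y**2))                     # standard Cauchy pdf

lhs = sp.diff(G, x, 3)
rhs = ((3 * sp.diff(f, y)**2 - f * sp.diff(f, y, 2)) / f**5).subs(y, G)

for x0 in (sp.Rational(1, 10), sp.Rational(1, 2), sp.Rational(9, 10)):
    print(x0, sp.N((lhs - rhs).subs(x, x0), 20))   # each difference is numerically 0
```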

For $x\to 0^{+}$ as well as for $x\to 1^{-}$,

$$\begin{aligned}
\frac{-4\ln\big(x(1-x)\big)}{(f(G(x)))^{3}}
&=\frac{-4\ln\big(x(1-x)\big)}{\big(\tfrac{1}{\sqrt{2\pi}}\big)^{3}\exp\!\big(-\tfrac{3(G(x))^{2}}{2}\big)}
=-4\big(\sqrt{2\pi}\big)^{3}\big[\ln\big(x(1-x)\big)\big]\exp\!\Big(\frac{3(G(x))^{2}}{2}\Big)\\
&=-4\big(\sqrt{2\pi}\big)^{3}\big[\ln\big(x(1-x)\big)\big]\big[\exp\big((G(x))^{2}\big)\big]^{\frac{3}{2}}
\le-4\big(\sqrt{2\pi}\big)^{3}\big[\ln\big(x(1-x)\big)\big]\big[\exp\big(-M\ln(x(1-x))\big)\big]^{\frac{3}{2}}\\
&=-4\big(\sqrt{2\pi}\big)^{3}\big[\ln\big(x(1-x)\big)\big]\big(x(1-x)\big)^{-\frac{3M}{2}}. \qquad (4.5)
\end{aligned}$$

Thus we can see that condition (1.2) is achieved (with $U=2M$, say), since multiplying the above bound by $x^{2M}(1-x)^{2M}$ leaves a factor $\big|\ln(x(1-x))\big|\big(x(1-x)\big)^{M/2}$ that vanishes at both endpoints:

$$\lim_{x\to 0^{+}}\big(G'''(x)\,x^{2M}(1-x)^{2M}\big)=\lim_{x\to 1^{-}}\big(G'''(x)\,x^{2M}(1-x)^{2M}\big)=0. \qquad (4.6)$$

Remark 9. For a RV $X$ with a cdf $F(x)$ possessing an inverse function $G(x)$, we can prove that if $\sigma>0$ and $\mu\in(-\infty,+\infty)$ are constants, then the cdf of the RV $\sigma X+\mu$ has the inverse function $\sigma G(x)+\mu$. Thus, for the general case $X\sim N(\mu,\sigma^{2})$, we can still verify condition (1.2).

Example 4: For a population $X\sim U[a,b]$, $G(x)=(b-a)x+a$ is the inverse function of the cdf of $X$. As $G'''(x)=0$, condition (1.2) holds.

Generally, for any population distributed over an interval $[a,b]$ according to a continuous pdf $f(x)$, if $G'''(0^{+})$ and $G'''(1^{-})$ exist, then condition (1.2) holds.

For reasons of length, we only point out here, without detailed proof, that for a population $X$ following a distribution such as the Gamma distribution (including special cases such as the exponential and chi-square distributions), the Beta distribution, and so on, the requirement of condition (1.2) can be satisfied.

For a random sample $(X_1,\ldots,X_n)$ derived from a population $X$ uniformly distributed over the interval $[0,1]$, the moment of the $i$th OS is $EX_{i:n}=i/(n+1)\to p$ if $i/n\to p\in(0,1)$ as $n\to\infty$. Let $a_{i:n}=i/n$. According to conclusion (1.9), where $f(x_p)=1$ and $x_p=p\in(0,1)$, we have, for integer $m\ge 2$,

$$E(X_{i:n}-a_{i:n})^{m}=\int_{0}^{1}\Big(x-\frac{i}{n}\Big)^{m}\frac{n!}{(i-1)!(n-i)!}x^{i-1}(1-x)^{n-i}\,dx=EZ^{m}\,\big(p(1-p)\big)^{\frac{m}{2}}\,n^{-\frac{m}{2}}+o(n^{-\frac{m}{2}}). \qquad (4.7)$$

That results in

$$\frac{n!\int_{0}^{1}(nx-i)^{m}x^{i-1}(1-x)^{n-i}\,dx}{(i-1)!\,(n-i)!\,n^{m}}=EZ^{m}\,(p-p^{2})^{\frac{m}{2}}\,n^{-\frac{m}{2}}+o(n^{-\frac{m}{2}}), \qquad (4.8)$$

or equivalently

$$\frac{n!\sum_{j=0}^{m}\Big[\binom{m}{j}n^{j}(-i)^{m-j}B(i+j,\,n+1-i)\Big]}{(i-1)!\,(n-i)!\,n^{m}}=EZ^{m}\,(p-p^{2})^{\frac{m}{2}}\,n^{-\frac{m}{2}}+o(n^{-\frac{m}{2}}).$$

Consequently, we have the following equation:

$$\frac{n!\sum_{j=0}^{m}\Big[\binom{m}{j}n^{j}(-i)^{m-j}\frac{\Gamma(i+j)\,\Gamma(n+1-i)}{\Gamma(i+j+n+1-i)}\Big]}{(i-1)!\,(n-i)!\,n^{m}}=EZ^{m}\,(p-p^{2})^{\frac{m}{2}}\,n^{-\frac{m}{2}}+o(n^{-\frac{m}{2}}),$$

which yields

$$\frac{n!\sum_{j=0}^{m}\Big[\binom{m}{j}n^{j}(-i)^{m-j}\frac{(i-1+j)!}{(n+j)!}\Big]}{(i-1)!\,n^{m}}=EZ^{m}\,(p-p^{2})^{\frac{m}{2}}\,n^{-\frac{m}{2}}+o(n^{-\frac{m}{2}}). \qquad (4.9)$$

As $i/n\to p\in(0,1)$ when $n\to+\infty$, the above equation indicates that

$$\frac{\sum_{j=0}^{m}\Big[\binom{m}{j}n^{j}(-i)^{m-j}\frac{(i-1+j)!\,(n+m)!}{(i-1)!\,(n+j)!}\Big]}{n^{2m}}=EZ^{m}\,(p-p^{2})^{\frac{m}{2}}\,n^{-\frac{m}{2}}+o(n^{-\frac{m}{2}}). \qquad (4.10)$$

For convenience, we adopt the conventions $\sum_{k=u}^{v}=0$ and $\prod_{k=u}^{v}=1$ whenever $v<u$. Noting that, for given explicit integers $m\ge 2$ and $j\in\{0,1,\ldots,m\}$, the expression

$$\binom{m}{j}n^{j}(-i)^{m-j}\frac{(i-1+j)!\,(n+m)!}{(i-1)!\,(n+j)!}=\binom{m}{j}(-1)^{m-j}\Big(i^{m-j}\prod_{k=1}^{j}\big[(i-1)+k\big]\Big)\Big(n^{j}\prod_{k=j+1}^{m}(n+k)\Big) \qquad (4.11)$$

is a polynomial in $i$ and $n$, we see that the numerator of the LHS of Eq (4.10) is also such a polynomial, which we now write as

$$\sum_{j=0}^{m}\Big[\binom{m}{j}n^{j}(-i)^{m-j}\frac{(i-1+j)!\,(n+m)!}{(i-1)!\,(n+j)!}\Big]:=\sum_{s=0}^{m}\sum_{t=0}^{m}a^{(m)}_{s,t}\,i^{m-s}\,n^{m-t}.$$

Equivalently, we derive

$$\sum_{j=0}^{m}\Big\{\binom{m}{j}(-1)^{m-j}\Big[i^{m-j}\prod_{k=1}^{j}(i-1+k)\Big]\Big[n^{j}\prod_{k=j+1}^{m}(n+k)\Big]\Big\}=\sum_{k=0}^{2m}\sum_{s+t=k}a^{(m)}_{s,t}\,i^{m-s}\,n^{m-t}.$$

By Eq (4.10), we see that, for any given $p\in(0,1)$, if $i/n\to p\in(0,1)$ as $n\to+\infty$, then

$$\frac{\sum_{k=0}^{2m}\sum_{s+t=k}a^{(m)}_{s,t}\,i^{m-s}\,n^{m-t}}{n^{3m/2}}=EZ^{m}\,\big(p(1-p)\big)^{\frac{m}{2}}+o(1). \qquad (4.12)$$

Noting that

$$\sum_{s+t=k}a^{(m)}_{s,t}\,i^{m-s}\,n^{m-t}=\Big(\sum_{s+t=k}a^{(m)}_{s,t}\,p^{m-s}\Big)n^{2m-k}+o(n^{2m-k}),$$

we see, in accordance with (4.12), that

$$\frac{\sum_{k=0}^{2m}\Big[\big(\sum_{s+t=k}a^{(m)}_{s,t}\,p^{m-s}\big)n^{2m-k}+o(n^{2m-k})\Big]}{n^{3m/2}}=EZ^{m}\,(p-p^{2})^{\frac{m}{2}}+o(1). \qquad (4.13)$$

That indicates that if a non-negative integer $k$ satisfies $2m-k>3m/2$, or equivalently $0\le k<m/2$, then the coefficient of $n^{2m-k}$ in the numerator of the LHS of Eq (4.13) must vanish for any given $p\in(0,1)$, namely

$$\sum_{s+t=k}a^{(m)}_{s,t}\,p^{m-s}=0,\qquad s+t=k<m/2,$$

holds for any $p\in(0,1)$. Thereby, for non-negative integers $s$ and $t$ satisfying $s+t=k<m/2$, the equation $a^{(m)}_{s,t}=0$ surely holds.
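As an aside (ours, not in the original), this vanishing of the coefficients $a^{(m)}_{s,t}$ for $s+t<m/2$ can be cross-checked symbolically for a concrete $m$; the choice $m=8$ below is arbitrary:

```python
import sympy as sp

# Expand the polynomial from (4.11) for a given m and list the coefficients
# a^(m)_{s,t} of i^(m-s) n^(m-t); those with s + t < m/2 should all vanish.
i, n = sp.symbols('i n')
m = 8
poly = sp.expand(sum(
    sp.binomial(m, j) * (-1) ** (m - j)
    * i ** (m - j) * sp.Mul(*[(i - 1 + k) for k in range(1, j + 1)])
    * n ** j * sp.Mul(*[(n + k) for k in range(j + 1, m + 1)])
    for j in range(m + 1)
))
for s in range(m + 1):
    for t in range(m + 1):
        if s + t < m / 2:
            print(s, t, poly.coeff(i, m - s).coeff(n, m - t))  # expected: 0
```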

It is interesting to notice that, for sufficiently large $m$, we immediately have the following three corresponding equations:

$$\sum_{j=0}^{m}(-1)^{m-j}\binom{m}{j}=0,$$

$$\sum_{j=2}^{m}\binom{m}{j}(-1)^{m-j}\,\frac{j(j-1)}{2}=0,$$

and

$$\sum_{j=2}^{m-1}\binom{m}{j}(-1)^{m-j}\,\frac{j(j-1)}{2}\cdot\frac{(m-j)(m+j+1)}{2}=0,$$

according to the conclusions $a^{(m)}_{0,0}=0$, $a^{(m)}_{1,0}=0$ and $a^{(m)}_{1,1}=0$.

As for the structure of $a^{(m)}_{s,t}$ when $s\ge 2$, $t\ge 1$ and $m>2(s+t)$: obviously $s<m-t$ holds on this occasion, and the term $a^{(m)}_{s,t}\,i^{m-s}n^{m-t}$ in the polynomial

$$\begin{aligned}
&\sum_{j=0}^{m}\Big\{\binom{m}{j}(-1)^{m-j}\Big[i^{m-j}\prod_{k=1}^{j}(i-1+k)\Big]\Big[n^{j}\prod_{k=j+1}^{m}(n+k)\Big]\Big\}\\
&\quad=\sum_{j=0}^{m}\Big\{\binom{m}{j}(-1)^{m-j}\Big[i^{m-j}\prod_{k=0}^{j-1}(i+k)\Big]\Big[n^{j}\prod_{k=j+1}^{m}(n+k)\Big]\Big\}\\
&\quad=\sum_{j=0}^{m}\Big\{\binom{m}{j}(-1)^{m-j}\Big[i^{m-j+1}\prod_{k=1}^{j-1}(i+k)\Big]\Big[n^{j}\prod_{k=j+1}^{m}(n+k)\Big]\Big\}\\
&\quad=\Big(\sum_{j=0}^{s}+\sum_{j=s+1}^{m-t}+\sum_{j=m-t+1}^{m}\Big)\Big\{\binom{m}{j}(-1)^{m-j}\Big[i^{m-j+1}\prod_{k=1}^{j-1}(i+k)\Big]\Big[n^{j}\prod_{k=j+1}^{m}(n+k)\Big]\Big\}
\end{aligned}$$

is also the term $a^{(m)}_{s,t}\,i^{m-s}n^{m-t}$ in the polynomial

$$\sum_{j=s+1}^{m-t}\Big\{\binom{m}{j}(-1)^{m-j}\Big[i^{m-j+1}\prod_{k=1}^{j-1}(i+k)\Big]\Big[n^{j}\prod_{k=j+1}^{m}(n+k)\Big]\Big\}.$$

Noting that, for given $j\in\{s+1,\ldots,m-t\}$, the monomial

$$\Big(\sum_{1\le u_1<u_2<\cdots<u_s\le j-1}u_1u_2\cdots u_s\Big)\,i^{m-s}$$

is the term of degree $m-s$ in the polynomial of $i$

$$i^{m-j+1}\prod_{k=1}^{j-1}(i+k),$$

while the monomial

$$\Big(\sum_{j+1\le v_1<v_2<\cdots<v_t\le m}v_1v_2\cdots v_t\Big)\,n^{m-t}$$

is the term of degree $m-t$ in the polynomial of $n$

$$n^{j}\prod_{k=j+1}^{m}(n+k),$$

we see that, for $s+t<m/2$,

$$a^{(m)}_{s,t}=\sum_{j=s+1}^{m-t}\left(\binom{m}{j}(-1)^{m-j}\sum_{1\le u_1<\cdots<u_s\le j-1}u_1\cdots u_s\sum_{j+1\le v_1<\cdots<v_t\le m}v_1\cdots v_t\right).$$

Now that $a^{(m)}_{s,t}=0$ holds provided $s+t=k<m/2$, according to Eq (4.13), we conclude the following theorem.

Theorem 5. If $s$, $t$ and $m$ are integers satisfying $s\ge 2$, $t\ge 1$ and $m>2(s+t)$, then

$$\sum_{j=s+1}^{m-t}\left(\binom{m}{j}(-1)^{m-j}\sum_{1\le u_1<u_2<\cdots<u_s\le j-1}u_1\cdots u_s\sum_{j+1\le v_1<v_2<\cdots<v_t\le m}v_1\cdots v_t\right)=0.$$

Example 5: For a large integer $m$, according to Theorem 5, we have $a^{(m)}_{2,1}=0$ and $a^{(m)}_{2,2}=0$. Correspondingly, we obtain the equations

$$\sum_{j=3}^{m-1}\left(\binom{m}{j}(-1)^{m-j}\,\frac{\big(\sum_{i=1}^{j-1}i\big)^{2}-\sum_{i=1}^{j-1}i^{2}}{2}\cdot\frac{(m+j+1)(m-j)}{2}\right)=0,$$

and

$$\sum_{j=3}^{m-2}\left(\binom{m}{j}(-1)^{m-j}\,\frac{\big(\sum_{i=1}^{j-1}i\big)^{2}-\sum_{i=1}^{j-1}i^{2}}{2}\cdot\frac{\big(\sum_{i=j+1}^{m}i\big)^{2}-\sum_{i=j+1}^{m}i^{2}}{2}\right)=0.$$

Both equations can be verified with the aid of Maple software.
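Alternatively (our illustration, with arbitrarily chosen parameter triples), the general identity of Theorem 5 can be checked numerically in a few lines of Python rather than Maple:

```python
from math import comb, prod
from itertools import combinations

def elem_sym(vals, k):
    """Elementary symmetric polynomial e_k of the given values."""
    return sum(prod(c) for c in combinations(vals, k))

def theorem5_sum(s, t, m):
    """Left-hand side of the identity stated in Theorem 5."""
    return sum(
        comb(m, j) * (-1) ** (m - j)
        * elem_sym(range(1, j), s)          # sum over 1 <= u1 < ... < us <= j-1
        * elem_sym(range(j + 1, m + 1), t)  # sum over j+1 <= v1 < ... < vt <= m
        for j in range(s + 1, m - t + 1)
    )

# A few cases with s >= 2, t >= 1, m > 2(s + t); each prints 0,
# consistent with Theorem 5.
for s, t, m in [(2, 1, 7), (2, 1, 9), (2, 2, 10), (3, 1, 10)]:
    print(s, t, m, theorem5_sum(s, t, m))
```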

Let a real $\delta>0$ and an integer $m>0$ be given. For a population satisfying condition (1.1), whether or not the population has an expectation, the moment $EX_{i:n}^{\delta}$ exists and the sequence $\{EX_{i:n}^{\delta},\ n\ge 1\}$ converges for large $i$ and $n$ satisfying $i/n\to p\in(0,1)$. Under some further mild assumptions, for large $n$ the $m$th moment of the suitably standardized sequence $\{X_{i:n},\ n\ge 1\}$ can be approximated by the $m$th moment $EZ^{m}$ of a standard normal distribution.

Because the requirement in Theorem 3 that some expectation $E|X_{s:j}|$ exists is hard to verify for a population without an expectation, real-world data corresponding to such populations have long been left untreated in the vast majority of references. Now that the alternative condition (1.8) has been presented, this situation may improve in the future, although a long way remains to go.

    This work was supported by the Science and Technology Plan Projects of Jiangxi Provincial Education Department, grant number GJJ180891.

The authors declare no conflict of interest.



    [1] M. Ahookhosh, K. Amini, S. Bahrami, A class of nonmonotone Armijo-type line search method for unconstrained optimization, Optimization, 61 (2012), 387–404. https://doi.org/10.1080/02331934.2011.641126 doi: 10.1080/02331934.2011.641126
[2] E. G. Birgin, J. M. Martínez, M. Raydan, Nonmonotone spectral projected gradient methods on convex sets, SIAM J. Optimiz., 10 (2000), 1196–1211. https://doi.org/10.1137/S1052623497330963 doi: 10.1137/S1052623497330963
    [3] J. Barzilai, J. M. Borwein, Two-point step size gradient methods, IMA J. Numer. Anal., 8 (1988), 141–148. https://doi.org/10.1093/imanum/8.1.141 doi: 10.1093/imanum/8.1.141
    [4] S. Bonettini, Inexact block coordinate descent methods with application to non-negative matrix factorization, IMA J. Numer. Anal., 31 (2011), 1431–1452. https://doi.org/10.1093/imanum/drq024 doi: 10.1093/imanum/drq024
    [5] A. Cichocki, R. Zdunek, S. Amari, Hierarchical ALS algorithms for nonnegative matrix and 3D tensor factorization, In: Independent Component Analysis and Signal Separation, Heidelberg: Springer, 2007,169–176. https://doi.org/10.1007/978-3-540-74494-8_22
    [6] A. Cristofari, M. D. Santis, S. Lucidi, F. Rinaldi, A two-stage active-set algorithm for bound-constrained optimization, J. Optim. Theory Appl., 172 (2017), 369–401. https://doi.org/10.1007/s10957-016-1024-9 doi: 10.1007/s10957-016-1024-9
    [7] Y. H. Dai, On the nonmonotone line search, J. Optim. Theory Appl., 112 (2002), 315–330. https://doi.org/10.1023/A:1013653923062 doi: 10.1023/A:1013653923062
    [8] Y. H. Dai, L. Z. Liao, R-Linear convergence of the Barzilai-Borwein gradient method, IMA J. Numer. Anal., 22 (2002), 1–10. https://doi.org/10.1093/imanum/22.1.1 doi: 10.1093/imanum/22.1.1
    [9] P. Deng, T. R. Li, H. J. Wang, D. X. Wang, S. J. Horng, R. Liu, Graph regularized sparse non-negative matrix factorization for clustering, IEEE Transactions on Computational Social Systems, 10 (2023), 910–921. https://doi.org/10.1109/TCSS.2022.3154030 doi: 10.1109/TCSS.2022.3154030
    [10] P. Deng, F. Zhang, T. R. Li, H. J. Wang, S. J. Horng, Biased unconstrained non-negative matrix factorization for clustering, Knowl.-Based Syst., 239 (2022), 108040. https://doi.org/10.1016/j.knosys.2021.108040 doi: 10.1016/j.knosys.2021.108040
    [11] N. Gillis, The why and how of nonnegative matrix factorization, 2014, arXiv: 1401.5226. https://doi.org/10.48550/arXiv.1401.5226
    [12] R. Glowinski, Numerical methods for nonlinear variational problems, Heidelberg: Springer, 1984. https://doi.org/10.1007/978-3-662-12613-4
    [13] P. H. Gong, C. S. Zhang, Efficient nonnegative matrix factorization via projected Newton method, Pattern Recogn., 45 (2012), 3557–3565. https://doi.org/10.1016/j.patcog.2012.02.037 doi: 10.1016/j.patcog.2012.02.037
[14] N. Z. Gu, J. T. Mo, Incorporating nonmonotone strategies into the trust region method for unconstrained optimization, Comput. Math. Appl., 55 (2008), 2158–2172. https://doi.org/10.1016/j.camwa.2007.08.038
[15] N. Y. Guan, D. C. Tao, Z. G. Luo, B. Yuan, NeNMF: An optimal gradient method for nonnegative matrix factorization, IEEE T. Signal Proces., 60 (2012), 2882–2898. https://doi.org/10.1109/TSP.2012.2190406
    [16] L. X. Han, M. Neumann, U. Prasad, Alternating projected Barzilai-Borwein methods for nonnegative matrix factorization, Electronic Transactions on Numerical Analysis, 36 (2009), 54–82. https://doi.org/10.1007/978-0-8176-4751-3_16 doi: 10.1007/978-0-8176-4751-3_16
    [17] G. Hu, B. Du, X. F. Wang, G. Wei, An enhanced black widow optimization algorithm for feature selection, Knowl.-Based Syst., 235 (2022), 107638. https://doi.org/10.1016/j.knosys.2021.107638 doi: 10.1016/j.knosys.2021.107638
    [18] G. Hu, J. Y. Zhong, G. Wei, SaCHBA_PDN: Modified honey badger algorithm with multi-strategy for UAV path planning, Expert Syst. Appl., 223 (2023), 119941. https://doi.org/10.1016/j.eswa.2023.119941 doi: 10.1016/j.eswa.2023.119941
    [19] G. Hu, J. Y. Zhong, G. Wei, C. T. Chang, DTCSMO: An efficient hybrid starling murmuration optimizer for engineering applications, Comput. Method. Appl. M., 405 (2023), 115878. https://doi.org/10.1016/j.cma.2023.115878 doi: 10.1016/j.cma.2023.115878
    [20] G. Hu, J. Wang, M. Li, A. G. Hussien, M. Abbas, EJS: Multi-strategy enhanced jellyfish search algorithm for engineering applications, Mathematics, 11 (2023), 851. https://doi.org/10.3390/math11040851 doi: 10.3390/math11040851
    [21] G. Hu, R. Yang, X. Q. Qin, G. Wei, MCSA: Multi-strategy boosted chameleon-inspired optimization algorithm for engineering applications, Comput. Method. Appl. M., 403 (2022), 115676. https://doi.org/10.1016/j.cma.2022.115676 doi: 10.1016/j.cma.2022.115676
    [22] G. Hu, X. N. Zhu, G. Wei, C. Chang, An marine predators algorithm for shape optimization of developable Ball surfaces, Eng. Appl. Artif. Intel., 105 (2021), 104417. https://doi.org/10.1016/j.engappai.2021.104417 doi: 10.1016/j.engappai.2021.104417
    [23] Y. K. Huang, H. W. Liu, S. S. Zhou, Quadratic regularization projected alternating Barzilai-Borwein method for nonnegative matrix factorization, Data Min. Knowl. Disc., 29 (2015), 1665–1684. https://doi.org/10.1007/s10618-014-0390-x doi: 10.1007/s10618-014-0390-x
[24] Y. K. Huang, H. W. Liu, S. Zhou, An efficient monotone projected Barzilai-Borwein method for nonnegative matrix factorization, Appl. Math. Lett., 45 (2015), 12–17. https://doi.org/10.1016/j.aml.2015.01.003 doi: 10.1016/j.aml.2015.01.003
    [25] D. D. Lee, H. S. Seung, Learning the parts of objects by non-negative matrix factorization, Nature, 401 (1999), 788–791. https://doi.org/10.1038/44565 doi: 10.1038/44565
[26] D. D. Lee, H. S. Seung, Algorithms for non-negative matrix factorization, Advances in Neural Information Processing Systems, 13 (2001), 556–562.
    [27] X. L. Li, H. W. Liu, X. Y. Zheng, Non-monotone projection gradient method for non-negative matrix factorization, Comput. Optim. Appl., 51 (2012), 1163–1171. https://doi.org/10.1007/s10589-010-9387-6 doi: 10.1007/s10589-010-9387-6
    [28] H. W. Liu, X. L. Li, Modified subspace Barzilai-Borwein gradient method for non-negative matrix factorization, Comput. Optim. Appl., 55 (2013), 173–196. https://doi.org/10.1007/s10589-012-9507-6 doi: 10.1007/s10589-012-9507-6
    [29] C. J. Lin, Projected gradient methods for non-negative matrix factorization, Neural Comput., 19 (2007), 2756–2779. https://doi.org/10.1162/neco.2007.19.10.2756 doi: 10.1162/neco.2007.19.10.2756
    [30] H. Nosratipour, A. H. Borzabadi, O. S. Fard, On the nonmonotonicity degree of nonmonotone line searches, Calcolo, 54 (2017), 1217–1242. https://doi.org/10.1007/s10092-017-0226-3 doi: 10.1007/s10092-017-0226-3
    [31] D. Kim, S. Sra, I. S. Dhillon, Fast Newton-type methods for the least squares nonnegative matrix approximation problem, SIAM International Conference on Data Mining, 1 (2007), 343–354. https://doi.org/10.1137/1.9781611972771.31 doi: 10.1137/1.9781611972771.31
    [32] P. Paatero, U. Tapper, Positive matrix factorization: A non-negative factor model with optimal utilization of error estimates of data values, Environmetrics, 5 (1994), 111–126. https://doi.org/10.1002/env.3170050203 doi: 10.1002/env.3170050203
    [33] M. Raydan, On the Barzilai-Borwein choice of steplength for the gradient method, IMA J. Numer. Anal., 13 (1993), 321–326. https://doi.org/10.1093/imanum/13.3.321 doi: 10.1093/imanum/13.3.321
    [34] M. Raydan, The Barzilai and Borwein gradient method for the large-scale unconstrained minimization problem, SIAM J. Optimiz., 7 (1997), 26–33. https://doi.org/10.1137/S1052623494266365 doi: 10.1137/S1052623494266365
    [35] D. X. Wang, T. R. Li, P. Deng, J. Liu, W. Huang, F. Zhang, A generalized deep learning algorithm based on NMF for multi-view clustering, IEEE T. Big Data, 9 (2023), 328–340. https://doi.org/10.1109/TBDATA.2022.3163584 doi: 10.1109/TBDATA.2022.3163584
    [36] D. X. Wang, T. R. Li, P. Deng, F. Zhang, W. Huang, P. F. Zhang, et al., A generalized deep learning clustering algorithm based on non-negative matrix factorization, ACM T. Knowl. Discov. D., 17 (2023), 1–20. https://doi.org/10.1145/3584862 doi: 10.1145/3584862
    [37] D. X. Wang, T. R. Li, W. Huang, Z. P. Luo, P. Deng, P. F. Zhang, et al., A multi-view clustering algorithm based on deep semi-NMF, Inform. Fusion, 99 (2023), 101884. https://doi.org/10.1016/j.inffus.2023.101884 doi: 10.1016/j.inffus.2023.101884
    [38] Z. J. Wang, Z. S. Chen, L. Xiao, Q. Su, K. Govindan, M. J. Skibniewski, Blockchain adoption in sustainable supply chains for Industry 5.0: A multistakeholder perspective, J. Innov. Knowl., 8 (2023), 100425. https://doi.org/10.1016/j.jik.2023.100425 doi: 10.1016/j.jik.2023.100425
    [39] Z. J. Wang, Z. S. Chen, S. Qin, K. S. Chin, P. Witold, M. J. Skibniewski, Enhancing the sustainability and robustness of critical material supply in electrical vehicle market: An AI-powered supplier selection approach, Ann. Oper. Res., 2023 (2023), 102690. https://doi.org/10.1007/s10479-023-05698-4 doi: 10.1007/s10479-023-05698-4
    [40] Z. J. Wang, Y. Y. Sun, Z. S. Chen, G. Z. Feng, Q. Su, Optimal versioning strategy of enterprise software considering the customer cost-acceptance level, Kybernetes, 52 (2023), 997–1026. https://doi.org/10.1108/K-04-2021-0339 doi: 10.1108/K-04-2021-0339
    [41] Z. J. Wang, Y. Y. Sun, Q. Su, M. Deveci, K. Govindan, M. J. Skibniewski, et al., Smart contract application in resisting extreme weather risks for the prefabricated construction supply chain: prototype exploration and assessment, Group Decis. Negot., (2024). https://doi.org/10.1007/s10726-024-09877-x
    [42] Y. H. Xiao, Q. J. Hu, Subspace Barzilai-Borwein gradient method for large-scale bound constrained optimization, Appl. Math. Optim., 58 (2008), 275–290. https://doi.org/10.1007/s00245-008-9038-9 doi: 10.1007/s00245-008-9038-9
    [43] Y. H. Xiao, Q. J. Hu, Z. X. Wei, Modified active set projected spectral gradient method for bound constrained optimization, Appl. Math. Model., 35 (2011), 3117–3127. https://doi.org/10.1016/j.apm.2010.09.011 doi: 10.1016/j.apm.2010.09.011
    [44] Y. Y. Xu, W. T. Yin, A block coordinate descent method for regularized multi-convex optimization with applications to nonnegative tensor factorization and completion, SIAM J. Imaging Sci., 6 (2013), 1758–1789. https://doi.org/10.1137/120887795 doi: 10.1137/120887795
    [45] H. C. Zhang, W. W. Hager, A nonmonotone line search technique and its application to unconstrained optimization, SIAM J. Optimiz., 14 (2004), 1043–1056. https://doi.org/10.1137/S1052623403428208 doi: 10.1137/S1052623403428208
    [46] R. Zdunek, A. Cichocki, Fast nonnegative matrix factorization algorithms using projected gradient approaches for large-scale problems, Comput. Intel. Neurosc., 2008 (2008), 939567. https://doi.org/10.1155/2008/939567 doi: 10.1155/2008/939567
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)