Research article

A redistributed cutting plane bundle-type algorithm for multiobjective nonsmooth optimization

  • Received: 08 January 2022 Revised: 16 April 2022 Accepted: 21 April 2022 Published: 05 May 2022
  • MSC : 49J52, 65K10, 90C26, 90C29

  • I construct a new cutting-plane model for approximating nonsmooth nonconvex functions in multiobjective optimization and propose a new bundle-type method with the help of an improvement function. The presented bundle method possesses three features. Firstly, the objective and constraint functions are approximated by a new cutting-plane model, which is a local convexification of the corresponding functions, rather than a global approximation of the entire functions, as in most bundle methods. Secondly, the subgradients and the values of the objective and constraint functions are computed approximately; in other words, approximate calculation is applied throughout, and the proposed algorithm is doubly approximate to some extent. Thirdly, the introduction of the improvement function eliminates the need for any scalarization, the usual technique for dealing with multiobjective optimization. Under reasonable conditions, satisfactory convergence results are obtained.

    Citation: Jia-Tong Li. A redistributed cutting plane bundle-type algorithm for multiobjective nonsmooth optimization[J]. AIMS Mathematics, 2022, 7(7): 12827-12841. doi: 10.3934/math.2022710




    $\mathbb{R}^n$: the $n$-dimensional Euclidean space
    $f_i$: the objective functions
    $c$: the constraint function
    $x^{T}$: the transpose of the vector $x\in\mathbb{R}^n$
    $\langle x,y\rangle$: the inner product of $x\in\mathbb{R}^n$ and $y\in\mathbb{R}^n$
    $\|\cdot\|$: the Euclidean norm in $\mathbb{R}^n$
    $f^{o}(x;d)$: the Clarke generalized directional derivative of $f$ at $x\in\mathbb{R}^n$ in the direction $d\in\mathbb{R}^n$
    $\partial f$: the Clarke subdifferential of $f$
    $K_S(x)$: the contingent cone of $S$ at $x\in\mathbb{R}^n$
    $S^{\circ}$: the polar cone of $S$
    $C^1$: continuous derivatives up to order 1
    $B_\rho(x)$: the open ball centered at $x$ with radius $\rho$
    $\operatorname{argmin} f(x)$: a point where $f$ attains its minimum value


    Nonsmooth optimization problems arise in many fields of application, for example, in economics [1] and mechanical engineering [2], and they often have several objectives. During the last three decades, the areas of nonsmooth and multiobjective optimization have developed rapidly [3,4,5,6]. Multiobjective optimization (also known as multicriteria optimization) is an area of multiple criteria decision making concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. It has been applied in many fields of science, including engineering, economics and logistics, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. However, methods for nonsmooth multiobjective optimization are much scarcer, and there is an increasing demand to solve optimization problems with several nonsmooth objective functions efficiently.

    Quantum computing and quantum computers also play an important role in optimization problems [7,8]. Han and Kim [7] proposed a novel computing method called the genetic quantum algorithm (GQA), which is based on concepts and principles of quantum computing such as qubits and the superposition of states. The effectiveness and applicability of the GQA were demonstrated by experimental results on the knapsack problem, a well-known combinatorial optimization problem.

    Bundle methods are among the most successful methods for nonsmooth convex optimization problems [9,10,11], but little systematic work has been done on extending convex bundle methods to the nonconvex case. Most nonconvex bundle methods [12,13,14,15,16,17,18,19] belong to the proximal type, and they follow the idea of earlier dual methods by employing "subgradient locality measures" in order to make the linearization errors nonnegative. The introduction of quadratic subgradient locality measures has the drawback that penalty parameters have to be fixed a priori. The redistributed proximal bundle algorithms [20,21] are designed to deal with nonconvex functions: their cutting-plane model forms a certain local convexification centered at the stability center, instead of approximating the entire function, and the linearization errors are guaranteed to be nonnegative by updating the local convexification parameter.

    In some cases, exact information about the objective and constraint functions is expensive or unnecessary, which makes inexact bundle methods a necessity. In general, bundle methods based on inexact evaluations of the objective and constraint functions are called inexact or approximate bundle methods. An early work can be found in Kiwiel [22]. Hintermüller [23] deals with inexact subgradients, but the objective function must be evaluated exactly. Furthermore, uncontrollable inexactness is considered in [24], where noise attenuation techniques cope with an inexact oracle derived from a stochastic objective. For the latest unified theory of convex inexact bundle methods, see [25,26,27].

    In this paper, I generalize the redistributed proximal bundle method for single-objective nonsmooth optimization [28] to nonsmooth nonconvex multiobjective optimization with the help of an improvement function. I employ approximate objective and constraint function values together with approximate subgradients to construct a new cutting-plane model for approximating the nonconvex functions. The model is a local convexification that overcomes the difficulty caused by the nonconvexity, and it is worth pointing out that the errors need not vanish in the limit, which makes the proposed algorithm widely applicable. I prove that the proposed method is implementable and obtains a satisfactory convergence result.

    The rest of this paper is organized as follows: Section 2 presents some basic concepts and results of nonsmooth and multiobjective optimization theory. Section 3 gives the cutting-plane model of a local convexification of the improvement function and describes the concrete redistributed proximal bundle method for nonsmooth multiobjective optimization. Some satisfactory convergence results are obtained in Section 4. Finally, in Section 5 some conclusions are given.

    Consider a nonsmooth nonconvex multiobjective optimization problem of the form

    $$\begin{cases}\min & \{f_1(x),f_2(x),\dots,f_h(x)\}\\ \text{s.t.} & x\in S,\end{cases} \tag{2.1}$$

    where $S=\{x\in\mathbb{R}^n \mid c(x)\le 0\}$, and $f_i:\mathbb{R}^n\to\mathbb{R}$, $i=1,2,\dots,h$, and $c:\mathbb{R}^n\to\mathbb{R}$ are locally Lipschitz continuous functions. As in most related literature [29], we consider only a scalar constraint function $c$ since, if multiple inequality constraints appear, $c$ can be defined as the pointwise maximum of finitely many constraint functions, thus covering the case of multiple inequality constraints. Therefore, there is no loss of generality in formulating problem (2.1) with only one constraint.
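    As a concrete illustration of this reduction, the following minimal sketch (in Python, with hypothetical constraints $c_1,c_2,c_3$ that are not taken from the paper) folds several inequality constraints into the single scalar constraint $c$:

```python
import numpy as np

# Hypothetical example: three constraints on R^2 are folded into one
# locally Lipschitz constraint c via a pointwise maximum, so that the
# feasible set is S = {x : c(x) <= 0}.
def c1(x): return x[0] + x[1] - 1.0
def c2(x): return x[0] ** 2 - x[1]
def c3(x): return -x[0]

def c(x):
    """c(x) = max_i c_i(x); c(x) <= 0 iff every c_i(x) <= 0."""
    return max(c1(x), c2(x), c3(x))

x = np.array([0.3, 0.5])
print(c(x) <= 0.0)  # True: x satisfies all three constraints
```

    Note that $c$ remains locally Lipschitz (a finite maximum of locally Lipschitz functions), which is all that problem (2.1) requires.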

    For a locally Lipschitz continuous function $f:\mathbb{R}^n\to\mathbb{R}$, the Clarke generalized directional derivative [13] at $x\in\mathbb{R}^n$ in the direction $d\in\mathbb{R}^n$ is defined by

    $$f^{o}(x;d)=\limsup_{y\to x,\ t\downarrow 0}\frac{f(y+td)-f(y)}{t},$$

    and the Clarke subdifferential [13] of $f$ at $x\in\mathbb{R}^n$ is defined by

    $$\partial f(x)=\{\xi\in\mathbb{R}^n \mid f^{o}(x;d)\ge \xi^{T}d \ \text{ for all } d\in\mathbb{R}^n\},$$

    which is a nonempty, convex and compact subset of $\mathbb{R}^n$.

    A function $f:\mathbb{R}^n\to\mathbb{R}$ is $f^{o}$-pseudoconvex [30] if it is locally Lipschitz continuous and, for all $x,y\in\mathbb{R}^n$,

    $$f(y)<f(x)\quad\text{implies}\quad f^{o}(x;\,y-x)<0,$$

    and it is $f^{o}$-quasiconvex [30] if

    $$f(y)\le f(x)\quad\text{implies}\quad f^{o}(x;\,y-x)\le 0.$$

    A vector $x^{*}\in\mathbb{R}^n$ is said to be a global Pareto optimum [3] of (2.1) if there does not exist $x\in S$ such that

    $$f_i(x)\le f_i(x^{*})\ \text{ for all } i=1,2,\dots,h\quad\text{and}\quad f_j(x)<f_j(x^{*})\ \text{ for some } j.$$

    A vector $x^{*}\in\mathbb{R}^n$ is said to be a global weak Pareto optimum [3] of (2.1) if there does not exist $x\in S$ such that

    $$f_i(x)<f_i(x^{*})\ \text{ for all } i=1,2,\dots,h.$$

    A vector $x^{*}\in\mathbb{R}^n$ is said to be a local (weak) Pareto optimum [3] of (2.1) if there exists $\delta>0$ such that $x^{*}$ is a global (weak) Pareto optimum on $B_\delta(x^{*})\cap S$. Trivially, every Pareto optimal point is weakly Pareto optimal.
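    The following finite-sample sketch (hypothetical objective vectors; Python is used here purely for illustration) checks the dominance relations underlying these definitions; the definitions themselves, of course, quantify over the whole feasible set $S$ rather than over a finite candidate list:

```python
import numpy as np

def dominates(fx, fy):
    """fx Pareto-dominates fy: fx_i <= fy_i for all i, fx_j < fy_j for some j."""
    return bool(np.all(fx <= fy) and np.any(fx < fy))

def weakly_dominates(fx, fy):
    """fx weakly dominates fy: fx_i < fy_i for all i."""
    return bool(np.all(fx < fy))

# Hypothetical objective vectors (f_1(x), f_2(x)) of four candidate points.
F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 1.0], [3.0, 3.0]])
y = F[3]
print(any(dominates(fx, y) for fx in F))  # True: (2,2) dominates (3,3)
```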

    The contingent cone and polar cone [13] of a set $S\subseteq\mathbb{R}^n$ at $x\in\mathbb{R}^n$ are defined respectively as

    $$K_S(x)=\{d\in\mathbb{R}^n \mid \text{there exist } t_i\downarrow 0 \text{ and } d_i\to d \text{ with } x+t_i d_i\in S\},$$
    $$S^{\circ}=\{d\in\mathbb{R}^n \mid \langle s,d\rangle\le 0\ \text{ for all } s\in S\}.$$

    Furthermore, let

    $$\mathcal{F}(x)=\bigcup_{i=1}^{h}\partial f_i(x),\qquad \mathcal{G}(x)=\{\partial c(x) \mid c(x)=0\}.$$

    For the optimality condition we pose the following constraint qualification:

    $$\mathcal{G}(x)^{\circ}\subseteq K_S(x). \tag{2.2}$$

    The following statements are equivalent [31,32]:

    (i) $f$ is lower-$C^1$ on $S$.

    (ii) $\forall\,\bar{x}\in S,\ \forall\varepsilon>0,\ \exists\rho>0$: $\forall x\in B_\rho(\bar{x})$ and $g\in\partial f(x)$, we have

    $$f(x+u)\ge f(x)+\langle g,u\rangle-\varepsilon\|u\|,$$

    whenever $\|u\|\le\rho$ and $x+u\in B_\rho(\bar{x})$.

    (iii) $\forall\,\bar{x}\in S,\ \forall\varepsilon>0,\ \exists\rho>0$: $\forall y_1,y_2\in B_\rho(\bar{x})$ and $g_1\in\partial f(y_1)$, $g_2\in\partial f(y_2)$, we have

    $$\langle g_1-g_2,\,y_1-y_2\rangle\ge-\varepsilon\|y_1-y_2\|.$$

    (iv) $f$ is semismooth and regular on $S$.

    In this part I generalize the redistributed proximal bundle method for single-objective optimization to nonsmooth nonconvex multiobjective optimization. The improvement function is employed to handle the multiple objectives, and the admission of inexactness in the available information makes the proposed algorithm widely applicable.

    The improvement function [11] $H:\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}$ is defined by

    $$H(x,y)=\max\{f_i(x)-f_i(y),\ c(x) \mid i=1,2,\dots,h\}.$$
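    The definition translates directly into code; a minimal sketch (names hypothetical), reusing the convention that `f_list` holds the objectives $f_1,\dots,f_h$ and `c` is the scalar constraint:

```python
def improvement(f_list, c, x, y):
    """H(x, y) = max{ f_i(x) - f_i(y), c(x) : i = 1,...,h }.
    H(x, y) < 0 means x is feasible and strictly improves every objective
    relative to y, so minimizing H(., y) drives weak Pareto improvement."""
    return max([fi(x) - fi(y) for fi in f_list] + [c(x)])
```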

    The next theorem reveals the relationship between the improvement function and problem (2.1), and it provides the theoretical foundation for constructing the concrete algorithm.

    Theorem 3.1. [31] A necessary condition for $x^{*}\in\mathbb{R}^n$ to be a global weak Pareto optimum of (2.1) is that

    $$x^{*}=\operatorname*{argmin}_{x\in\mathbb{R}^n} H(x,x^{*}). \tag{3.1}$$

    Moreover, if $f_i$ is $f^{o}$-pseudoconvex for all $i=1,2,\dots,h$, the constraint function $c$ is $f^{o}$-quasiconvex, and the constraint qualification (2.2) is valid, then condition (3.1) is sufficient for $x^{*}\in\mathbb{R}^n$ to be a global weak Pareto optimum of (2.1).

    Let $\hat{x}^k\in\mathbb{R}^n$ be the current stability center, $J_i^k$ be the index set of the information used for function $f_i$, $i=1,2,\dots,h$, and $J^k$ be the index set for the constraint function $c$ at the $k$th iteration. We seek the search direction $d^k\in\mathbb{R}^n$ as a solution to

    $$\min\ H(\hat{x}^k+d,\hat{x}^k)\quad\text{s.t.}\ d\in\mathbb{R}^n. \tag{3.2}$$

    Suppose that at the $k$th iteration, besides the current stability center $\hat{x}^k\in\mathbb{R}^n$, we have some auxiliary points $x^j\in\mathbb{R}^n$ from previous iterations. We have inexact function and subgradient values as follows:

    $$f_i^j=f_i(x^j)-\sigma^j,\quad c^j=c(x^j)-\sigma^j,\quad \hat{f}_i^k=f_i(\hat{x}^k)-\hat{\sigma}^k,\quad \hat{c}^k=c(\hat{x}^k)-\hat{\sigma}^k,$$
    $$g_i^j\in\partial f_i(x^j)+B_{\theta^j}(0),\quad g^j\in\partial c(x^j)+B_{\theta^j}(0),\quad i=1,2,\dots,h, \tag{3.3}$$

    where $\sigma^j$, $\hat{\sigma}^k$ and $\theta^j$ are unknown errors, assumed to be bounded:

    $$|\sigma^j|\le\bar{\sigma},\quad |\hat{\sigma}^k|\le\bar{\sigma},\quad 0\le\theta^j\le\bar{\theta},\quad\text{for all } j \text{ and } k, \tag{3.4}$$

    but the error terms themselves and their bounds $\bar{\sigma}$ and $\bar{\theta}$ are generally unknown.
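    For experimentation, one can synthesize such an oracle; the sketch below (hypothetical, using numpy) returns a value lowered by a bounded error $\sigma$ and a subgradient perturbed within $B_{\theta}(0)$, mimicking (3.3) and (3.4). For simplicity each call draws its own errors, whereas in (3.3) a single $\sigma^j$ is shared by all functions at the point $x^j$; `grad_f` stands for any routine returning one exact subgradient:

```python
import numpy as np

rng = np.random.default_rng(0)

def inexact_oracle(f, grad_f, x, sigma_bar=1e-3, theta_bar=1e-3):
    """Return (f(x) - sigma, g) with |sigma| <= sigma_bar and
    g in grad_f(x) + B_theta(0), 0 <= theta <= theta_bar, as in (3.3)-(3.4).
    The errors are bounded but uncontrolled and need not vanish."""
    sigma = rng.uniform(-sigma_bar, sigma_bar)
    theta = rng.uniform(0.0, theta_bar)
    noise = rng.uniform(-1.0, 1.0, size=np.shape(x))
    unit = noise / max(np.linalg.norm(noise), 1e-16)
    return f(x) - sigma, grad_f(x) + theta * unit
```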

    In order to deal with possible nonconvexity of $f_i$, $i=1,2,\dots,h$, and $c$, we follow the redistributed proximal approach [20] and construct a new cutting-plane model for the improvement function. Firstly, the convex piecewise linear model functions for $f_i$, $i=1,2,\dots,h$, and $c$ are given respectively by

    $$\max_{j\in J_i^k}\Big\{f_i^j+\langle g_i^j,\hat{x}^k+d-x^j\rangle+\frac{\eta_i^k}{2}\|x^j-\hat{x}^k\|^2+\eta_i^k\langle x^j-\hat{x}^k,\hat{x}^k+d-x^j\rangle\Big\}=\hat{f}_i^k+\max_{j\in J_i^k}\{-a_{ij}^k+\langle s_{ij}^k,d\rangle\}, \tag{3.5}$$
    $$\max_{j\in J^k}\Big\{c^j+\langle g^j,\hat{x}^k+d-x^j\rangle+\frac{\eta^k}{2}\|x^j-\hat{x}^k\|^2+\eta^k\langle x^j-\hat{x}^k,\hat{x}^k+d-x^j\rangle\Big\}=\hat{c}^k+\max_{j\in J^k}\{-a_j^k+\langle s_j^k,d\rangle\}, \tag{3.6}$$

    where

    $$0\le a_{ij}^k=e_{ij}^k+b_{ij}^k,\quad e_{ij}^k=\hat{f}_i^k-f_i^j-\langle g_i^j,\hat{x}^k-x^j\rangle,\quad b_{ij}^k=\frac{\eta_i^k}{2}\|x^j-\hat{x}^k\|^2,\quad s_{ij}^k=g_i^j+\eta_i^k(x^j-\hat{x}^k),\quad i=1,2,\dots,h,\ j\in J_i^k, \tag{3.7}$$
    $$0\le a_j^k=e_j^k+b_j^k,\quad e_j^k=\hat{c}^k-c^j-\langle g^j,\hat{x}^k-x^j\rangle,\quad b_j^k=\frac{\eta^k}{2}\|x^j-\hat{x}^k\|^2,\quad s_j^k=g^j+\eta^k(x^j-\hat{x}^k),\quad j\in J^k. \tag{3.8}$$

    By adjusting them dynamically along the iterations, the parameters $\eta_i^k$, $i=1,2,\dots,h$, and $\eta^k$ are taken sufficiently large to make $a_{ij}^k$, $i=1,2,\dots,h$, $j\in J_i^k$, and $a_j^k$, $j\in J^k$, nonnegative. In our redistributed proximal bundle method, we take

    $$\eta_i^k\ge\max\Big\{\max_{j\in J_i^k,\,x^j\ne\hat{x}^k}\Big\{\frac{-2e_{ij}^k}{\|x^j-\hat{x}^k\|^2}\Big\},\,0\Big\}+\gamma=:\eta_1+\gamma,\quad i=1,2,\dots,h, \tag{3.9}$$
    $$\eta^k\ge\max\Big\{\max_{j\in J^k,\,x^j\ne\hat{x}^k}\Big\{\frac{-2e_j^k}{\|x^j-\hat{x}^k\|^2}\Big\},\,0\Big\}+\gamma=:\eta_2+\gamma, \tag{3.10}$$

    where $\gamma\in\mathbb{R}$ is a small positive constant. Working with inexact information about the objective and constraint functions, the cutting-plane model for $H(\hat{x}^k+d,\hat{x}^k)$ is given by

    $$\begin{aligned}\hat{H}^k(\hat{x}^k+d)&=\max\Big\{\hat{f}_i^k+\max_{j\in J_i^k}\{-a_{ij}^k+\langle s_{ij}^k,d\rangle\}-\hat{f}_i^k,\ i=1,2,\dots,h;\ \hat{c}^k+\max_{j\in J^k}\{-a_j^k+\langle s_j^k,d\rangle\}\Big\}\\&=\max\Big\{\max_{j\in J_i^k}\{-a_{ij}^k+\langle s_{ij}^k,d\rangle\},\ i=1,2,\dots,h;\ \hat{c}^k+\max_{j\in J^k}\{-a_j^k+\langle s_j^k,d\rangle\}\Big\}. \end{aligned}\tag{3.11}$$
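    To make the construction concrete, here is a sketch (hypothetical helper, numpy assumed) that assembles the quantities of (3.7) and (3.9) for one function from a bundle of triples $(x^j,f^j,g^j)$ of inexact values; the constraint data of (3.8) and (3.10) are handled identically:

```python
import numpy as np

def convexified_cuts(x_hat, f_hat, bundle, gamma=1e-2):
    """Compute eta per (3.9), then the shifted errors a_j and slopes s_j of
    (3.7): e_j = f_hat - f_j - <g_j, x_hat - x_j>, b_j = (eta/2)|x_j - x_hat|^2,
    a_j = e_j + b_j >= 0 by the choice of eta, s_j = g_j + eta (x_j - x_hat)."""
    e = np.array([f_hat - f_j - g_j @ (x_hat - x_j) for x_j, f_j, g_j in bundle])
    sqd = np.array([np.dot(x_j - x_hat, x_j - x_hat) for x_j, _, _ in bundle])
    with np.errstate(divide="ignore", invalid="ignore"):
        ratios = np.where(sqd > 0, -2.0 * e / sqd, 0.0)   # -2 e_j / |x_j - x_hat|^2
    eta = max(float(ratios.max(initial=0.0)), 0.0) + gamma  # (3.9)
    a = e + 0.5 * eta * sqd                                  # nonnegative
    s = np.array([g_j + eta * (x_j - x_hat) for x_j, _, g_j in bundle])
    return eta, a, s
```

    Note that a bundle entry at the center itself ($x^j=\hat{x}^k$) automatically receives $e_j=0$, $b_j=0$, and hence $a_j=0$, consistent with the remark leading to (3.13) below.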

    We also define the inexact value of $H(\hat{x}^k+d,\hat{x}^k)$ by

    $$\tilde{H}(\hat{x}^k+d,\hat{x}^k):=\tilde{H}^k(\hat{x}^k+d)=\max\{\hat{f}_i(\hat{x}^k+d)-\hat{f}_i^k,\ i=1,2,\dots,h;\ \hat{c}(\hat{x}^k+d)\}, \tag{3.12}$$

    where $\hat{f}_i(\hat{x}^k+d)$, $i=1,2,\dots,h$, and $\hat{c}(\hat{x}^k+d)$ are evaluated approximately according to (3.3); this quantity will be used in the convergence analysis of the proposed algorithm. Note that for some $J\in\big(\bigcap_{i=1}^{h}J_i^k\big)\cap J^k$ we have $\hat{x}^k=x^J$. Therefore, $b_{iJ}^k=b_J^k=0$ and $e_{iJ}^k=e_J^k=0$, so $a_{iJ}^k=a_J^k=0$. Hence,

    $$\hat{H}^k(\hat{x}^k)=\max\{0,\hat{c}^k\}. \tag{3.13}$$

    Obviously, we have

    $$\hat{H}^k(\hat{x}^k)=\tilde{H}^k(\hat{x}^k). \tag{3.14}$$

    The new search direction $d^k\in\mathbb{R}^n$ is given by solving the proximal point subproblem

    $$\min_{d\in\mathbb{R}^n}\ \hat{H}^k(\hat{x}^k+d)+\frac{\|d\|^2}{2t_k}, \tag{3.15}$$

    where $0<t_k\in\mathbb{R}$ is an inverse proximal parameter. From the optimality condition of the subproblem above, we obtain

    $$0\in\partial\hat{H}^k(x^{k+1})+\frac{d^k}{t_k}, \tag{3.16}$$

    where $x^{k+1}=\hat{x}^k+d^k$. Since the model (3.11) is piecewise linear, there exist simplicial multipliers

    $$\alpha_i^k\in\mathbb{R}^{|J_i^k|},\ \alpha_{ij}^k\ge 0,\ i=1,2,\dots,h,\ j\in J_i^k;\quad \alpha^k\in\mathbb{R}^{|J^k|},\ \alpha_j^k\ge 0,\ j\in J^k;\quad \sum_{i=1}^{h}\sum_{j\in J_i^k}\alpha_{ij}^k+\sum_{j\in J^k}\alpha_j^k=1 \tag{3.17}$$

    such that

    $$d^k=-t_kG^k,\qquad G^k=\sum_{i=1}^{h}\sum_{j\in J_i^k}\alpha_{ij}^k s_{ij}^k+\sum_{j\in J^k}\alpha_j^k s_j^k. \tag{3.18}$$
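    Since the model (3.11) is a pointwise maximum of affine functions, subproblem (3.15) is a convex QP in epigraph form, and the simplicial multipliers (3.17) are the dual variables of the cut constraints. A sketch using the cvxpy modeling package (an assumption made purely for illustration; the paper does not prescribe a solver):

```python
import cvxpy as cp

def solve_prox_subproblem(cuts_obj, cuts_con, c_hat, t_k, n):
    """Solve (3.15) in epigraph form:
         min_{d, r}  r + |d|^2 / (2 t_k)
         s.t.        r >= -a + s @ d          for objective cuts (a, s),
                     r >= c_hat - a + s @ d   for constraint cuts (a, s).
    Returns (d^k, G^k) with G^k = -d^k / t_k, cf. (3.18)."""
    d, r = cp.Variable(n), cp.Variable()
    cons = [r >= -a + s @ d for a, s in cuts_obj]
    cons += [r >= c_hat - a + s @ d for a, s in cuts_con]
    cp.Problem(cp.Minimize(r + cp.sum_squares(d) / (2.0 * t_k)), cons).solve()
    return d.value, -d.value / t_k
```

    After the solve, the multipliers $\alpha_{ij}^k$, $\alpha_j^k$ of (3.17) can be read off as the dual values of the corresponding constraints (e.g., `cons[i].dual_value` in cvxpy).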

    Once the new iterate is known, we define the aggregate linearization

    $$A^k(\hat{x}^k+d)=\hat{H}^k(x^{k+1})+\langle G^k,d-d^k\rangle, \tag{3.19}$$

    that is,

    $$A^k(x)=\hat{H}^k(x^{k+1})+\langle G^k,x-x^{k+1}\rangle. \tag{3.20}$$

    Thus, we have

    $$A^k(x^{k+1})=\hat{H}^k(x^{k+1}),\qquad G^k\in\partial\hat{H}^k(x^{k+1}),\qquad G^k=\nabla A^k(\hat{x}^k+d)\ \text{ for all } d\in\mathbb{R}^n. \tag{3.21}$$

    By the subgradient inequality, it holds that

    $$A^k(\hat{x}^k+d)\le\hat{H}^k(\hat{x}^k+d)\quad\text{for all } d\in\mathbb{R}^n. \tag{3.22}$$

    The aggregate error is defined by

    $$E^k=\hat{H}^k(\hat{x}^k)-\hat{H}^k(x^{k+1})+\langle G^k,d^k\rangle\ (\ge 0), \tag{3.23}$$

    and it has the following equivalent expression:

    $$E^k=\sum_{i=1}^{h}\sum_{j\in J_i^k}\alpha_{ij}^k a_{ij}^k+\sum_{j\in J^k}\alpha_j^k a_j^k+|\hat{c}^k|=\hat{H}^k(\hat{x}^k)-A^k(x^{k+1})+\langle G^k,d^k\rangle. \tag{3.24}$$

    Similarly, for the aggregate linearization, it holds that

    $$A^k(\hat{x}^k+d)=\begin{cases}-E^k+\langle G^k,d\rangle, & \hat{c}^k\le 0,\\ 2\hat{c}^k-E^k+\langle G^k,d\rangle, & \hat{c}^k\ge 0.\end{cases}$$

    In order to check whether the new iterate provides sufficient decrease, the predicted decrease is defined by

    $$\delta^k=\tilde{H}^k(\hat{x}^k)-\hat{H}^k(\hat{x}^k)+E^k+t_k\|G^k\|^2=E^k+t_k\|G^k\|^2. \tag{3.25}$$

    It follows from (3.23) that

    $$\delta^k\ge 0. \tag{3.26}$$

    The reason we use $\delta^k$ to stop the algorithm will be clarified by the relations in Theorem 4.1; it has the following equivalent expression:

    $$\delta^k=\tilde{H}^k(\hat{x}^k)-\hat{H}^k(\hat{x}^k)+E^k+t_k\|G^k\|^2=\tilde{H}^k(\hat{x}^k)-\hat{H}^k(x^{k+1})+\langle G^k,d^k\rangle+t_k\|G^k\|^2=\tilde{H}^k(\hat{x}^k)-\hat{H}^k(x^{k+1}),$$

    where the last equality uses $\langle G^k,d^k\rangle=-t_k\|G^k\|^2$, which follows from (3.18).

    Our assumptions for defining the next model function $\hat{H}^{k+1}$ are standard:

    $$\hat{H}^{k+1}(\hat{x}^k+d)\ge\max\{0,\hat{c}^{k+1}\}-a_{k+1}^{k+1}+\langle s_{k+1}^{k+1},d\rangle,\qquad \hat{H}^{k+1}(\hat{x}^k+d)\ge A^k(\hat{x}^k+d),\quad\text{for all } d\in\mathbb{R}^n. \tag{3.27}$$

    A Redistributed Bundle-Type Algorithm (RBTA):

    Step 0 (Initialization) Choose an initial point $x^1\in S$, compute $(f_i^1,g_i^1)$ and $(c^1,g^1)$, and select $m\in(0,1)$, $\gamma>0$, and a stopping tolerance $tol\ge 0$. Choose a parameter $t_1>0$. Set the iteration counter $k=1$, the index sets $J_i^1=J^1=\{1\}$, and $\hat{f}_i^1:=f_i^1$, $\hat{c}^1:=c^1$, $\hat{x}^1:=x^1$, $i=1,2,\dots,h$.

    Step 1 (Model Construction and Trial Point Finding) Given the model $\hat{H}^k$ defined by (3.11), compute $d^k$ by solving (3.15), and define the associated $G^k$, $E^k$ and $\delta^k$ by (3.18), (3.23) and (3.25), respectively. Set $x^{k+1}=\hat{x}^k+d^k$. If $\delta^k\le tol$, stop.

    Step 2 (Descent Test) Compute $(f_i^{k+1},g_i^{k+1})$, $i=1,2,\dots,h$, and $(c^{k+1},g^{k+1})$. If

    $$c^{k+1}>\hat{c}^k-m\delta^k,$$

    then declare a null step and go to Step 3. Otherwise, declare a serious step and set $\hat{x}^{k+1}=x^{k+1}$, $\hat{f}^{k+1}=f^{k+1}$, $\hat{c}^{k+1}=c^{k+1}$. Select $t_{k+1}>0$ and go to Step 4.

    Step 3 (Null Step) Set $\hat{x}^{k+1}=\hat{x}^k$, $\hat{f}^{k+1}=\hat{f}^k$, $\hat{c}^{k+1}=\hat{c}^k$, and select $t_k>t_{k+1}>0$.

    Step 4 (Bundle Update and Loop) Select the bundle index sets $J_i^{k+1}$, $J^{k+1}$, $i=1,2,\dots,h$, keeping the active elements. Select $\eta_i^{k+1}$, $\eta^{k+1}$ as in (3.9) and (3.10), and update the model $\hat{H}^{k+1}$ as needed. Increase $k$ by 1 and go to Step 1.
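    For orientation, the following condensed driver shows how Steps 0-4 fit together. It is an illustrative sketch only, built on the hypothetical helpers sketched earlier (`convexified_cuts`, `solve_prox_subproblem`, and `inexact_oracle`-style oracles returning a value/subgradient pair), with simplified $t_k$ updates and no bundle compression; it is not the author's reference implementation:

```python
import numpy as np

def rbta(oracles_f, oracle_c, x1, t1=1.0, m=0.5, gamma=1e-2, tol=1e-6, kmax=200):
    """Skeleton of the RBTA: build the local convexification (3.5)-(3.11),
    solve (3.15), and test descent on the inexact constraint value (Step 2)."""
    x_hat = np.asarray(x1, dtype=float)
    n, t_k = x_hat.size, t1
    f_info = [orc(x_hat) for orc in oracles_f]          # inexact (f_i, g_i)
    c_val, c_g = oracle_c(x_hat)                        # inexact (c, g)
    f_hat, c_hat = [v for v, _ in f_info], c_val
    bundles_f = [[(x_hat, v, g)] for v, g in f_info]    # J_i^1 = {1}
    bundle_c = [(x_hat, c_val, c_g)]                    # J^1 = {1}
    for _ in range(kmax):
        # Step 1: model construction and trial point finding.
        cuts_obj = []
        for fh, B in zip(f_hat, bundles_f):
            _, a, s = convexified_cuts(x_hat, fh, B, gamma)
            cuts_obj += list(zip(a, s))
        _, a_c, s_c = convexified_cuts(x_hat, c_hat, bundle_c, gamma)
        cuts_con = list(zip(a_c, s_c))
        d_k, G_k = solve_prox_subproblem(cuts_obj, cuts_con, c_hat, t_k, n)
        x_new = x_hat + d_k
        # Aggregate error (3.23) and predicted decrease (3.25).
        H_center = max(0.0, c_hat)
        H_new = max(max(-a + s @ d_k for a, s in cuts_obj),
                    c_hat + max(-a + s @ d_k for a, s in cuts_con))
        E_k = H_center - H_new + G_k @ d_k
        delta_k = E_k + t_k * (G_k @ G_k)
        if delta_k <= tol:
            break
        # Step 2: descent test on the inexact constraint value.
        f_new = [orc(x_new) for orc in oracles_f]
        c_new, c_new_g = oracle_c(x_new)
        if c_new > c_hat - m * delta_k:
            t_k *= 0.5                                   # Step 3: null step
        else:                                            # serious step
            x_hat, f_hat, c_hat = x_new, [v for v, _ in f_new], c_new
        # Step 4: bundle update (no compression in this sketch).
        for B, (v, g) in zip(bundles_f, f_new):
            B.append((x_new, v, g))
        bundle_c.append((x_new, c_new, c_new_g))
    return x_hat
```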

    In this section I adapt the usual rationale of convergence proofs of bundle methods by considering two cases: infinitely many serious steps, and finitely many serious steps followed by infinitely many null steps. It is shown that in both cases the approximate stationarity condition holds under reasonable conditions.

    Lemma 4.1. Suppose the set $\bigcup_{i=1}^{h}\{j\in J_i^k \mid \alpha_{ij}^k>0\}\cup\{j\in J^k \mid \alpha_j^k>0\}$ is uniformly bounded in $k$. If $E^k\to 0$ as $k\to\infty$, and $\{\hat{c}^k\}\to 0$ as $k\to\infty$, then

    (i) $\sum_{i=1}^{h}\sum_{j\in J_i^k}\alpha_{ij}^k\|x^j-\hat{x}^k\|+\sum_{j\in J^k}\alpha_j^k\|x^j-\hat{x}^k\|\to 0$ as $k\to\infty$.

    If, in addition, for some subset $K\subseteq\{1,2,\dots\}$, $\hat{x}^k\to\bar{x}$, $G^k\to\bar{G}$ as $K\ni k\to\infty$ with the set $\bigcup_{i=1}^{h}\{\eta_i^k \mid k\in K\}\cup\{\eta^k \mid k\in K\}$ bounded, then we have

    (ii) $\bar{G}\in\partial H(\bar{x},\bar{x})+2B_{\bar{\theta}}(0)$.

    If, in addition, $G^k\to 0$ as $K\ni k\to\infty$, then

    (iii) $\bar{x}\in\mathbb{R}^n$ satisfies the following approximate stationarity condition:

    $$0\in\partial H(\bar{x},\bar{x})+2B_{\bar{\theta}}(0).$$

    Finally, if in addition $f_i$, $i=1,2,\dots,h$, and $c$ are lower-$C^1$, then for each $\varepsilon>0$ there exists $\rho>0$ such that

    (iv)

    $$H(y,\bar{x})\ge H(\bar{x},\bar{x})-2\bar{\sigma}-(\bar{\theta}+\varepsilon)\|y-\bar{x}\|,\quad\forall y\in B_\rho(\bar{x}).$$

    Proof. (i) According to the way of choosing $\eta_1$ and $\eta_2$, we have

    $$a_{ij}^k=e_{ij}^k+\frac{\eta_i^k}{2}\|x^j-\hat{x}^k\|^2\ge e_{ij}^k+\frac{\eta_1+\gamma}{2}\|x^j-\hat{x}^k\|^2\ge\frac{\gamma}{2}\|x^j-\hat{x}^k\|^2\ge 0.$$

    Furthermore, since $0\le\alpha_{ij}^k\le 1$, it follows from $E^k\to 0$ and (3.24) that

    $$0\leftarrow\alpha_{ij}^k a_{ij}^k\ge(\alpha_{ij}^k)^2 a_{ij}^k\ge\frac{\gamma}{2}\big(\alpha_{ij}^k\|x^j-\hat{x}^k\|\big)^2\ge 0,\quad\text{as } k\to\infty.$$

    Thus, we obtain $\alpha_{ij}^k\|x^j-\hat{x}^k\|\to 0$ as $k\to\infty$ for all $i=1,2,\dots,h$, $j\in J_i^k$. Similarly, $\alpha_j^k\|x^j-\hat{x}^k\|\to 0$ as $k\to\infty$ for all $j\in J^k$. By the assumption, the sum in item (i) is over a finite set of indices, and each element in the sum tends to zero, so assertion (i) holds.

    (ii) For each $j$ and $i=1,2,\dots,h$, choose $p_i^j$ to be the orthogonal projection of $g_i^j$ onto $\partial f_i(x^j)$, so that $\|g_i^j-p_i^j\|\le\theta^j\le\bar{\theta}$, and $p^j$ to be the orthogonal projection of $g^j$ onto $\partial c(x^j)$, so that $\|g^j-p^j\|\le\theta^j\le\bar{\theta}$. By (3.7), (3.8) and (3.18), we have

    $$G^k=\sum_{i=1}^{h}\sum_{j\in J_i^k}\alpha_{ij}^k p_i^j+\sum_{i=1}^{h}\sum_{j\in J_i^k}\alpha_{ij}^k(g_i^j-p_i^j)+\sum_{i=1}^{h}\eta_i^k\sum_{j\in J_i^k}\alpha_{ij}^k(x^j-\hat{x}^k)+\sum_{j\in J^k}\alpha_j^k p^j+\sum_{j\in J^k}\alpha_j^k(g^j-p^j)+\eta^k\sum_{j\in J^k}\alpha_j^k(x^j-\hat{x}^k). \tag{4.1}$$

    Passing to a further subsequence in the set $K$ if necessary, the assumptions $\hat{x}^k\to\bar{x}$, $G^k\to\bar{G}$ as $K\ni k\to\infty$ with the set $\bigcup_{i=1}^{h}\{\eta_i^k \mid k\in K\}\cup\{\eta^k \mid k\in K\}$ bounded, together with the outer semicontinuity of the Clarke subdifferential, imply that

    $$\lim_{k\to\infty}\Big(\sum_{i=1}^{h}\sum_{j\in J_i^k}\alpha_{ij}^k p_i^j+\sum_{j\in J^k}\alpha_j^k p^j\Big)\in\operatorname{conv}\{\mathcal{F}(\bar{x})\cup\mathcal{G}(\bar{x})\}=\partial H(\bar{x},\bar{x}).$$

    Since the second and fifth terms in (4.1) both lie in $B_{\bar{\theta}}(0)$, and the third and sixth terms tend to zero by item (i), the assertion of item (ii) follows.

    (iii) The conclusion of item (iii) follows easily from item (ii) by taking $\bar{G}=0$.

    (iv) Because $f_i$, $i=1,2,\dots,h$, and $c$ are lower-$C^1$, by the equivalent characterization of lower-$C^1$ functions in Section 2 together with (3.3), (3.7), (3.8) and the nonnegativity of $b_{ij}^k$ and $b_j^k$, fixing any $\varepsilon>0$, there exists $\rho>0$ such that for any $y\in B_\rho(x^j)$ we have

    $$f_i(y)-f_i(\hat{x}^k)\ge -a_{ij}^k+\langle s_{ij}^k,y-\hat{x}^k\rangle-\eta_i^k\langle x^j-\hat{x}^k,y-\hat{x}^k\rangle+\sigma^j-\hat{\sigma}^k-(\theta^j+\varepsilon)\|y-x^j\|, \tag{4.2}$$
    $$c(y)-c(\hat{x}^k)\ge -a_j^k+\langle s_j^k,y-\hat{x}^k\rangle-\eta^k\langle x^j-\hat{x}^k,y-\hat{x}^k\rangle+\sigma^j-\hat{\sigma}^k-(\theta^j+\varepsilon)\|y-x^j\|. \tag{4.3}$$

    Taking convex combinations in (4.2) and (4.3) with the simplicial multipliers in (3.17), and using (3.24), we have

    $$\sum_{i=1}^{h}\sum_{j\in J_i^k}\alpha_{ij}^k\big[f_i(y)-f_i(\hat{x}^k)\big]+\sum_{j\in J^k}\alpha_j^k c(y)-\sum_{j\in J^k}\alpha_j^k c(\hat{x}^k)\ge -E^k+\langle\bar{G}^k,y-\hat{x}^k\rangle-2\bar{\sigma}-\sum_{i=1}^{h}\sum_{j\in J_i^k}\alpha_{ij}^k(\theta^j+\varepsilon)\|y-x^j\|-\sum_{j\in J^k}\alpha_j^k(\theta^j+\varepsilon)\|y-x^j\|, \tag{4.4}$$

    where $\bar{G}^k:=\sum_{i=1}^{h}\sum_{j\in J_i^k}\alpha_{ij}^k g_i^j+\sum_{j\in J^k}\alpha_j^k g^j$ collects the raw subgradient estimates, so that $\bar{G}^k-G^k\to 0$ by item (i). Passing to the limit if necessary, using item (i) and $\bar{G}=0$ again, we obtain that

    $$H(y,\bar{x})\ge H(\bar{x},\bar{x})-2\bar{\sigma}-(\bar{\theta}+\varepsilon)\|y-\bar{x}\|,\quad\forall y\in B_\rho(\bar{x}),$$

    where we employ the relation $\max\{A,B\}\ge\lambda A+(1-\lambda)B$ for any $\lambda\in[0,1]$ and the definition of the improvement function $H(x,y)$.

    Theorem 4.1. Suppose the RBTA generates an infinite number of bounded serious steps. Then $\delta^k\to 0$ as $k\to\infty$. Suppose further that the sequences $\{\eta_i^k\}$, $i=1,2,\dots,h$, and $\{\eta^k\}$ are bounded in $k$.

    (i) If $\sum_{k=1}^{\infty}t_k=+\infty$, then $E^k\to 0$ as $k\to\infty$, and there exist $K\subseteq\{1,2,\dots\}$ and $\bar{x}$ such that $\hat{x}^k\to\bar{x}$, $G^k\to 0$ as $K\ni k\to\infty$. In particular, if the set $\bigcup_{i=1}^{h}\{j\in J_i^k \mid \alpha_{ij}^k>0\}\cup\{j\in J^k \mid \alpha_j^k>0\}$ is uniformly bounded in $k$, then the conclusions of Lemma 4.1 hold.

    (ii) If $\liminf_{k\to\infty}t_k>0$, then these assertions hold for all accumulation points $\bar{x}$ of $\{\hat{x}^k\}$.

    Proof. If we take a serious step at the $k$th iteration, we have

    $$\hat{c}^{k+1}\le\hat{c}^k-m\delta^k, \tag{4.5}$$

    and since $\delta^k\ge 0$, the sequence $\{\hat{c}^k\}$ is nonincreasing. By the boundedness of $\hat{x}^k$ and $\hat{\sigma}^k$ and the Lipschitz continuity of $c$, $\{c(\hat{x}^k)-\hat{\sigma}^k\}$ is bounded below, i.e., $\{\hat{c}^k\}$ is bounded below, and we conclude that it converges. It follows from (4.5) that

    $$0\le m\sum_{k=1}^{\infty}\delta^k\le\hat{c}^1-\lim_{k\to\infty}\hat{c}^{k+1}. \tag{4.6}$$

    As a result, $\delta^k\to 0$ as $k\to\infty$. Since $\delta^k=E^k+t_k\|G^k\|^2$, it also holds that

    $$E^k\to 0\quad\text{and}\quad t_k\|G^k\|^2\to 0\quad\text{as } k\to\infty. \tag{4.7}$$

    If $\sum_{k=1}^{\infty}t_k=+\infty$, but for some $\beta>0$, $\|G^k\|\ge\beta$ for all $k$, then (4.6) results in a contradiction. Hence, there exists an index set $K\subseteq\{1,2,\dots\}$ such that

    $$G^k\to 0\quad\text{as } K\ni k\to\infty. \tag{4.8}$$

    Furthermore, we can take a subsequence if necessary and assume $\hat{x}^k\to\bar{x}$. Item (i) is proved. If the set $\bigcup_{i=1}^{h}\{j\in J_i^k \mid \alpha_{ij}^k>0\}\cup\{j\in J^k \mid \alpha_j^k>0\}$ is uniformly bounded in $k$, then, from $E^k\to 0$ as $k\to\infty$, the conclusions of Lemma 4.1 hold.

    If $\liminf_{k\to\infty}t_k>0$, then the second relation in (4.7) implies that (4.8) holds for $K=\{1,2,\dots\}$, and thus the same assertions hold for all accumulation points of $\{\hat{x}^k\}$.

    The next simple relation we present is crucial for proving the convergence of the proposed algorithm in the case that only a finite number of serious steps occurs. Suppose the stability center is $\hat{x}^k=\hat{x}$ for all $k>\bar{k}$, where $\bar{k}$ is some positive integer. It follows from (3.8) that

    $$\begin{aligned}-a_{k+1}^{k+1}+\langle s_{k+1}^{k+1},x^{k+1}-\hat{x}^k\rangle&=-e_{k+1}^{k+1}-b_{k+1}^{k+1}+\langle g^{k+1}+\eta^{k+1}(x^{k+1}-\hat{x}^k),\,x^{k+1}-\hat{x}^k\rangle\\&=-\big(\hat{c}^k-c^{k+1}-\langle g^{k+1},\hat{x}^k-x^{k+1}\rangle\big)-\frac{\eta^{k+1}}{2}\|x^{k+1}-\hat{x}^k\|^2+\langle g^{k+1},x^{k+1}-\hat{x}^k\rangle+\eta^{k+1}\langle x^{k+1}-\hat{x}^k,x^{k+1}-\hat{x}^k\rangle\\&=c^{k+1}-\hat{c}^k+\frac{\eta^{k+1}}{2}\|x^{k+1}-\hat{x}^k\|^2\\&\ge c^{k+1}-\hat{c}^k. \end{aligned}\tag{4.9}$$

    Theorem 4.2. Suppose that a finite number of serious steps is followed by infinitely many null steps. Let $\bar{k}$ be such that $\hat{x}^k=\hat{x}$ for all $k\ge\bar{k}$. Let the sequences $\{\eta_i^k\}$, $i=1,2,\dots,h$, and $\{\eta^k\}$ be bounded in $k$, and $\liminf_{k\to\infty}t_k>0$. Then, $\hat{x}^k\to\hat{x}$, $\delta^k\to 0$, $E^k\to 0$ as $k\to\infty$, and there exists $K\subseteq\{1,2,\dots\}$ such that $G^k\to 0$ as $K\ni k\to\infty$. In particular, if the set $\bigcup_{i=1}^{h}\{j\in J_i^k \mid \alpha_{ij}^k>0\}\cup\{j\in J^k \mid \alpha_j^k>0\}$ is uniformly bounded in $k$, the conclusions of Lemma 4.1 hold for $\bar{x}=\hat{x}$.

    Proof. Since $\hat{x}^k=\hat{x}$ for $k>\bar{k}$, we have $\hat{f}^k=\hat{f}$, $\hat{c}^k=\hat{c}$. Define the optimal value of subproblem (3.15),

    $$\psi_k:=\hat{H}^k(x^{k+1})+\frac{\|d^k\|^2}{2t_k}. \tag{4.10}$$

    By (3.19) we obtain that

    $$\psi_k\le\psi_k+\frac{\|d^k\|^2}{2t_k}=A^k(\hat{x})+\langle G^k,d^k\rangle+\frac{\|d^k\|^2}{t_k}=A^k(\hat{x})\le\hat{H}(\hat{x}), \tag{4.11}$$

    so the sequence $\{\psi_k\}$ is bounded. Since

    $$\begin{aligned}\psi_{k+1}&=\hat{H}^{k+1}(x^{k+2})+\frac{\|d^{k+1}\|^2}{2t_{k+1}}\ge A^k(x^{k+2})+\frac{\|d^{k+1}\|^2}{2t_k}=\hat{H}^k(x^{k+1})+\langle G^k,x^{k+2}-x^{k+1}\rangle+\frac{\|d^{k+1}\|^2}{2t_k}\\&=\psi_k-\frac{\|d^k\|^2}{2t_k}-\frac{1}{t_k}\langle d^k,d^{k+1}-d^k\rangle+\frac{\|d^{k+1}\|^2}{2t_k}=\psi_k+\frac{\|d^k-d^{k+1}\|^2}{2t_k}\ge\psi_k, \end{aligned}\tag{4.12}$$

    the sequence $\{\psi_k\}$ is nondecreasing; therefore, it converges. Taking into account that $t_k\le t_{\bar{k}}$, it follows that

    $$\|d^{k+1}-d^k\|\to 0,\quad k\to\infty. \tag{4.13}$$

    By the definition of $\delta^k$ and the equivalent expression for $E^k$, we have that

    $$\tilde{H}(\hat{x})=\delta^k+\hat{H}(\hat{x})-E^k-t_k\|G^k\|^2=\delta^k+\hat{H}^k(\hat{x}+d^k); \tag{4.14}$$

    therefore,

    $$\delta^{k+1}=\tilde{H}(\hat{x})-\hat{H}^{k+1}(\hat{x}+d^{k+1}). \tag{4.15}$$

    According to assumption (3.27),

    $$\hat{H}^{k+1}(\hat{x}+d^{k+1})\ge\max\{0,\hat{c}^{k+1}\}-a_{k+1}^{k+1}+\langle s_{k+1}^{k+1},d^{k+1}\rangle. \tag{4.16}$$

    Since $\hat{c}=\hat{c}^{k+1}$, adding (4.9) to (4.16) and using the null-step test in Step 2, we have

    $$m\delta^k+\langle s_{k+1}^{k+1},d^k-d^{k+1}\rangle\ge\max\{0,\hat{c}\}-\hat{H}^{k+1}(\hat{x}+d^{k+1}).$$

    Noting that $\tilde{H}(\hat{x})=\max\{0,\hat{c}\}$, and combining this relation with (4.15), yields

    $$0\le\delta^{k+1}\le m\delta^k+\langle s_{k+1}^{k+1},d^k-d^{k+1}\rangle. \tag{4.17}$$

    The rest of the proof is very similar to that of Theorem 7 in [26], so we omit it.

    Remark: According to Theorem 3.1, under the conditions that the objective functions are $f^o$-pseudoconvex, the constraint function is $f^o$-quasiconvex, and the constraint qualification (2.2) is valid, if $x^{*}$ satisfies $x^{*}=\operatorname*{argmin}_{x\in\mathbb{R}^n}H(x,x^{*})$, then $x^{*}$ is a global weak Pareto optimum of problem (2.1). Lemma 4.1 ensures that, under mild conditions, the cluster point $\bar{x}$ of the stability centers satisfies the approximate stationarity condition $0\in\partial H(\bar{x},\bar{x})+2B_{\bar{\theta}}(0)$, i.e., $\bar{x}$ is an approximate global weak Pareto optimum of problem (2.1). Regardless of whether the proposed RBTA generates an infinite number of bounded serious steps or a finite number of serious steps followed by infinitely many null steps, the conclusions of Lemma 4.1 hold; in other words, an approximate global weak Pareto optimum of problem (2.1) can be obtained.

    I construct a new cutting-plane model for approximating the nonconvex functions in multiobjective optimization and develop a new redistributed proximal bundle algorithm. First and foremost, the algorithm based on the new model generates approximate proximal points, computed by a variation of the algorithm presented in [20], in which proximal points of a special cutting-plane model yield increasingly accurate approximations of the exact proximal points; moreover, I generalize the unconstrained setting to the constrained case with a Lipschitz constraint function. At the same time, the local convexification model gives new insight into the first-order models of [33]. Secondly, for multiobjective optimization with nonsmooth nonconvex functions, the multiple objective functions are treated individually via the improvement function, without employing any scalarization, which is the conventional technique and can be found in [34]. Similar multiobjective optimization problems were studied in [35,36], which introduce strategies for cutting-plane methods to cope with multiobjective problems without any scalarization procedure, but the methods presented there require exact information about the objective and constraint functions. Compared with the results obtained in [34], even though under some generalized convexity assumptions that method can be proved to find a globally weak Pareto optimal solution, it requires evaluating exact function values without any errors, which limits its applicability. Note that in some cases computing the exact function value is not easy; for instance, in the Lagrangian relaxation setting, if $f$ is a max-type function of the form $f(y)=\sup\{F_z(y) \mid z\in Z\}$, where each $F_z(y)$ is convex and $Z$ is an infinite set, then it may be impossible to calculate $f(y)$ exactly, since $f$ itself is defined by an optimization problem involving another function $F$. The assumptions of approximate subgradients and approximate function values are therefore realistic. Also note that the convergence result is indeed weaker than what can be obtained by other methods [11,15,24]. This is quite natural, as the convex case takes advantage of tools (like the subgradient inequality) that are not available in our nonconvex setting.

    The author declares no conflict of interest.



    [1] J. Outrata, M. Kočvara, J. Zowe, Nonsmooth approach to optimization problems with equilibrium constraints: Theory, applications and numerical results, Springer Science & Business Media, 1998.
    [2] J. J. Moreau, P. D. Panagiotopoulos, G. Strang (Eds.), Topics in nonsmooth mechanics, Basel: Birkhäuser Verlag, 1988.
    [3] K. M. Miettinen, Nonlinear multiobjective optimization, Springer Science & Business Media, 1999.
    [4] K. Miettinen, M. M. Mäkelä, Interactive bundle-based method for nondifferentiable multiobjective optimization: NIMBUS, Optimization, 34 (1995), 231–246. https://doi.org/10.1080/02331939508844109
    [5] H. Mukai, Algorithms for multicriterion optimization, IEEE Trans. Automat. Contr., 25 (1980), 177–186. https://doi.org/10.1109/TAC.1980.1102298
    [6] R. E. Steuer, Multiple criteria optimization: Theory, computation and application, New York: Wiley, 1986.
    [7] K. H. Han, J. H. Kim, Genetic quantum algorithm and its application to combinatorial optimization problem, In: Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No.00TH8512), 2000. https://doi.org/10.1109/CEC.2000.870809
    [8] L. Banchi, J. Pereira, S. Lloyd, S. Pirandola, Convex optimization of programmable quantum computers, npj Quantum Inf., 6 (2020), 42. https://doi.org/10.1038/s41534-020-0268-2
    [9] C. Lemaréchal, Lagrangian relaxation, In: Computational combinatorial optimization, Berlin: Springer, 2001. https://doi.org/10.1007/3-540-45586-8_4
    [10] C. Sagastizábal, Divide to conquer: Decomposition methods for energy optimization, Math. Program., 134 (2012), 187–222. https://doi.org/10.1007/s10107-012-0570-7
    [11] C. Sagastizábal, M. Solodov, An infeasible bundle method for nonsmooth convex constrained optimization without a penalty function or a filter, SIAM J. Optim., 16 (2005), 146–169. https://doi.org/10.1137/040603875
    [12] K. C. Kiwiel, A linearization algorithm for nonsmooth minimization, Math. Oper. Res., 10 (1985), 185–194. https://doi.org/10.1287/moor.10.2.185
    [13] M. M. Mäkelä, P. Neittaanmäki, Nonsmooth optimization: Analysis and algorithms with applications to optimal control, World Scientific, 1992.
    [14] K. C. Kiwiel, Restricted step and Levenberg-Marquardt techniques in proximal bundle methods for nonconvex nondifferentiable optimization, SIAM J. Optim., 6 (1996), 227–249. https://doi.org/10.1137/0806013
    [15] L. Lukšan, J. Vlček, A bundle-Newton method for nonsmooth unconstrained minimization, Math. Program., 83 (1998), 373–391. https://doi.org/10.1007/BF02680566
    [16] J. Vlček, L. Lukšan, Globally convergent variable metric method for nonconvex nondifferentiable unconstrained minimization, J. Optimiz. Theory App., 111 (2001), 407–430. https://doi.org/10.1023/A:1011990503369
    [17] A. Fuduli, M. Gaudioso, G. Giallombardo, Minimizing nonconvex nonsmooth functions via cutting planes and proximity control, SIAM J. Optim., 14 (2004), 743–756. https://doi.org/10.1137/S1052623402411459
    [18] A. Fuduli, M. Gaudioso, G. Giallombardo, A DC piecewise affine model and a bundling technique in nonconvex nonsmooth minimization, Optim. Method. Softw., 19 (2004), 89–102. https://doi.org/10.1080/10556780410001648112
    [19] N. Haarala, K. Miettinen, M. M. Mäkelä, Globally convergent limited memory bundle method for large-scale nonsmooth optimization, Math. Program., 109 (2007), 181–205. https://doi.org/10.1007/s10107-006-0728-2
    [20] W. Hare, C. Sagastizábal, A redistributed proximal bundle method for nonconvex optimization, SIAM J. Optim., 20 (2010), 2442–2473. https://doi.org/10.1137/090754595
    [21] W. Hare, C. Sagastizábal, M. Solodov, A proximal bundle method for nonsmooth nonconvex functions with inexact information, Comput. Optim. Appl., 63 (2016), 1–28. https://doi.org/10.1007/s10589-015-9762-4
    [22] K. C. Kiwiel, An algorithm for nonsmooth convex minimization with errors, Math. Comput., 45 (1985), 173–180. https://doi.org/10.1090/S0025-5718-1985-0790650-5
    [23] M. Hintermüller, A proximal bundle method based on approximate subgradients, Comput. Optim. Appl., 20 (2001), 245–266. https://doi.org/10.1023/A:1011259017643
    [24] W. de Oliveira, C. Sagastizábal, C. Lemaréchal, Convex proximal bundle methods in depth: A unified analysis for inexact oracles, Math. Program., 148 (2014), 241–277. https://doi.org/10.1007/s10107-014-0809-6
    [25] R. Correa, C. Lemaréchal, Convergence of some algorithms for convex minimization, Math. Program., 62 (1993), 261–275. https://doi.org/10.1007/BF01585170
    [26] J. Shen, Z. Q. Xia, L. P. Pang, A proximal bundle method with inexact data for convex nondifferentiable minimization, Nonlinear Anal. Theor., 66 (2007), 2016–2027. https://doi.org/10.1016/j.na.2006.02.039
    [27] J. Shen, X. Q. Liu, F. F. Guo, S. X. Wang, An approximate redistributed proximal bundle method with inexact data for minimizing nonsmooth nonconvex functions, Math. Probl. Eng., 2015 (2015), 215310. https://doi.org/10.1155/2015/215310
    [28] W. Hare, C. Sagastizábal, M. Solodov, A proximal bundle method for nonsmooth nonconvex functions with inexact information, Comput. Optim. Appl., 63 (2016), 1–28. https://doi.org/10.1007/s10589-015-9762-4
    [29] J. B. Jian, C. M. Tang, L. Shi, A feasible point method with bundle modification for nonsmooth convex constrained optimization, Acta Math. Appl. Sin. Engl. Ser., 34 (2018), 254–273. https://doi.org/10.1007/s10255-018-0755-9
    [30] S. Schaible, Quasiconvex, pseudoconvex, and strictly pseudoconvex quadratic functions, J. Optim. Theory Appl., 35 (1981), 303–338. https://doi.org/10.1007/BF00934906
    [31] A. Daniilidis, P. Georgiev, Approximate convexity and submonotonicity, J. Math. Anal. Appl., 291 (2004), 292–301. https://doi.org/10.1016/j.jmaa.2003.11.004
    [32] J. E. Spingarn, Submonotone subdifferentials of Lipschitz functions, Trans. Amer. Math. Soc., 264 (1981), 77–89. https://doi.org/10.1090/S0002-9947-1981-0597868-8
    [33] D. Noll, O. Prot, A. Rondepierre, A proximity control algorithm to minimize nonsmooth and nonconvex functions, Pac. J. Optim., 4 (2008), 569–602. https://hal.archives-ouvertes.fr/hal-00464695
    [34] M. M. Mäkelä, N. Karmitsa, O. Wilppu, Proximal bundle method for nonsmooth and nonconvex multiobjective optimization, In: International Conference for Mathematical Modeling and Optimization in Mechanics (MMOM 2014).
    [35] D. A. G. Vieira, A. C. Lisboa, A cutting-plane method to nonsmooth multiobjective optimization problems, Eur. J. Oper. Res., 275 (2019), 822–829. https://doi.org/10.1016/j.ejor.2018.12.047
    [36] A. Kabgani, M. Soleimani-damaneh, Semi-quasidifferentiability in nonsmooth nonconvex multiobjective optimization, Eur. J. Oper. Res., 299 (2022), 35–45. https://doi.org/10.1016/j.ejor.2021.10.063
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
