This paper deals with robust strong duality for a nonconvex optimization problem with data uncertainty in the constraint. A new weak conjugate function, which is abstract convex, is introduced, and three kinds of robust dual problems are constructed for the primal optimization problem by employing this weak conjugate function: the robust augmented Lagrange dual, the robust weak Fenchel dual and the robust weak Fenchel-Lagrange dual problem. Characterizations of inequality (1.1) in terms of robust abstract perturbational weak conjugate duality are established by using abstract convexity. The results are used to obtain robust strong duality between the nonconvex uncertain optimization problem and the robust dual problems mentioned above; optimality conditions for this nonconvex uncertain optimization problem are also investigated.
Citation: Yanfei Chai. Robust strong duality for nonconvex optimization problem under data uncertainty in constraint[J]. AIMS Mathematics, 2021, 6(11): 12321-12338. doi: 10.3934/math.2021713
Robust optimization problems [4,5,7,8,22,23,29,30,31] and robust duality theory [3,6,10,11,12,13,15,16,18,19,28] have attracted much attention from mathematical researchers. Many works in this area considered convex robust optimization problems: in [6,15] robust Lagrangian strong duality was established in convex optimization, and in [16] a robust Lagrangian strong duality theorem was given whenever the Lagrangian function is convex. Moreover, duality theory based on the conjugate function plays an important role in optimization. In convex analysis, the dual problem is constructed in terms of conjugate functions by using the well-known Legendre-Fenchel transform. Robust classical conjugate duality was presented for convex optimization problems in [18]. Furthermore, in [13], characterizations of the inequality below
p(x)=supv∈VFv(x,0Y)≥l(x) | (1.1) |
in terms of robust abstract perturbational duality were established, where X, Y are locally convex Hausdorff topological vector spaces, V≠∅ is an uncertainty set, Fv:X×Y→ˉR=R∪{±∞} for each v∈V, and l:X→ˉR is a lower semicontinuous proper convex function. The results were then applied to robust DC and robust convex optimization problems, and strong Fenchel duality and strong Lagrangian duality for these classes of robust problems were also obtained.
It is well known that dual problems constructed by using general augmented Lagrangian functions or weak conjugate functions, and strong duality conditions for nonconvex optimization problems, were comprehensively studied by researchers [1,2,13,14,17,20,21,25,27,33]. In particular, the conjugate function theory developed by Azimov and Gasimov in [1] used superlinear functions of the form ⟨x∗,x⟩−c‖x‖ instead of the linear functions ⟨x∗,x⟩ used in convex analysis. They extended the usual definition of the subdifferential by using this class of functions, and established duality relations in terms of so-called weak subdifferentiability of the perturbation function associated with the problem under consideration. By using the weak conjugate function and weak subdifferential given in [1], Küçük et al. in [17] constructed the weak Fenchel conjugate dual problem and the weak Fenchel-Lagrange conjugate dual problem, and presented necessary and sufficient conditions for strong duality between these dual problems and the nonconvex scalar optimization problem. In [33], a duality scheme and strong duality theorems for nonconvex optimization problems were presented, based on the weak conjugate function and the weak subdifferential concept given in [1].
Nevertheless, there are few duality results on nonconvex robust optimization problems in the literature, since it is very hard not only to verify zero duality gap conditions formulated in terms of perturbation and/or dualizing parameterization functions, but also to derive conditions formulated in terms of the objective and constraint functions. Motivated by [13,17,33], the aim of this paper is to formulate robust dual problems by using the weak conjugate function we introduce (see Definition 2.1) and to establish robust strong duality results for the nonconvex uncertain optimization problem. A characterization of the general inequality (1.1) above with uncertainty is established in terms of robust perturbational weak conjugate duality, where we only assume the right-hand function l in (1.1) is abstract convex [9,24], which covers very broad classes of nonconvex functions. The results are then used as key tools to obtain strong duality for the robust augmented Lagrange dual (RDL), the robust weak Fenchel dual (RDwF) and the robust weak Fenchel-Lagrange dual (RDwFL) problems, which are all defined by using the weak conjugate function, and are also applied to investigate optimality conditions for the nonconvex robust optimization problem.
The paper is organized as follows. In Section 2, we recall some notation and introduce preliminary results which will be used in the rest of the paper. In Section 3, we construct three types of robust dual problems for the primal optimization problem by using the weak conjugate function and obtain the corresponding strong duality results by establishing the characterization of inequality (1.1) via robust perturbational weak conjugate duality. In Section 4, we investigate the relations among the optimal objective values of (RDL), (RDwF), (RDwFL) and the robust counterpart (RP) of (UP). Finally, in Section 5, we present necessary and sufficient optimality conditions for (RDL), (RDwF), (RDwFL) and (RP).
In this section, we introduce the definitions of the weak conjugate function, the weak biconjugate function and the weak subdifferential, together with some basic theorems and lemmas about these notions.
Throughout this paper, let X, Y be two locally convex vector spaces with topological dual spaces X∗ and Y∗, endowed with the weak∗ topologies W(X∗,X) and W(Y∗,Y), respectively. Let D⊂Y be a nonempty closed convex cone; the dual cone of D is defined by
D∗={y∗∈Y∗:⟨y∗,y⟩≥0,∀y∈D}, |
where we use the notation ⟨⋅,⋅⟩ for the value of the continuous linear functional y∗∈Y∗ at y∈Y. We use the notation R+={x∣x∈R,x≥0}. We also recall the corresponding concepts and results on (extended) real-valued functions. Let f:G→ˉR, g:G→ˉR be functions defined on a set G⊆X; then the inequality f≤g means that f(x)≤g(x) for all x∈G. The domain and the epigraph of f are
domf={x∈X:f(x)<+∞}, |
and
epif={(x,r)∈X×R:x∈domf,f(x)≤r}, |
respectively. The strict epigraph of f:X→ˉR is the set
epi _{s} f={(x,r)∈X×R:x∈domf,f(x)<r}. |
The function f is said to be proper if domf≠∅ and f(x)≠−∞ for all x∈G. Let H be a set of functions h:G→ˉR. The set supp(f,H)={h∈H∣h≤f} is called the support set of f with respect to H. The function co H f:G→ˉR defined by co H f(x)=sup{h(x)∣h∈supp(f,H)} is called the H−convex hull of f. A function f:G→ˉR is called abstract convex with respect to H (or H−convex) at a point x∈G if there exists a set U⊆supp(f,H) such that f(x)=sup{h(x)∣h∈U}. It is clear that f is H−convex at x if and only if f(x)=co H f(x). If f is H−convex at each point x∈G, then f is called H−convex on G.
Let L be a set of functions defined on a set G. Functions hl,r of the form hl,r(x)=l(x)−r, x∈G, with l∈L and r∈R are called L−affine. Denote by HL the set of all L−affine functions. Denote by ΓX the union of the set of all functions f:G→R∪{+∞} and the function −∞, where −∞(x)=−∞ for all x∈G.
We now introduce the definitions of the new weak conjugate and weak biconjugate functions. First, we need a function σ for these definitions. It is assumed that σ:Y→R+ is a continuous function with the following properties:
σ(0)=0,σ(y)≠0ify≠0. | (2.1) |
Definition 2.1. (a) A function fw:X∗×R+→ˉR defined by
fw(x∗,c)=supx∈X{⟨x∗,x⟩−cσ(x)−f(x)} |
is called the weak conjugate function of f. This function is HX−convex;
(b) The function fww:X→ˉR defined by
fww(x)=sup(x∗,c)∈X∗×R+{⟨x∗,x⟩−cσ(x)−fw(x∗,c)} |
is called the weak biconjugate function of f. This function is HX∗×R+−convex. A classical result of abstract convex analysis states that f∈ΓX is abstract convex with respect to HX∗×R+ at a point x if and only if f(x)=fww(x).
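The conjugation scheme in Definition 2.1 can be illustrated numerically. The sketch below is our own illustration (not from the paper): R is truncated to finite grids, σ(x)=|x|, and the suprema are taken over the grids; it checks that fww≤f always, with equality for a function that is itself H-affine.

```python
import numpy as np

# Grid-based sketch of Definition 2.1 (our illustration, with R truncated to
# finite grids and sigma(x) = |x|): compute f^w and f^ww and check f^ww <= f,
# with equality when f itself is H-affine.
xs = np.linspace(-3.0, 3.0, 241)        # grid standing in for X = R
xstars = np.linspace(-2.0, 2.0, 81)     # grid standing in for X*
cs = np.linspace(0.0, 3.0, 61)          # grid standing in for R_+
sigma = np.abs

def weak_conjugate(fvals):
    # f^w(x*, c) = sup_x { <x*, x> - c*sigma(x) - f(x) }
    return np.array([[np.max(xstar * xs - c * sigma(xs) - fvals) for c in cs]
                     for xstar in xstars])

def weak_biconjugate(fw):
    # f^ww(x) = sup_{(x*, c)} { <x*, x> - c*sigma(x) - f^w(x*, c) }
    out = np.empty_like(xs)
    for i, x in enumerate(xs):
        out[i] = np.max(xstars[:, None] * x - cs[None, :] * sigma(x) - fw)
    return out

f1 = -np.abs(xs)       # of the form <0, x> - 1*sigma(x) - 0, hence H-affine
f2 = np.cos(3.0 * xs)  # a genuinely nonconvex sample function
for fvals in (f1, f2):
    assert np.all(weak_biconjugate(weak_conjugate(fvals)) <= fvals + 1e-9)
# For the H-affine f1 the biconjugate recovers f1 (abstract convexity at each x):
assert np.max(np.abs(weak_biconjugate(weak_conjugate(f1)) - f1)) < 1e-9
```

On a common grid the inequality fww≤f holds exactly, because the supremum defining fw always includes the point x at which fww is evaluated.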
Remark 2.1. In the definition of the weak conjugate function, if c=0, then fw(x∗,0)=f∗(x∗), where f∗(x∗) is the classical conjugate function; if σ(x)=‖x‖, then fw(x∗,c) reduces to fw(0,x∗,c) in [1].
Definition 2.2. Let X be a locally convex vector space, let f:X→R be a single-valued function and let x0∈X be a point at which f(x0) is finite. A pair (x∗,c)∈X∗×R+ is called a weak subgradient of f at x0 if
f(x)−f(x0)≥⟨x∗,x−x0⟩−cσ(x−x0)forallx∈X. |
The set ∂wf(x0)={(x∗,c)∈X∗×R+∣f(x)−f(x0)≥⟨x∗,x−x0⟩−cσ(x−x0),∀x∈X} of all weak subgradients of f at x0 is called the weak subdifferential of f at x0. If ∂wf(x0)≠∅, then f is called weakly subdifferentiable at x0.
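As a small numerical companion to Definition 2.2 (our own 1-D example with σ(x)=|x|, checked on a grid rather than all of X): for the concave function f(x)=−|x|, the pair (x∗,c)=(0,1) is a weak subgradient at x0=0, while no classical subgradient (the case c=0) exists there.

```python
import numpy as np

# Hedged grid check of Definition 2.2 with sigma(x) = |x|: (0, 1) lies in
# the weak subdifferential of f(x) = -|x| at x0 = 0, whereas the classical
# subdifferential (c = 0) is empty there.
xs = np.linspace(-5.0, 5.0, 1001)
f = lambda x: -np.abs(x)
x0 = 0.0

def is_weak_subgradient(xstar, c, tol=1e-9):
    # f(x) - f(x0) >= <x*, x - x0> - c*sigma(x - x0) for all grid points x
    lhs = f(xs) - f(x0)
    rhs = xstar * (xs - x0) - c * np.abs(xs - x0)
    return bool(np.all(lhs >= rhs - tol))

assert is_weak_subgradient(0.0, 1.0)            # (0, 1) in the weak subdifferential
assert not any(is_weak_subgradient(xstar, 0.0)  # no classical subgradient at 0
               for xstar in np.linspace(-3.0, 3.0, 61))
```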
Remark 2.2. If σ(x)=‖x‖, then Definition 2.2 reduces to the corresponding definition in [1].
Consider the following optimization problem with an uncertain parameter in the constraint:
(UP)infx∈Q{f(x)∣g(x,v)∈−D}, |
where f:X→ˉR and g:X×Z→Y are given functions, Z is another locally convex vector space, Q⊂X is a nonempty closed set, and v is an uncertain parameter belonging to V⊆Z.
For each v∈V, we denote
Sv={x∈Q∣g(x,v)∈−D}. |
In this paper, the robust optimization approach is applied to (UP). We now associate with (UP) its robust counterpart
(RP)infx∈Q{f(x)∣g(x,v)∈−D,∀v∈V}. |
We denote the feasible set of (RP) by
S={x∈Q∣g(x,v)∈−D,∀v∈V}=⋂v∈VSv. |
The problem (RP) is called the robust primal problem of (UP). The infimum for problem (RP) is denoted by inf(RP) and every element x∈S such that f(x)=inf(RP) is called a robust solution of (UP) (or a solution of (RP)).
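A toy computation may help illustrate the robust feasible set (our own instance, not from the paper): with Q truncated to [−2,2], g(x,v)=vx, D=R+ and finitely many sampled scenarios v∈[−1,1], the intersection S=⋂v∈VSv collapses to a single point.

```python
import numpy as np

# Toy illustration (our own instance): Q truncated to [-2, 2] on a grid,
# g(x, v) = v*x, D = R_+, and V = [-1, 1] sampled at finitely many scenarios.
Q = np.linspace(-2.0, 2.0, 401)
V = np.linspace(-1.0, 1.0, 21)

# S = intersection over v of S_v = {x in Q : g(x, v) in -D}, i.e. v*x <= 0.
# A small tolerance guards against floating-point dust on the grid.
S = Q[np.all([v * Q <= 1e-12 for v in V], axis=0)]

assert len(S) == 1 and abs(S[0]) <= 1e-9   # robustness leaves only x = 0
```

Each individual Sv here is a half-line, yet robustness over all of V leaves only x=0; a similar effect appears in Example 1 below.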
The Lagrange perturbation function of (UP) is F:V×X×Y→ˉR defined as follows:
Fv(x,y)={f(x),g(x,v)+y∈−D,x∈Q+∞,otherwise. | (2.2) |
The weak conjugate function of Fv is Fwv:X∗×R+×Y∗×R+→ˉR given by
Fwv(x∗,c,y∗,d)=sup(x,y)∈X×Y{⟨x∗,x⟩−cσ(x)+⟨y∗,y⟩−dσ(y)−Fv(x,y)}=supx∈Qsupy∈−D−g(x,v){⟨x∗,x⟩−cσ(x)+⟨y∗,y⟩−dσ(y)−f(x)} | (2.3) |
for all (x∗,c,y∗,d)∈X∗×R+×Y∗×R+.
Remark 2.3. It follows immediately from the definition of the weak biconjugate function that
Fwwv(x,0)=sup(x∗,c,y∗,d){⟨x∗,x⟩−cσ(x)+⟨y∗,0⟩−dσ(0)−Fwv(x∗,c,y∗,d)}=sup(x∗,c,y∗,d){⟨x∗,x⟩−cσ(x)−sup(x,y){⟨x∗,x⟩−cσ(x)+⟨y∗,y⟩−dσ(y)−Fv(x,y)}}≤sup(x∗,c,y∗,d){⟨x∗,x⟩−cσ(x)−⟨x∗,x⟩+cσ(x)+Fv(x,0)}=Fv(x,0). |
Remark 2.4. Considering (1.1) and the definition of Fv(x,y), we can conclude that
p(x)={f(x),x∈S,+∞,otherwise. |
Let q:X∗×R+→ˉR be the function defined by
q(x∗,c)=infv∈Vinf(y∗,d)∈Y∗×R+Fwv(x∗,c,y∗,d),∀(x∗,c)∈X∗×R+. |
Let Π denote the projection Π:(x∗,c,y∗,d,r)∈X∗×R+×Y∗×R+×R↦(x∗,c,r)∈X∗×R+×R, and let
Λ=⋃v∈VΠ(epiFwv).
Lemma 2.1. Let pw:X∗×R+→ˉR be the weak conjugate function of p. Then pw is lower semicontinuous and convex on X∗×R+.
Proof. By the definition of weak conjugate function, we have
pw(x∗,c)=supx∈X{⟨x∗,x⟩−cσ(x)−p(x)}=supx∈X{⟨x∗,x⟩−(p+cσ)(x)}=(p+cσ)∗(x∗), |
where (p+cσ)(x)=p(x)+cσ(x). For each fixed x∈X, the function (x∗,c)↦⟨x∗,x⟩−cσ(x)−p(x) is affine and continuous on X∗×R+, so pw, being a pointwise supremum of such functions, is lower semicontinuous and convex on X∗×R+. The proof is complete.
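Lemma 2.1 can be sanity-checked numerically. The following sketch (our own illustration; X truncated to a grid, σ(x)=|x|, and a nonconvex sample p) verifies both the identity pw(x∗,c)=(p+cσ)∗(x∗) and midpoint convexity of pw in (x∗,c).

```python
import numpy as np

# Grid-based sanity check of Lemma 2.1 (a sketch under truncation of X to a
# finite grid, sigma(x) = |x|): p^w(x*, c) equals the classical conjugate
# (p + c*sigma)^*(x*), and p^w is convex in (x*, c).
xs = np.linspace(-4.0, 4.0, 801)
p = np.sin(xs) - 0.3 * xs ** 2          # an arbitrary nonconvex sample p

def pw(xstar, c):
    # p^w(x*, c) = sup_x { <x*, x> - c*sigma(x) - p(x) }
    return np.max(xstar * xs - c * np.abs(xs) - p)

for xstar in (-1.0, 0.0, 0.7):
    for c in (0.0, 0.5, 2.0):
        classical = np.max(xstar * xs - (p + c * np.abs(xs)))
        assert abs(pw(xstar, c) - classical) < 1e-9

# Midpoint convexity on a few pairs: p^w is a sup of functions affine in (x*, c).
for (a, b) in [((-1.0, 0.2), (1.5, 2.0)), ((0.3, 0.0), (-0.7, 1.0))]:
    mid = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    assert pw(*mid) <= (pw(*a) + pw(*b)) / 2.0 + 1e-9
```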
The following lemmas generalize [[13], Lemmas 2.1 and 2.2].
Lemma 2.2. One has
(i) qw=supv∈VFwwv(⋅,0Y)≤p;
(ii) pw≤qww≤q;
(iii) episq=⋃v∈VΠ(episFwv);
(iv)Λ⊂epiq⊂epipw and ¯coΛ⊂epipw.
Proof. For any x∈X, from the definition of qw one has
qw(x)=sup(x∗,c)∈X∗×R+{⟨x∗,x⟩−cσ(x)−q(x∗,c)}=sup(x∗,c)∈X∗×R+{⟨x∗,x⟩−cσ(x)−infv∈Vinf(y∗,d)∈Y∗×R+Fwv(x∗,c,y∗,d)}=supv∈Vsup(x∗,c)∈X∗×R+(y∗,d)∈Y∗×R+{⟨x∗,x⟩−cσ(x)+⟨y∗,0⟩−dσ(0)−Fwv(x∗,c,y∗,d)}=supv∈VFwwv(x,0Y). | (2.4) |
Since Fwwv(x,0Y)≤Fv(x,0Y)≤p(x) for all v∈V and x∈X, (2.4) yields qw(x)≤p(x) and (i) holds, while (ii) follows from (i).
Proof of (iii). Take (x∗,c,r)∈episq. Then
infv∈V(y∗,d)∈Y∗×R+Fwv(x∗,c,y∗,d)<r, |
which implies there exist ˉv∈V and (ˉy∗,ˉd)∈Y∗×R+ such that
Fwˉv(x∗,c,ˉy∗,ˉd)<r, |
so (x∗,c,ˉy∗,ˉd,r)∈episFwˉv and (x∗,c,r)∈Π(episFwˉv)⊂⋃v∈VΠ(episFwv), which means episq⊂⋃v∈VΠ(episFwv).
On the other hand, take (x∗,c,r)∈⋃v∈VΠ(episFwv); then there exists ˉv∈V such that (x∗,c,r)∈Π(episFwˉv). By the definition of Π, there is (ˉy∗,ˉd)∈Y∗×R+ such that (x∗,c,ˉy∗,ˉd,r)∈episFwˉv, and so (x∗,c,r)∈episq as
q(x∗,c)=infv∈Vinf(y∗,d)∈Y∗×R+Fwv(x∗,c,y∗,d)≤Fwˉv(x∗,c,ˉy∗,ˉd)<r. |
Thus, episq⊃⋃v∈VΠ(episFwv), which, together with the inclusion above, proves that (iii) holds.
Proof of (iv). Take any (x∗,c,r)∈Λ; then there exist ˉv∈V and (ˉy∗,ˉd)∈Y∗×R+ such that (x∗,c,ˉy∗,ˉd,r)∈epiFwˉv. Since
q(x∗,c)=infv∈Vinf(y∗,d)∈Y∗×R+Fwv(x∗,c,y∗,d)≤Fwˉv(x∗,c,ˉy∗,ˉd)≤r, |
it follows that (x∗,c,r)∈epiq, and hence Λ⊂epiq. By (ii), epiq⊂epipw, and since epipw is closed and convex by Lemma 2.1, ¯coΛ⊂epipw.
Lemma 2.3. Assume that there exists ˉx∈X such that supv∈VFwwv(ˉx,0Y)<+∞. Then one has epiqww=¯coΛ. Moreover, the following statements are equivalent:
(i) pww=supv∈VFwwv(⋅,0Y);
(ii) epipw=¯coΛ.
Proof. Observe from (2.4) that qw=supv∈VFwwv(⋅,0Y), and so, by assumption, domqw≠∅. According to [34], epiqww=¯co(epiq), which, together with Lemma 2.2 (iii), implies
¯co(epiq)=¯co(episq)=¯co(⋃v∈VΠepisFwv)=¯co(⋃v∈VΠepiFwv)=¯coΛ. |
For the equivalence of (i) and (ii), note that in light of Lemma 2.2 (i), statement (i) is equivalent to pww=qw, which implies that qww=pw. This last equality and epiqww=¯coΛ show that epipw=epiqww=¯coΛ, which is (ii). The proof is complete.
The aim of this section is to construct three types of robust dual problems for (UP) by using the weak conjugate function (the robust augmented Lagrange dual, the robust weak Fenchel dual and the robust weak Fenchel-Lagrange dual problem), to establish the characterization of inequality (1.1) in terms of robust abstract perturbational weak conjugate duality, and finally to employ these results to obtain robust strong duality results for (UP).
To define an augmented Lagrange function for (UP), we need augmented function σ to be a continuous function with the properties (2.1). For each fixed v∈V, the uncertain augmented Lagrange function associated with (UP) is given by
Lv(x,y∗,d)=infy∈Y{Fv(x,y)−⟨y∗,y⟩+dσ(y)}={infy∈−D−g(x,v){f(x)−⟨y∗,y⟩+dσ(y)},x∈Q+∞,otherwise, |
for x∈X, y∗∈Y∗ and d∈R+, where the function Fv(x,y) is defined in (2.2). By using the definition of Fv(x,y), we can make the augmented Lagrange function associated with (UP) concrete:
Lv(x,y∗,d)=infy∈−D−g(x,v){f(x)−⟨y∗,y⟩+dσ(y)}, |
for x∈Q, y∗∈Y∗ and d∈R+.
The uncertain dual function of (UP) is
ϕv(y∗,d)=infx∈QLv(x,y∗,d), for (y∗,d)∈Y∗×R+. |
Then uncertain augmented Lagrange dual problem of (UP) is defined as
(UDL)sup(y∗,d)∈Y∗×R+ϕv(y∗,d). |
The optimistic counterpart of the uncertain augmented Lagrange dual (UDL) is a deterministic maximization problem given by
(RDL)sup(v,y∗,d)∈V×Y∗×R+ϕv(y∗,d)=sup(v,y∗,d)∈V×Y∗×R+infx∈QLv(x,y∗,d)=sup(v,y∗,d)∈V×Y∗×R+infx∈Qinfy∈−D−g(x,v){f(x)−⟨y∗,y⟩+dσ(y)}. |
Now, taking x∗=0X∗ and c=0 in (2.3), the function Fwv(0,0,y∗,d) takes the form
Fwv(0,0,y∗,d)=supx∈Qsupy∈−D−g(x,v){⟨y∗,y⟩−dσ(y)−f(x)}. |
Hence,
−Fwv(0,0,y∗,d)=infx∈Qinfy∈−D−g(x,v){f(x)−⟨y∗,y⟩+dσ(y)}. |
As a result, the robust augmented Lagrange dual problem for (UP) with respect to Fv can be given by
(RDL)sup(y∗,d)∈Y∗×R+supv∈V{−Fwv(0,0,y∗,d)}. |
The supremum for problem (RDL) is denoted by sup(RDL), and any element (v,y∗,d)∈V×Y∗×R+ such that −Fwv(0,0,y∗,d)=sup(RDL) is called a solution of (RDL).
Theorem 3.1. (Weak duality) sup(RDL)≤inf(RP).
Proof. For arbitrary (v,y∗,d)∈V×Y∗×R+,
−Fwv(0,0,y∗,d)=−supx∈Qsupy∈−D−g(x,v){⟨y∗,y⟩−dσ(y)−f(x)}=infx∈Qinfy∈−D−g(x,v){f(x)−⟨y∗,y⟩+dσ(y)}≤infx∈Svf(x)≤infx∈Sf(x)=inf(RP), |
so we conclude that sup(RDL)≤inf(RP).
In the following sections we always assume that Γ(X) is a set of functions defined on X and that H≠∅ is a set of functions; define ΓH(X)={l∈Γ(X)∣l is H-convex on X}.
Theorem 3.2. (Robust abstract perturbational weak conjugate duality) Consider the following statements:
(a1) epipw=Λ;
(b1) For any l∈ΓH(X), the following assertions are equivalent:
(b11) p(x)≥l(x);
(b12) for all (x∗,c)∈X∗×R+,
there exist (v,y∗,d)∈V×Y∗×R+ such that Fwv(x∗,c,y∗,d)≤lw(x∗,c).
One has (a1)⇔(b1).
Proof. (a1)⇒(b1). Assume that (a1) holds and let l∈ΓH(X). If (b11) holds, i.e., p≥l, then pw≤lw, and hence (x∗,c,lw(x∗,c))∈epipw for all (x∗,c)∈domlw. Since (a1) holds, there exists v∈V such that (x∗,c,lw(x∗,c))∈Π(epiFwv), and so there exists (y∗,d)∈Y∗×R+ such that Fwv(x∗,c,y∗,d)≤lw(x∗,c), which implies that (b12) holds.
Conversely, if (b12) holds, then for any (x∗,c)∈domlw there exist (v,y∗,d)∈V×Y∗×R+ such that Fwv(x∗,c,y∗,d)≤lw(x∗,c). Taking Lemma 2.2 (ii) into account, we get pw(x∗,c)≤q(x∗,c)≤Fwv(x∗,c,y∗,d)≤lw(x∗,c), and hence l=lww≤pww≤p, where the equality l=lww follows from l being H-convex.
(b1)⇒(a1). Assume that (b1) holds; we will show that (a1) holds. To this aim, taking Lemma 2.2 (iv) into account, it is sufficient to prove that
epipw⊂Λ. |
Take any (x∗,c,r)∈epipw. Then ⟨x∗,x⟩−cσ(x)−p(x)≤pw(x∗,c)≤r for all x∈X. Now set l(x)=⟨x∗,x⟩−cσ(x)−r; then l∈ΓH(X) and l≤p, so by (b1) there exist (v,y∗,d)∈V×Y∗×R+ such that Fwv(x∗,c,y∗,d)≤lw(x∗,c)≤r. This shows that (x∗,c,r)∈Π(epiFwv) and hence (x∗,c,r)∈Λ.
Remark 3.1. Theorem 3.2 generalizes [[13], Theorem 3.1]. In [13] the authors used the classical conjugate function and assumed the right-hand function l(x) of inequality (1.1) to be convex and lower semicontinuous, whereas in Theorem 3.2 of this paper we employ the weak conjugate function and only assume l(x) is abstract convex, which covers very broad classes of nonconvex functions.
Theorem 3.3. (Strong duality) Let epipw=Λ, then there exist (v0,y∗0,d0)∈V×Y∗×R+ such that (v0,y∗0,d0) is a solution of (RDL) and sup(RDL)=inf(RP).
Proof. Let l be the constant function l(x)=infx∈Xp(x); then l(x)=inf(RP) for all x∈X. From the assumption epipw=Λ and taking x∗=0X∗, c=0 in Theorem 3.2, there exist (v0,y∗0,d0)∈V×Y∗×R+ such that
Fwv0(0,0,y∗0,d0)≤lw(0,0)=−inf(RP), |
which is equivalent to
−Fwv0(0,0,y∗0,d0)≥inf(RP), |
considering weak duality of Theorem 3.1, we have
inf(RP)≥sup(RDL)≥−Fwv0(0,0,y∗0,d0)≥inf(RP), |
this yields (v0,y∗0,d0) is a solution of (RDL) and sup(RDL)=inf(RP).
Theorem 3.4. Assume that domp≠∅ and pww=supv∈VFwwv(⋅,0Y). Then the following statements are equivalent:
(f1) Λ=¯coΛ;
(b1) For any l∈ΓH(X), the following assertions are equivalent:
(b11) p(x)≥l(x);
(b12) for all (x∗,c)∈X∗×R+, there exist (v,y∗,d)∈V×Y∗×R+ such that Fwv(x∗,c,y∗,d)≤lw(x∗,c).
Proof. Since pww=supv∈VFwwv(⋅,0Y), Lemma 2.3 and (f1) give epipw=¯coΛ=Λ. The conclusion then follows from Theorem 3.2.
Remark 3.2. Theorem 3.4 gives an equivalent condition for the set Λ to be a closed convex set.
Next we give a sufficient condition for pww=supv∈VFwwv(⋅,0).
Proposition 3.1. Assume that for any (x∗,c)∈X∗×R+, there exists ˉv∈V such that ∂yFˉv(⋅,0Y)≠∅ on Sˉv and supx∈S{⟨x∗,x⟩−cσ(x)−f(x)}=supx∈Sˉv{⟨x∗,x⟩−cσ(x)−f(x)}, then pww=supv∈VFwwv(⋅,0).
Proof. Since for any (x∗,c)∈X∗×R+ there exists ˉv∈V such that ∂yFˉv(⋅,0Y)≠∅ on Sˉv, there exists (y∗,d)∈Y∗×R+ such that (y∗,d)∈∂yFˉv(⋅,0Y), which implies
−Fˉv(x,0)≥−Fˉv(x,y)+⟨y∗,y⟩−dσ(y),x∈Sˉv,y∈Y, |
that is
−Fˉv(x,0)≥⟨y∗,y⟩−dσ(y)−Fˉv(x,y),x∈Sˉv,y∈Y. | (3.1) |
Following from (3.1), we obtain
Fwˉv(x∗,c,y∗,d)=sup(x,y)∈X×Y{⟨x∗,x⟩−cσ(x)+⟨y∗,y⟩−dσ(y)−Fˉv(x,y)}≤supx∈X{⟨x∗,x⟩−cσ(x)−Fˉv(x,0)}=supx∈Sˉv{⟨x∗,x⟩−cσ(x)−f(x)} | (3.2) |
Since supx∈S{⟨x∗,x⟩−cσ(x)−f(x)}=supx∈Sˉv{⟨x∗,x⟩−cσ(x)−f(x)}, together with (3.2), we get
Fwˉv(x∗,c,y∗,d)≤supx∈S{⟨x∗,x⟩−cσ(x)−f(x)}=supx∈X{⟨x∗,x⟩−cσ(x)−p(x)}=pw(x∗,c). | (3.3) |
The first equality above follows from Remark 2.4. Taking (3.3) and the definition of pww(x) into account, one has
pww(x)=sup(x∗,c)∈X∗×R+{⟨x∗,x⟩−cσ(x)−pw(x∗,c)}≤sup(x∗,c)∈X∗×R+{⟨x∗,x⟩−cσ(x)−Fwˉv(x∗,c,y∗,d)}≤supv∈Vsup(x∗,c,y∗,d)∈X∗×R+×Y∗×R+{⟨x∗,x⟩−cσ(x)−Fwv(x∗,c,y∗,d)}=supv∈VFwwv(x,0)≤pww(x). |
This implies pww=supv∈VFwwv(⋅,0). The proof is complete.
Remark 3.3. We note that ∂yFˉv(⋅,0Y)≠∅ on Sˉv is a mild assumption. In fact, we just need to assume that the augmented function σ(y) satisfies σ(y)≥‖y‖ for all y∈Y and take (y∗,d)∈Y∗×R+ such that ‖y∗‖≤d; then (3.1) holds from the definitions of Fˉv(x,0) and Fˉv(x,y) (see (2.2)), so we can conclude that (y∗,d)∈∂yFˉv(⋅,0Y), which implies ∂yFˉv(⋅,0Y)≠∅.
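Remark 3.3 ultimately rests on the elementary inequality dσ(y)−⟨y∗,y⟩≥0 whenever σ(y)≥‖y‖ and ‖y∗‖≤d, since the finite values of Fˉv(x,⋅) equal the constant f(x) when x∈Sˉv. A one-dimensional grid check of this inequality (our own sketch, with σ(y)=|y|):

```python
import numpy as np

# 1-D sketch of Remark 3.3 (sigma(y) = |y|, a grid standing in for Y = R):
# whenever |y*| <= d, we have d*sigma(y) - <y*, y> >= 0 for all y, which is
# what is needed for (y*, d) to be a weak subgradient of F_v(x, .) at 0_Y
# for each x in S_v.
ys = np.linspace(-10.0, 10.0, 2001)
for ystar, d in [(0.5, 0.5), (-1.0, 1.0), (0.3, 2.0)]:
    assert abs(ystar) <= d                               # hypothesis of Remark 3.3
    assert np.all(d * np.abs(ys) - ystar * ys >= -1e-12)
```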
Remark 3.4. In the proof of the strong duality Theorem 3.3, our sufficient condition epipw=Λ is different from existing conditions. In Duality Theorem 11.59 of [26], the dualizing parameterization ϕ(x,y) was assumed to be level-bounded in x locally uniformly in y; in the duality theorems of [1,32], the perturbation function h(y)=infxϕ(x,y) was assumed to be proper and weakly subdifferentiable at the origin 0∈Y. All these conditions were formulated in terms of the dualizing parameterization or perturbation functions associated with the given problem.
Recall the assumption epipw=Λ in Theorem 3.3, which employs the epigraph of the function Fwv defined by (2.3); it is also related to the dualizing parameterization Fv(x,y), and we know that epipw=¯coΛ is easy to satisfy from Lemma 2.3, Proposition 3.1 and Remark 3.3. Moreover, Theorem 3.4 gives an equivalent condition for the set Λ to be closed and convex, so it is worth further exploration to find sufficient conditions ensuring that Λ is not only a closed convex set but is also expressed only in terms of the objective function and the constraint function.
The Fenchel perturbation function of (UP) is F:V×X×X→ˉR defined as
Fv(x,u)={f(x+u),g(x,v)∈−D,x∈Q,+∞,otherwise. |
The weak conjugate function of Fv is Fwv:X∗×R+×X∗×R+→ˉR defined as
Fwv(x∗,c,u∗,d)=sup(x,u)∈X×X{⟨x∗,x⟩−cσ(x)+⟨u∗,u⟩−dσ(u)−Fv(x,u)}=supx∈Svsupu∈X{⟨x∗,x⟩−cσ(x)+⟨u∗,u⟩−dσ(u)−f(x+u)}=supx∈Svsupγ∈X{⟨x∗,x⟩−cσ(x)+⟨u∗,γ−x⟩−dσ(γ−x)−f(γ)}, |
for x∗,u∗∈X∗ and c,d∈R+, where γ=x+u. By choosing x∗=0X∗, c=0, we have
Fwv(0,0,u∗,d)=supx∈Svsupγ∈X{⟨u∗,γ−x⟩−dσ(γ−x)−f(γ)}. |
Hence, the robust weak Fenchel dual problem (RDwF) with respect to Fv is defined as
(RDwF)sup(u∗,d)∈X∗×R+supv∈V{−Fwv(0,0,u∗,d)}=sup(u∗,d)∈X∗×R+supv∈Vinfx∈Svinfγ∈X{f(γ)−⟨u∗,γ−x⟩+dσ(γ−x)}. |
The supremum for problem (RDwF) is denoted by sup(RDwF), and any element (v,u∗,d)∈V×X∗×R+ such that −Fwv(0,0,u∗,d)=sup(RDwF) is called a solution of (RDwF).
Remark 3.5. sup(RDwF)≤inf(RP) follows immediately from the definition of Fwv(0,0,u∗,d).
Let the projection Π1:(x∗,c,u∗,d,r)∈X∗×R+×X∗×R+×R→(x∗,c,r)∈X∗×R+×R and let
Λ1=⋃v∈VΠ1(epiFwv) |
Theorem 3.5. (Strong duality) Let epipw=Λ1; then there exists (v0,u∗0,d0)∈V×X∗×R+ such that (v0,u∗0,d0) is a solution of (RDwF) and sup(RDwF)=inf(RP).
Proof. The proof is similar to that of Theorem 3.3.
The Fenchel-Lagrange perturbation function of (UP) is F:V×X×X×Y→ˉR defined as
Fv(x,u,y)={f(x+u),g(x,v)+y∈−D,x∈Q,+∞,otherwise. |
The weak conjugate function of Fv(x,u,y) is Fwv:X∗×R+×X∗×R+×Y∗×R+→ˉR defined as
Fwv(x∗,c,u∗,d,y∗,e)=sup(x,u,y)∈X×X×Y{⟨x∗,x⟩−cσ(x)+⟨u∗,u⟩−dσ(u)+⟨y∗,y⟩−eσ(y)−Fv(x,u,y)}=supx∈Qu∈Xsupy∈−D−g(x,v){⟨x∗,x⟩−cσ(x)+⟨u∗,u⟩−dσ(u)+⟨y∗,y⟩−eσ(y)−f(x+u)}=supx∈Qγ∈Xsupy∈−D−g(x,v){⟨x∗,x⟩−cσ(x)+⟨u∗,γ−x⟩−dσ(γ−x)+⟨y∗,y⟩−eσ(y)−f(γ)}, |
where γ=x+u. By choosing x∗=0X∗, c=0 and d=e, we have
Fwv(0,0,u∗,d,y∗,d)=supx∈Qγ∈Xsupy∈−D−g(x,v){⟨u∗,γ−x⟩−dσ(γ−x)+⟨y∗,y⟩−dσ(y)−f(γ)}. |
Hence, the robust weak Fenchel-Lagrange dual problem (RDwFL) with respect to Fv is defined as
(RDwFL)sup(u∗,d)∈X∗×R+(y∗,d)∈Y∗×R+supv∈V{−Fwv(0,0,u∗,d,y∗,d)}=sup(u∗,d)∈X∗×R+(y∗,d)∈Y∗×R+supv∈Vinfx∈Qγ∈Xinfy∈−D−g(x,v){f(γ)−⟨u∗,γ−x⟩−⟨y∗,y⟩+dσ(γ−x)+dσ(y)}. |
The supremum for problem (RDwFL) is denoted by sup(RDwFL), and any element (v,u∗,d,y∗,d)∈V×X∗×R+×Y∗×R+ such that −Fwv(0,0,u∗,d,y∗,d)=sup(RDwFL) is called a solution of (RDwFL).
Theorem 3.6. (Weak duality) sup(RDwFL)≤inf(RP).
Proof. For any (v,u∗,d,y∗,d)∈V×X∗×R+×Y∗×R+,
−Fwv(0,0,u∗,d,y∗,d)=infx∈Qγ∈Xinfy∈−D−g(x,v){f(γ)−⟨u∗,γ−x⟩−⟨y∗,y⟩+dσ(γ−x)+dσ(y)}≤infx∈Qinfy∈−D−g(x,v){f(x)−⟨y∗,y⟩+dσ(y)}≤infx∈Svf(x)≤infx∈Sf(x)=inf(RP), |
so we conclude that sup(RDwFL)≤inf(RP).
Let the projection Π2:(x∗,c,u∗,d,y∗,e,r)∈X∗×R+×X∗×R+×Y∗×R+×R→(x∗,c,r)∈X∗×R+×R and let
Λ2=⋃v∈VΠ2(epiFwv). |
Theorem 3.7. (Strong duality) Let epipw=Λ2, then there exists (v,u∗,d,y∗,d)∈V×X∗×R+×Y∗×R+ such that (v,u∗,d,y∗,d) is a solution of (RDwFL) and inf(RP)=sup(RDwFL).
Proof. The proof is similar to that of Theorem 3.3.
In this section, we examine the relations among the objective values of dual problems (RDL), (RDwF) and (RDwFL).
Proposition 4.1. The inequality sup(RDwFL)≤sup(RDL) holds.
Proof. Let (v,u∗,y∗,d) be an arbitrary element of V×X∗×Y∗×R+. It is known that
infx∈Qinfγ∈Xinfy∈−D−g(x,v){f(γ)−⟨u∗,γ−x⟩−⟨y∗,y⟩+dσ(γ−x)+dσ(y)}≤infx∈Qinfy∈−D−g(x,v){f(x)−⟨y∗,y⟩+dσ(y)}≤sup(v,y∗,d)∈V×Y∗×R+infx∈Qinfy∈−D−g(x,v){f(x)−⟨y∗,y⟩+dσ(y)}=sup(RDL). |
As (v,u∗,y∗,d)∈V×X∗×Y∗×R+ is arbitrary element, we get
sup(RDwFL)=sup(v,u∗,y∗,d)infx∈Qinfγ∈Xinfy∈−D−g(x,v){f(γ)−⟨u∗,γ−x⟩−⟨y∗,y⟩+dσ(γ−x)+dσ(y)}≤sup(RDL). |
Proposition 4.2. The inequality sup(RDwFL)≤sup(RDwF) holds.
Proof. Let (v,u∗,y∗,d) be an arbitrary element of V×X∗×Y∗×R+. It is known that 0∈−D−g(x,v) for all x∈Sv, so we have
infx∈Qγ∈Xinfy∈−D−g(x,v){f(γ)−⟨u∗,γ−x⟩−⟨y∗,y⟩+dσ(γ−x)+dσ(y)}≤infx∈Svγ∈Xinfy∈−D−g(x,v){f(γ)−⟨u∗,γ−x⟩−⟨y∗,y⟩+dσ(γ−x)+dσ(y)}≤infx∈Svγ∈X{f(γ)−⟨u∗,γ−x⟩+dσ(γ−x)}≤sup(v,u∗,d)∈V×X∗×R+infx∈Svγ∈X{f(γ)−⟨u∗,γ−x⟩+dσ(γ−x)}=sup(RDwF). |
Hence, taking the supremum in both sides over (v,u∗,y∗,d)∈V×X∗×Y∗×R+, we get
sup(RDwFL)≤sup(RDwF). |
This completes the proof.
Proposition 4.3. Let Λ2⋂({0X∗}×{0}×R+)=epipw⋂({0X∗}×{0}×R+), then sup(RDwFL)=sup(RDwF)=sup(RDL)=inf(RP).
Proof. Under these assumptions and considering Theorem 3.7, it is known that sup(RDwFL)=inf(RP). By Propositions 4.1 and 4.2, we obtain
sup(RDwFL)=sup(RDwF)=sup(RDL)=inf(RP). |
Now, we present an example of a robust optimization problem which illustrates the relationships between the optimal values of the three proposed dual problems.
Example 1. Consider the following one-dimensional optimization with data uncertainty in constraint:
(UP)infx∈Q{f(x)∣g(x,v)=vx≤0}, |
where f:R→R is defined as
f(x)={−|x|, x≤1, −1, x>1,
for all x∈R, Q=R, and the uncertain data v belongs to [−1,1].
In this example, we take the function σ in (2.1) to be σ(x)=|x|. Let us calculate the weak conjugate function of the Lagrange perturbation Fv(x,y) with x∗=0 and c=0. If 0≤v≤1, then Fwv(0,0,y∗,d)=+∞; if −1≤v<0, then
Fwv(0,0,y∗,d)={+∞, y∗−d>0 or y∗+d<0 or 1+(y∗+d)v>0, 1, otherwise,
and hence sup(RDL)=−1.
Let us calculate the weak conjugate function of the Fenchel perturbation Fv(x,u) with x∗=0 and c=0. If 0≤v≤1, then Fwv(0,0,u∗,d)=+∞; if −1≤v<0, then
Fwv(0,0,u∗,d)={+∞, u∗+d−1<0 or u∗−d>0, 1, otherwise.
Then we obtain sup(RDwF)=−1.
We also get sup(RDwFL)=−∞. So we obtain −∞=sup(RDwFL)<sup(RDwF)=sup(RDL)=−1.
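The values in Example 1 can be cross-checked numerically. In the sketch below (our own illustration; the unbounded sets Sv and X are truncated to grids, which is harmless here since the relevant optima are attained at grid points) one admissible dual choice for (RDwF), namely v=−1, u∗=0, d=1, already attains the value −1; note also that the robust feasible set is S={0}, so inf(RP)=f(0)=0 and a duality gap remains in this example.

```python
import numpy as np

# Hedged numerical companion to Example 1 (grids stand in for the real line).
def f(x):
    return np.where(x <= 1, -np.abs(x), -1.0)

# Robust feasible set: v*x <= 0 for every v in [-1, 1] forces x = 0.
inf_RP = f(np.array([0.0]))[0]            # = 0
assert inf_RP == 0.0

# One admissible dual choice for (RDwF): v = -1 (so S_v = [0, +inf)),
# u* = 0, d = 1.  Inner infimum over x in S_v and gamma in X:
x = np.linspace(0.0, 4.0, 401)            # truncation of S_v = [0, +inf)
g = np.linspace(-4.0, 4.0, 801)           # truncation of X = R for gamma
ustar, d = 0.0, 1.0
vals = (f(g)[None, :]
        - ustar * (g[None, :] - x[:, None])
        + d * np.abs(g[None, :] - x[:, None]))
dual_val = vals.min()                     # = -F_v^w(0, 0, u*, d)
assert abs(dual_val - (-1.0)) < 1e-12     # consistent with sup(RDwF) = -1
assert dual_val <= inf_RP                 # weak duality, with a gap here
```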
Remark 4.1. Consider Example 1 and let v≤0. From [[17], Example 1], we know that the classical conjugate function of the Fenchel perturbation Fv(x,u) with x∗=0 is F∗v(0,u∗)=+∞, so we can conclude that the optimal value of the robust Fenchel dual problem in the classical sense (denoted by sup(RD∗F)) is
sup(RD∗F)=supv∈[−1,0]supu∗∈X∗−F∗v(0,u∗)=−∞<sup(RDwF)=−1,
which shows that the weak conjugate function is more likely to guarantee a zero duality gap than the classical conjugate function.
In this section, we give optimality conditions for (RDwF), (RDwFL) and (RP).
Theorem 5.1. Let Λ1⋂({0X∗}×{0}×R+)=epipw⋂({0X∗}×{0}×R+).
(a) If ˉx is a solution of (RP), then there exists a solution (v0,u∗0,d0)∈V×X∗×R+ such that
(i) f(ˉx)=infx∈Sv0infγ∈X{f(γ)−⟨u∗0,γ−x⟩+d0σ(γ−x)},
(ii) (u∗0,d0)∈∂wf(ˉx).
(b) Conversely, if ˉx∈S and (v0,u∗0,d0)∈V×X∗×R+ satisfies conditions (i) and (ii), then ˉx is a solution of (RP) and (v0,u∗0,d0) is a solution of (RDwF).
Proof. (a) Let Λ1⋂({0X∗}×{0}×R+)=epipw⋂({0X∗}×{0}×R+). Theorem 3.5 ensures the existence of an optimal solution (v0,u∗0,d0)∈V×X∗×R+ of (RDwF) with sup(RDwF)=inf(RP). Since ˉx is a solution of (RP), we get
f(ˉx)=infx∈Sv0infγ∈X{f(γ)−⟨u∗0,γ−x⟩+d0σ(γ−x)}, |
which means condition (i) is satisfied.
From (i) we have
f(ˉx)=infx∈Sv0infγ∈X{f(γ)−⟨u∗0,γ−x⟩+d0σ(γ−x)}≤f(γ)−⟨u∗0,γ−ˉx⟩+d0σ(γ−ˉx),∀γ∈X |
which implies (u∗0,d0)∈∂wf(ˉx). Therefore, (ii) is satisfied.
(b) Let (i) be satisfied. Then
inf(RP)≤f(ˉx)=infx∈Sv0infγ∈X{f(γ)−⟨u∗0,γ−x⟩+d0σ(γ−x)}≤sup(v,u∗,d)∈V×X∗×R+infx∈Svinfγ∈X{f(γ)−⟨u∗,γ−x⟩+dσ(γ−x)}=sup(RDwF). |
Considering the above formula and weak duality, it is easy to get the following,
inf(RP)=sup(RDwF)=f(ˉx)=infx∈Sv0infγ∈X{f(γ)−⟨u∗0,γ−x⟩+d0σ(γ−x)}, |
that is, ˉx and (v0,u∗0,d0) are solutions of (RP) and (RDwF) respectively.
Theorem 5.2. Let Λ2⋂({0X∗}×{0}×R+)=epipw⋂({0X∗}×{0}×R+).
(a) If ˉx is a solution of (RP), then there exists a solution (v0,u∗0,d0,y∗0,d0)∈V×X∗×R+×Y∗×R+ such that
(i) f(ˉx)=infx∈Sv0γ∈Xinfy∈−D−g(x,v0){f(γ)−⟨u∗0,γ−x⟩−⟨y∗0,y⟩+d0σ(γ−x)+d0σ(y)};
(ii) (u∗0,d0)∈∂wf(ˉx);
(iii) infy∈−D−g(ˉx,v0){d0σ(y)−⟨y∗0,y⟩}=0.
(b) Conversely, if ˉx∈S and (v0,u∗0,d0,y∗0,d0)∈V×X∗×R+×Y∗×R+ satisfy conditions (i)-(iii), then ˉx is a solution of (RP) and (v0,u∗0,d0,y∗0,d0) is a solution of (RDwFL).
Proof. (a) Let Λ2⋂({0X∗}×{0}×R+)=epipw⋂({0X∗}×{0}×R+). Theorem 3.7 guarantees the existence of an optimal solution (v0,u∗0,d0,y∗0,d0)∈V×X∗×R+×Y∗×R+ of (RDwFL) with sup(RDwFL)=inf(RP). Since ˉx is a solution of (RP), then
f(ˉx)=infx∈Sv0γ∈Xinfy∈−D−g(x,v0){f(γ)−⟨u∗0,γ−x⟩−⟨y∗0,y⟩+d0σ(γ−x)+d0σ(y)}. |
Therefore, (i) is satisfied.
As x∈Sv0 implies 0Y∈−D−g(x,v0), we have infy∈−D−g(x,v0){d0σ(y)−⟨y∗0,y⟩}≤0 for every x∈Sv0, which means
f(ˉx)=infx∈Sv0γ∈Xinfy∈−D−g(x,v0){f(γ)−⟨u∗0,γ−x⟩−⟨y∗0,y⟩+d0σ(γ−x)+d0σ(y)}≤f(γ)−⟨u∗0,γ−ˉx⟩+d0σ(γ−ˉx),∀γ∈X, |
this implies (u∗0,d0)∈∂wf(ˉx), that is (ii) holds.
Considering (i) we also have
f(ˉx)=infx∈Sv0γ∈Xinfy∈−D−g(x,v0){f(γ)−⟨u∗0,γ−x⟩−⟨y∗0,y⟩+d0σ(γ−x)+d0σ(y)}≤f(ˉx)+infy∈−D−g(ˉx,v0){d0σ(y)−⟨y∗0,y⟩}, |
which implies infy∈−D−g(ˉx,v0){d0σ(y)−⟨y∗0,y⟩}≥0. It is obvious that infy∈−D−g(ˉx,v0){d0σ(y)−⟨y∗0,y⟩}≤0 for ˉx∈Sv0, so we get infy∈−D−g(ˉx,v0){d0σ(y)−⟨y∗0,y⟩}=0, the relation (iii) is proved.
The proof of (b) is similar to that of Theorem 5.1 (b).
This paper deals with robust strong duality for a nonconvex optimization problem with data uncertainty in the constraint. We introduce a weak conjugate function, construct three kinds of robust dual problems for the primal problem by employing this weak conjugate function, and then establish robust strong duality between the nonconvex uncertain optimization problem and its robust dual problems. In particular, we note that the optimal value of the robust conjugate dual problem in the classical sense is less than that in the weak conjugate sense (see Remark 4.1), which shows that the weak conjugate function is more likely to guarantee a zero duality gap than the classical conjugate function.
This research was supported by the Education Department of Shaanxi Province (17JK0330). The author is deeply grateful to the referees for their valuable comments and helpful suggestions.
The author declares no conflict of interest.