
Citation: Habibe Sadeghi, Fatemeh Moslemi. A multiple objective programming approach to linear bilevel multi-follower programming[J]. AIMS Mathematics, 2019, 4(3): 763-778. doi: 10.3934/math.2019.3.763
Multilevel programming (MLP) models decentralized decision-making situations in which decision makers are arranged within a hierarchical structure. When only two levels exist, the problem is referred to as a bilevel programming (BLP) problem. In essence, a BLP problem involves two nested optimization problems, where the constraint region of the first-level problem contains the second-level optimization problem. In this model, the decision maker at the first level is called the leader, and the one at the second level the follower. BLP problems have many real-world applications, such as supply chain management [14], planning [16], and logistics [19]. Most of the research on MLP has focused on bilevel programming [1,5]. In this paper, we consider linear bilevel problems with one leader and multiple followers; this model is called the linear bilevel multi-follower programming (LBLMFP) problem. Shi et al. extended the Kth-Best algorithm to solve LBLMFP problems [21]. Several researchers have developed heuristic algorithms for BLP problems; for instance, see [10]. Lu et al. applied the extended Kuhn-Tucker approach to solve the LBLMFP problem [13]. Also, an approach based on multi-parametric programming has been developed to solve LBLMFP problems [7]. In addition, some researchers have exploited multiple objective programming techniques to solve BLP problems [9,18]. These solution approaches build on relationships between BLP and multiple objective programming that were first presented by Fülöp [8]. For example, Glackin et al. presented an algorithm to solve linear bilevel programming (LBLP) problems based on multiple objective linear programming (MOLP) techniques [9].
In this paper, motivated by the relationship between LBLP and MOLP, a relationship between a class of LBLMFP problems and MOLP problems is introduced for the first time. Moreover, we present an algorithm for solving LBLMFP problems based on this relationship.
The paper is organized as follows: In the next section, we present basic definitions and some notions about LBLMFP and MOLP problems. In Section 3, we introduce two MOLP problems such that each feasible solution of the LBLMFP problem is efficient for both of them. Next, we show that the LBLMFP problem can be reduced to the optimization of the first-level objective function over a certain efficient set. Based on this result, a new algorithm for solving the LBLMFP problem is developed in Section 4. Furthermore, we obtain some results for special cases. Section 5 presents a number of numerical examples to illustrate the proposed algorithm. Finally, in Section 6, we present the conclusions.
In this section, we present the formulation of the LBLMFP problem that we investigate, together with basic definitions. In addition, we recall some notions and theoretical properties of MOLP problems.
In this article, we consider a linear bilevel programming problem with two followers. Each follower has its own decision variables, objective function, and constraints, and takes the other follower's decision variables as a reference. This is called a reference-uncooperative LBLMFP problem [23]. The model can be formulated as follows:
$$
\begin{aligned}
&\min_{x_1\in X_1}\ F(x_1,x_2,x_3)=(c_1^1)^Tx_1+(c_1^2)^Tx_2+(c_1^3)^Tx_3,\\
&\qquad \min_{x_2\in X_2}\ f_1(x_2,x_3)=(c_2^2)^Tx_2+(c_2^3)^Tx_3,\\
&\qquad\qquad \text{s.t.}\ A_1x_1+A_2x_2+A_3x_3\le b_1,\\
&\qquad \min_{x_3\in X_3}\ f_2(x_2,x_3)=(c_3^2)^Tx_2+(c_3^3)^Tx_3,\\
&\qquad\qquad \text{s.t.}\ B_1x_1+B_2x_2+B_3x_3\le b_2,
\end{aligned}
$$
where $x_i\in X_i\subset\mathbb{R}^{n_i}_{+}$, $i=1,2,3$, $F:X_1\times X_2\times X_3\to\mathbb{R}$, $f_k:X_2\times X_3\to\mathbb{R}$, $k=1,2$, the $c_i^j$, for $i,j\in\{1,2,3\}$, are vectors of conformal dimension, and, for each $i\in\{1,2,3\}$, $A_i$ is an $m\times n_i$ matrix, $B_i$ is a $q\times n_i$ matrix, $b_1\in\mathbb{R}^m$, and $b_2\in\mathbb{R}^q$. Also, although the constraints $x_i\ge 0$, $i=1,2,3$, do not explicitly appear in this problem, we assume that they are included in the set of constraints.
Notice that the followers' objective functions are linear in $x_2$ and $x_3$ and that, for each follower, the value of $x_1$ is given. Thus, an equivalent LBLMFP problem is obtained if the followers' objective functions are replaced by $\sum_{j=1}^{3}(c_i^j)^Tx_j$, for $i=2,3$.
We need to introduce the following definitions that can be found in [23].
(1) Constraint region of the LBLMFP problem:
$$S:=\{(x_1,x_2,x_3)\in X_1\times X_2\times X_3:\ A_1x_1+A_2x_2+A_3x_3\le b_1,\ B_1x_1+B_2x_2+B_3x_3\le b_2\}.$$
We suppose that S is non-empty and compact.
(2) Projection of S onto the leader's decision space:
$$S(X_1):=\{x_1\in X_1:\ \exists (x_2,x_3)\in X_2\times X_3,\ (x_1,x_2,x_3)\in S\}.$$
(3) Feasible sets for the first and second followers, respectively:
$$S_1(x_1,x_3):=\{x_2\in X_2:\ (x_1,x_2,x_3)\in S\},$$
$$S_2(x_1,x_2):=\{x_3\in X_3:\ (x_1,x_2,x_3)\in S\}.$$
The feasible set of each follower is affected by the leader's choice of $x_1$ and by the other follower's decision.
(4) The first and second followers' rational reaction sets, respectively:
$$P_1(x_1,x_3):=\{x_2:\ x_2\in\operatorname{argmin}[f_1(\hat{x}_2,x_3):\ \hat{x}_2\in S_1(x_1,x_3)]\},$$
where
$$\operatorname{argmin}[f_1(\hat{x}_2,x_3):\ \hat{x}_2\in S_1(x_1,x_3)]=\{x_2\in S_1(x_1,x_3):\ f_1(x_2,x_3)\le f_1(\hat{x}_2,x_3),\ \forall\hat{x}_2\in S_1(x_1,x_3)\},$$
and
$$P_2(x_1,x_2):=\{x_3:\ x_3\in\operatorname{argmin}[f_2(x_2,\hat{x}_3):\ \hat{x}_3\in S_2(x_1,x_2)]\},$$
where
$$\operatorname{argmin}[f_2(x_2,\hat{x}_3):\ \hat{x}_3\in S_2(x_1,x_2)]=\{x_3\in S_2(x_1,x_2):\ f_2(x_2,x_3)\le f_2(x_2,\hat{x}_3),\ \forall\hat{x}_3\in S_2(x_1,x_2)\}.$$
(5) Inducible region (IR):
$$IR:=\{(x_1,x_2,x_3)\in S:\ x_2\in P_1(x_1,x_3),\ x_3\in P_2(x_1,x_2)\}.$$
In fact, the inducible region is the set of feasible solutions of the LBLMFP problem. Therefore, determining a solution of the LBLMFP problem is equivalent to solving the following problem:
$$\min\{F(x_1,x_2,x_3):\ (x_1,x_2,x_3)\in IR\}.\tag{2.1}$$
In order to ensure that (2.1) has an optimal solution [23], we suppose that the following assumptions hold:
(1) IR is non-empty.
(2) $P_1(x_1,x_3)$ and $P_2(x_1,x_2)$ are point-to-point mappings with respect to $(x_1,x_3)$ and $(x_1,x_2)$, respectively. In other words, for each choice of the leader's variable $x_1$, each follower's problem has a unique optimal solution.
Assume that $p\ge 2$ is an integer and that $c^i\in\mathbb{R}^n$, $i=1,2,\dots,p$, are row vectors. Let $C$ be the $p\times n$ matrix whose $i$-th row is $c^i$, $i=1,2,\dots,p$, and let $U$ be a non-empty, compact, convex polyhedral set in $\mathbb{R}^n$. A multiple objective linear programming (MOLP) problem is formulated in general as follows:
$$\min\{Cx:\ x\in U\},\qquad(\mathrm{MOLP})$$
where $U$ is called the feasible region.
Definition 1. [22] A feasible point $\bar{x}\in U$ is called an efficient solution if there exists no $x\in U$ such that $Cx\le C\bar{x}$ and $Cx\ne C\bar{x}$.
An efficient solution is also called a Pareto-optimal solution.
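For readers who wish to verify efficiency computationally, the following is a minimal sketch of a standard LP-based efficiency test (in the spirit of the tests discussed in [6]): a feasible point $\bar{x}$ is efficient if and only if the auxiliary LP below has optimal value zero. The use of Python with scipy.optimize.linprog, the function name is_efficient, and the representation of $U$ as $\{x\ge 0:\ A_{ub}x\le b_{ub}\}$ are illustrative assumptions, not part of the paper.

```python
import numpy as np
from scipy.optimize import linprog


def is_efficient(C, A_ub, b_ub, x_bar, tol=1e-8):
    """LP-based efficiency test (sketch): x_bar in U = {x >= 0 : A_ub x <= b_ub}
    is efficient for min Cx over U iff
        max { sum(z) : C x + z = C x_bar, x in U, z >= 0 }
    has optimal value 0 (any z > 0 would certify a dominating point)."""
    p, n = C.shape
    # Decision vector is (x, z); linprog minimizes, so use -sum(z).
    obj = np.concatenate([np.zeros(n), -np.ones(p)])
    A_eq = np.hstack([C, np.eye(p)])                      # C x + z = C x_bar
    b_eq = C @ np.asarray(x_bar, dtype=float)
    A_ub_full = np.hstack([A_ub, np.zeros((A_ub.shape[0], p))])
    res = linprog(obj, A_ub=A_ub_full, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + p), method="highs")
    return res.success and -res.fun <= tol
```

Since $U$ is assumed compact, the auxiliary LP is bounded, so the test is well defined.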
Let $E$ denote the set of all efficient solutions of the MOLP problem. Note that $E\ne\emptyset$ [6, Theorem 2.19].
Let $d\in\mathbb{R}^n$. Consider the following mathematical programming problem:
$$\min\{d^Tx:\ x\in E\}.\tag{2.2}$$
This is the problem of minimizing a linear function over the (in general non-convex) efficient set $E$ of the MOLP problem. Let $E^*$ denote the set of optimal solutions of problem (2.2) and $U_{ex}$ the set of extreme points of the polyhedron $U$. Because $U$ is a non-empty, compact, convex polyhedron in $\mathbb{R}^n$, $U_{ex}$ is non-empty [3, Theorem 2.6.5]. So, the following result holds:
Theorem 1. Let $U$ be a non-empty, compact, convex polyhedral set, and let $E^*$ be non-empty. Then, at least one element of $E^*$ is an extreme point of $U$ [4, Theorem 4.5].
By considering the relation $E\subseteq U$, we immediately get the following corollary to Theorem 1.
Corollary 1. Let $E^*\ne\emptyset$. Then, there is an extreme point of $E$ which is an optimal solution to problem (2.2).
From Theorem 3.40 in [6], because $U$ is a non-empty, compact, convex polyhedron in $\mathbb{R}^n$ and the objective functions $c^ix$, $i=1,2,\dots,p$, are convex, we can conclude that the efficient set $E$ is connected.
Moreover, it is well known that the efficient set $E$ consists of a finite union of faces of $U$ [6]. Therefore, we obtain the following corollary:
Corollary 2. The efficient set $E$ consists of a finite union of connected faces of the polyhedron $U$.
Corollary 2 obviously implies that the efficient set $E$ is closed.
Also, the following definition is used in the sequel:
Definition 2. [22] Let $U\subseteq\mathbb{R}^n$ and $x^1,x^2,\dots,x^r\in U$. The notation $\gamma(x^1,x^2,\dots,x^r)$ denotes the set of all convex combinations of $x^1,x^2,\dots,x^r$.
In this section, we introduce two MOLP problems in such a way that any point that is an extreme efficient solution of both problems simultaneously is an extreme feasible solution (in other words, an extreme point of $IR$) of the LBLMFP problem.
Let $\bar{x}=(\bar{x}_1,\bar{x}_2,\bar{x}_3)\in IR$, i.e., $\bar{x}_2\in P_1(\bar{x}_1,\bar{x}_3)$ and $\bar{x}_3\in P_2(\bar{x}_1,\bar{x}_2)$. Therefore, $\bar{x}_2$ is an optimal solution of the following linear program (LP):
$$\min_{x_2\in X_2}\ f_1(x_2,\bar{x}_3)=(c_2^2)^Tx_2+(c_2^3)^T\bar{x}_3,\tag{3.1}$$
$$\text{s.t.}\quad A_2x_2\le b_1-A_1\bar{x}_1-A_3\bar{x}_3,\tag{3.2}$$
and $\bar{x}_3$ is an optimal solution of the following LP:
$$\min_{x_3\in X_3}\ f_2(\bar{x}_2,x_3)=(c_3^2)^T\bar{x}_2+(c_3^3)^Tx_3,\tag{3.3}$$
$$\text{s.t.}\quad B_3x_3\le b_2-B_1\bar{x}_1-B_2\bar{x}_2.\tag{3.4}$$
Now, set $n:=\sum_{i=1}^{3}n_i$. We construct the augmented matrix $A^a_1=[A_1\ A_3]$ and let $\operatorname{rank}(A^a_1)=r_1$. Without loss of generality, we assume that $A^a_1$ can be rewritten as $\left[\begin{smallmatrix}\bar{A}_1 & \bar{A}_3\\ \hat{A}_1 & \hat{A}_3\end{smallmatrix}\right]$, where $[\bar{A}_1\ \bar{A}_3]$ is an $r_1\times(n_1+n_3)$ matrix and $\operatorname{rank}([\bar{A}_1\ \bar{A}_3])=\operatorname{rank}(A^a_1)=r_1$. (Notice that the augmented matrix $A^a_1$ contains an $(n_1+n_3)\times(n_1+n_3)$ identity submatrix corresponding to the non-negativity constraints, so $r_1=n_1+n_3$.) Similarly, we construct the augmented matrix $B^a_1=[B_1\ B_2]$, let $\operatorname{rank}(B^a_1)=r_2$, and assume that $B^a_1$ can be rewritten as $\left[\begin{smallmatrix}\bar{B}_1 & \bar{B}_2\\ \hat{B}_1 & \hat{B}_2\end{smallmatrix}\right]$, where $[\bar{B}_1\ \bar{B}_2]$ is an $r_2\times(n_1+n_2)$ matrix and $\operatorname{rank}([\bar{B}_1\ \bar{B}_2])=\operatorname{rank}(B^a_1)=r_2$. Let $k_i=r_i+2$, $i=1,2$, and define the $k_i\times n$ criterion matrices $C_i$, $i=1,2$, as follows:
$$C_1:=\begin{bmatrix}\bar{A}_1 & O & \bar{A}_3\\ -e^T\bar{A}_1 & 0 & -e^T\bar{A}_3\\ 0 & (c_2^2)^T & 0\end{bmatrix},\qquad C_2:=\begin{bmatrix}\bar{B}_1 & \bar{B}_2 & O\\ -e^T\bar{B}_1 & -e^T\bar{B}_2 & 0\\ 0 & 0 & (c_3^3)^T\end{bmatrix},\tag{3.5}$$
where $O$ and $0$ denote a zero matrix and zero vectors of conformal dimension, respectively, and $e$ is a vector whose entries are all equal to 1.
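As a computational illustration of (3.5), the sketch below assembles $C_1$ and $C_2$ with NumPy, assuming (as stipulated above) that the non-negativity rows are already included in the constraint blocks, so that the augmented matrices have full column rank. The helper independent_rows and all variable names are hypothetical choices of this illustration; row selection could equally be done by any rank-revealing factorization.

```python
import numpy as np


def independent_rows(M, tol=1e-9):
    """Greedily select a maximal set of linearly independent rows of M."""
    rows, rank = [], 0
    for r in M:
        cand = np.vstack(rows + [r]) if rows else r.reshape(1, -1)
        if np.linalg.matrix_rank(cand, tol=tol) > rank:
            rows.append(r)
            rank += 1
    return np.vstack(rows)


def build_criterion_matrices(A1, A2, A3, B1, B2, B3, c22, c33):
    """Assemble the criterion matrices C1 and C2 of (3.5)."""
    n1, n2, n3 = A1.shape[1], A2.shape[1], A3.shape[1]
    # Row bases [A1_bar A3_bar] and [B1_bar B2_bar] of the augmented matrices.
    Abar = independent_rows(np.hstack([A1, A3]))
    Bbar = independent_rows(np.hstack([B1, B2]))
    A1b, A3b = Abar[:, :n1], Abar[:, n1:]
    B1b, B2b = Bbar[:, :n1], Bbar[:, n1:]
    r1, r2 = Abar.shape[0], Bbar.shape[0]
    C1 = np.vstack([
        np.hstack([A1b, np.zeros((r1, n2)), A3b]),
        np.hstack([-np.ones(r1) @ A1b, np.zeros(n2), -np.ones(r1) @ A3b]),
        np.hstack([np.zeros(n1), np.asarray(c22, float), np.zeros(n3)]),
    ])
    C2 = np.vstack([
        np.hstack([B1b, B2b, np.zeros((r2, n3))]),
        np.hstack([-np.ones(r2) @ B1b, -np.ones(r2) @ B2b, np.zeros(n3)]),
        np.hstack([np.zeros(n1), np.zeros(n2), np.asarray(c33, float)]),
    ])
    return C1, C2
```

The construction above allows any full-row-rank choice of $[\bar{A}_1\ \bar{A}_3]$ and $[\bar{B}_1\ \bar{B}_2]$; the examples in Section 5 use the rows coming from the non-negativity constraints, so a different row selection (as the greedy helper here may make) yields a different, but equally admissible, pair of criterion matrices.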
Next, consider the following two MOLP problems:
$$\min\{C_ix:\ x\in S\},\ i=1,2.\qquad(\mathrm{MOLP}_i)$$
Let $E_i$ be the set of efficient solutions of $\mathrm{MOLP}_i$, $i=1,2$. We have the following result:
Proposition 1. Let $IR$ and $E_1$ be defined as above. Then, $IR\subseteq E_1$.
Proof. Assume that $\bar{x}=(\bar{x}_1,\bar{x}_2,\bar{x}_3)\in IR$. It suffices to show that $\bar{x}$ is an efficient solution of $\mathrm{MOLP}_1$. Suppose the contrary, i.e., there exists $x=(x_1,x_2,x_3)\in S$ such that $C_1x\le C_1\bar{x}$ and $C_1x\ne C_1\bar{x}$. Because $C_1x\le C_1\bar{x}$, using the structure of the matrix $C_1$, one obtains:
$$\bar{A}_1x_1+\bar{A}_3x_3\le\bar{A}_1\bar{x}_1+\bar{A}_3\bar{x}_3,\tag{3.6}$$
$$-e^T\bar{A}_1x_1-e^T\bar{A}_3x_3\le-e^T\bar{A}_1\bar{x}_1-e^T\bar{A}_3\bar{x}_3.\tag{3.7}$$
From inequality (3.6), one obtains:
$$-e^T\bar{A}_1x_1-e^T\bar{A}_3x_3\ge-e^T\bar{A}_1\bar{x}_1-e^T\bar{A}_3\bar{x}_3.\tag{3.8}$$
The last two inequalities imply that:
$$e^T\bar{A}_1x_1+e^T\bar{A}_3x_3=e^T\bar{A}_1\bar{x}_1+e^T\bar{A}_3\bar{x}_3.\tag{3.9}$$
From equality (3.9) and inequality (3.6), it is easy to see that $\bar{A}_1x_1+\bar{A}_3x_3=\bar{A}_1\bar{x}_1+\bar{A}_3\bar{x}_3$. So, we have:
$$A_1x_1+A_3x_3=A_1\bar{x}_1+A_3\bar{x}_3.\tag{3.10}$$
Obviously, $x_2$ is a feasible solution of problem (3.1)-(3.2). Moreover, since $C_1x\ne C_1\bar{x}$ and in view of equalities (3.9) and (3.10), we deduce that $(c_2^2)^Tx_2<(c_2^2)^T\bar{x}_2$, which contradicts the optimality of $\bar{x}_2$ for problem (3.1)-(3.2). This completes the proof.
The proof of the following proposition is similar to that of Proposition 1, and we omit it.
Proposition 2. Let $IR$ and $E_2$ be defined as above. Then, $IR\subseteq E_2$.
Now, setting $E:=E_1\cap E_2$, Propositions 1 and 2 yield:
Remark 1. Let $IR$, $E_1$, and $E_2$ be defined as above. Then, $IR\subseteq E_1\cap E_2=E$, and by Assumption (1), $E$ is a non-empty set.
This allows us to prove the main result of this section.
Theorem 2. The extreme points of IR and E are identical.
Proof. Let $x=(x_1,x_2,x_3)\in IR_{ex}$ be arbitrary. Then, $x$ is an extreme point of $S$ [23, Corollary 4.9]. Because $IR\subseteq E$, we get $x\in E$. Therefore, $x$ is an extreme point of $E$ (note that $E\subseteq S$). Since $x\in IR_{ex}$ was chosen arbitrarily, we obtain $IR_{ex}\subseteq E_{ex}$. Now, we show that $E_{ex}\subseteq IR_{ex}$. Let $x=(x_1,x_2,x_3)\in E_{ex}$ be chosen arbitrarily. There are two cases to consider.
Case 1: $x\notin IR$. In this case, we conclude from the definition of $IR$ that at least one of the following two subcases must occur:
Subcase 1: $x_2\notin P_1(x_1,x_3)$. It follows from Assumption (2) that there exists $\hat{x}_2\in P_1(x_1,x_3)$ such that $f_1(\hat{x}_2,x_3)<f_1(x_2,x_3)$. As a consequence, we get:
$$(c_2^2)^T\hat{x}_2<(c_2^2)^Tx_2.\tag{3.11}$$
Set $\hat{x}=(x_1,\hat{x}_2,x_3)$. By the definition of $P_1(x_1,x_3)$, $\hat{x}\in S$. Due to the structure of the matrix $C_1$ and inequality (3.11), one has $C_1\hat{x}\le C_1x$, $C_1\hat{x}\ne C_1x$, which contradicts $x\in E\subseteq E_1$.
Subcase 2: $x_3\notin P_2(x_1,x_2)$. By Assumption (2), there exists $\hat{x}_3\in P_2(x_1,x_2)$ such that $f_2(x_2,\hat{x}_3)<f_2(x_2,x_3)$. As a consequence, we have:
$$(c_3^3)^T\hat{x}_3<(c_3^3)^Tx_3.\tag{3.12}$$
Set $\hat{x}=(x_1,x_2,\hat{x}_3)$. By the definition of $P_2(x_1,x_2)$, $\hat{x}\in S$. Due to the structure of the matrix $C_2$ and inequality (3.12), one has $C_2\hat{x}\le C_2x$, $C_2\hat{x}\ne C_2x$, which contradicts $x\in E\subseteq E_2$.
Case 2: $x\in IR$. Since $x\in E_{ex}$ and $IR\subseteq E$ (see Remark 1), we conclude that $x\in IR_{ex}$. This completes the proof.
Now, let us consider the following problem:
$$\min\{F(x_1,x_2,x_3)=(c_1^1)^Tx_1+(c_1^2)^Tx_2+(c_1^3)^Tx_3:\ (x_1,x_2,x_3)\in E\}.\tag{3.13}$$
Remark 2. Corollary 2 implies that the efficient set $E$ is closed. Also, since $E$ is a closed subset of the compact polyhedral set $S$, it is itself compact. Therefore, problem (3.13) involves the minimization of a linear function over a compact set, and hence it has an optimal solution, i.e., $E^*\ne\emptyset$. Then, according to Corollary 1, there is an extreme point of $E$ which is an optimal solution of problem (3.13).
The relation between the LBLMFP problem and problem (3.13) is stated in the following theorem.
Theorem 3. A point $\bar{x}=(\bar{x}_1,\bar{x}_2,\bar{x}_3)\in IR_{ex}$ is an optimal solution of the LBLMFP problem if, and only if, it is an optimal solution of problem (3.13).
Proof. It is well known that solving the LBLMFP problem is equivalent to solving problem (2.1). Furthermore, there exists an extreme point of $IR$ which is an optimal solution of problem (2.1) [23]. Because an optimal solution of problem (3.13) occurs at an extreme point of $E$, and the extreme points of $E$ and $IR$ coincide, we conclude that if $\bar{x}$ is an optimal solution of problem (3.13), then it is an optimal solution of the LBLMFP problem, and vice versa. This completes the proof.
In the next section, based on Theorem 3, we propose a new algorithm to solve the LBLMFP problem.
Based on the preceding discussion, we now propose a new algorithm for solving LBLMFP problems. First, one solves the two problems $\mathrm{MOLP}_i$, $i=1,2$, in order to obtain the sets $E_i$, $i=1,2$, and lets $E$ be the set of points that are efficient solutions of both problems. Then, one minimizes the leader's objective function over the set $S$. If the resulting extreme optimal solution lies in the set $E$, it solves the LBLMFP problem. Otherwise, one solves problem (3.13); by Theorem 3, the resulting extreme optimal solution solves the LBLMFP problem. The algorithm can be described as follows:
The Algorithm:
Step 1. Construct $\mathrm{MOLP}_i$, $i=1,2$.
Step 2. Find the efficient sets $E_i$ of $\mathrm{MOLP}_i$, $i=1,2$.
∙ For instance, approaches presented in [6,17,20,22] can be used in Step 2.
Step 3. Set $E=E_1\cap E_2$.
Step 4. Solve the following LP:
$$\min\{F(x_1,x_2,x_3)=(c_1^1)^Tx_1+(c_1^2)^Tx_2+(c_1^3)^Tx_3:\ (x_1,x_2,x_3)\in S\}.\tag{4.1}$$
Let $x^*$ be an optimal solution (one can use any of the standard LP solution methods; see [2]). If $x^*\in E$, stop: $x^*$ is an optimal solution of the LBLMFP problem. Otherwise, go to Step 5.
Step 5. Find an optimal solution of problem (3.13) using, for instance, the approaches developed in [4,11,12,15]. Let $x^*$ be an optimal solution of problem (3.13). Then, it is an optimal solution of the LBLMFP problem as well.
∙ Notice that, by Remark 2, there exists an extreme point of $E$ which is an optimal solution of problem (3.13). Hence, in Step 5, when the number of variables and extreme points is small, we can find an optimal solution by picking the minimum objective function value among all extreme points of $E$.
Note that, because the efficient set $E$ is not convex in general, problem (3.13) is a non-convex optimization problem. Thus, in Step 5, we face a non-convex optimization problem that is generally difficult to solve. Moreover, the algorithm stops as soon as an efficient optimal solution is obtained: Step 4 solves the LP (4.1) and checks whether $x^*\in E$; if $x^*\in E$, the algorithm stops at Step 4 and does not enter Step 5. Therefore, Step 4 helps to reduce the computational effort in some cases.
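For small instances in which the extreme points of $S$ can be listed, the algorithm admits the following brute-force sketch, which combines Steps 2-5 using the LP-based efficiency test sketched in Section 2: the extreme points of $E$ are exactly the vertices of $S$ that are efficient for both MOLP problems (cf. Corollary 2 and Theorem 2), and by Remark 2 it suffices to compare the leader's objective over these vertices in Step 5. The function names, the vertex list supplied by the caller, and the use of scipy.optimize.linprog are illustrative assumptions; this is only a vertex-enumeration variant for small examples, not a substitute for the facial-decomposition machinery of [20].

```python
import numpy as np
from scipy.optimize import linprog

# is_efficient(C, A_ub, b_ub, x, tol): the LP-based efficiency test
# sketched after Definition 1 is assumed to be available here.


def solve_lblmfp_by_vertices(c_leader, C1, C2, A_ub, b_ub, vertices, tol=1e-8):
    """Brute-force sketch of Steps 2-5 for small instances, where
    S = {x >= 0 : A_ub x <= b_ub} and `vertices` lists its extreme points."""
    # Steps 2-3 (vertex version): extreme points of E = E1 intersect E2.
    E_ex = [np.asarray(v, float) for v in vertices
            if is_efficient(C1, A_ub, b_ub, v, tol)
            and is_efficient(C2, A_ub, b_ub, v, tol)]
    # Step 4: minimize the leader's objective over S.
    res = linprog(c_leader, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * len(c_leader), method="highs")
    x_star = res.x
    if is_efficient(C1, A_ub, b_ub, x_star, tol) and \
       is_efficient(C2, A_ub, b_ub, x_star, tol):
        return x_star                       # x* already lies in E
    # Step 5 (vertex version, justified by Remark 2): best extreme point of E.
    return min(E_ex, key=lambda v: float(np.dot(c_leader, v)))
```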
Before applying the proposed algorithm to numerical examples, we state the following lemma:
Lemma 1. Let $E$ be defined as above. Then $E\subseteq\operatorname{conv}IR$, where $\operatorname{conv}IR$ is the smallest convex set containing $IR$.
Proof. Let $\hat{x}\in E$ be chosen arbitrarily. Suppose, on the contrary, that $\hat{x}\notin\operatorname{conv}IR$. Because $IR$ is closed [23, Theorem 4.9], $\operatorname{conv}IR$ is a convex and closed set. Therefore, there exists a non-zero vector $P$ [3, Theorem 2.4.4] such that $P^T\hat{x}<P^Ty$ for all $y\in\operatorname{conv}IR$. Since $S$ is compact and $\operatorname{conv}IR$ is a closed subset of $S$, $\operatorname{conv}IR$ is compact as well. Furthermore, because $y\in\operatorname{conv}IR$, according to the representation theorem [3], we have $y=\sum_{i=1}^{n}\lambda_ix^i$, $\sum_{i=1}^{n}\lambda_i=1$, $0\le\lambda_i\le 1$, $x^i\in(\operatorname{conv}IR)_{ex}$. On the other hand, $(\operatorname{conv}IR)_{ex}=IR_{ex}=E_{ex}$; thus,
$$P^T\hat{x}<P^T\Big(\sum_{i=1}^{n}\lambda_ix^i\Big),\quad\sum_{i=1}^{n}\lambda_i=1,\ 0\le\lambda_i\le 1,\ x^i\in E_{ex}.\tag{4.2}$$
Now, consider the problem $\min\{P^Tx:\ x\in E\}$. By Remark 2, there exists $x^j\in E_{ex}$ which is an optimal solution of this problem. Then, $P^Tx^j\le P^Tx$ for all $x\in E$. Because $\hat{x}\in E$, we have:
$$P^Tx^j\le P^T\hat{x}.\tag{4.3}$$
Since $x^j\in E_{ex}$, it follows from (4.2) that $P^T\hat{x}<P^Tx^j$, which contradicts (4.3). Hence, $\hat{x}\in\operatorname{conv}IR$.
Note that the efficient set $E$ and the inducible region $IR$ are, in general, not convex sets [1,5,6,22]. Up to now, we have proved that $IR\subseteq E\subseteq\operatorname{conv}IR$. In the following, we show that these inclusions become equalities in some special cases involving convexity.
Corollary 3. If the efficient set $E$ is a convex set, then $E=\operatorname{conv}IR$.
Proof. By the convexity of $E$ and Remark 1, we obtain $\operatorname{conv}IR\subseteq E$. Also, according to Lemma 1, we have $E\subseteq\operatorname{conv}IR$. Then, $E=\operatorname{conv}IR$.
Corollary 4. If the inducible region $IR$ is a convex set, then $E=IR$.
Proof. According to Remark 1, $IR\subseteq E$. Because $IR$ is a convex set, $\operatorname{conv}IR=IR$. It follows from Lemma 1 that $E\subseteq IR$. Then, $E=IR$.
In this section, we apply the proposed algorithm to some numerical examples.
Example 1. Consider the following LBLMFP problem:
$$
\begin{aligned}
&\min_{x_1}\ F(x_1,x_2,x_3)=3x_1+x_2+x_3,\\
&\qquad\min_{x_2}\ f_1(x_2,x_3)=-x_2+x_3,\\
&\qquad\qquad\text{s.t.}\ x_1+x_2\le 8,\ x_1+4x_2\ge 8,\ x_1+2x_2\le 13,\ 7x_1-2x_2\ge 0,\ x_1\ge 0,\ x_2\ge 0,\ x_3\ge 0,\\
&\qquad\min_{x_3}\ f_2(x_2,x_3)=2x_3,\\
&\qquad\qquad\text{s.t.}\ x_1\ge 0,\ x_2\ge 0,\ 0\le x_3\le 2.
\end{aligned}
$$
In this problem, $S=\{(x_1,x_2,x_3):\ x_1+x_2\le 8,\ x_1+4x_2\ge 8,\ x_1+2x_2\le 13,\ 7x_1-2x_2\ge 0,\ x_1\ge 0,\ x_2\ge 0,\ 0\le x_3\le 2\}$, and the extreme points of $S$ are as follows:
$$a_1=(8,0,0),\ a_2=(8,0,2),\ a_3=(3,5,0),\ a_4=(3,5,2),\ a_5=\left(\tfrac{13}{8},\tfrac{91}{16},0\right),\ a_6=\left(\tfrac{13}{8},\tfrac{91}{16},2\right),\ a_7=\left(\tfrac{8}{15},\tfrac{28}{15},0\right),\ a_8=\left(\tfrac{8}{15},\tfrac{28}{15},2\right).$$
By [7], one can obtain:
$$IR=\left\{(x_1,x_2,x_3):\ \tfrac{8}{15}\le x_1\le\tfrac{13}{8},\ x_2=\tfrac{7}{2}x_1,\ x_3=0\right\}\cup\left\{(x_1,x_2,x_3):\ \tfrac{13}{8}\le x_1\le 3,\ x_2=\tfrac{13}{2}-\tfrac{1}{2}x_1,\ x_3=0\right\}\cup\left\{(x_1,x_2,x_3):\ 3\le x_1\le 8,\ x_2=8-x_1,\ x_3=0\right\}.$$
Figure 1 displays $S$ and $IR$; the inducible region $IR$ is denoted by the hatched lines.
Note that in this example, $S$ and $IR$ are non-empty and $S$ is compact. Also, one can obtain:
$$P_1(x_1,x_3)=\left\{(x_1,x_2,x_3)\in S:\ x_2=\tfrac{7}{2}x_1,\ \tfrac{8}{15}\le x_1\le\tfrac{13}{8}\right\}\cup\left\{(x_1,x_2,x_3)\in S:\ x_2=\tfrac{13}{2}-\tfrac{1}{2}x_1,\ \tfrac{13}{8}\le x_1\le 3\right\}\cup\left\{(x_1,x_2,x_3)\in S:\ x_2=8-x_1,\ 3\le x_1\le 8\right\},$$
$$P_2(x_1,x_2)=\{(x_1,x_2,x_3)\in S:\ x_3=0\}.$$
Hence, $P_1(x_1,x_3)\ne\emptyset$, $P_2(x_1,x_2)\ne\emptyset$, and $P_1(x_1,x_3)$ and $P_2(x_1,x_2)$ are point-to-point maps with respect to $(x_1,x_3)$ and $(x_1,x_2)$, respectively. Therefore, Assumptions (1) and (2) hold, and we can solve this problem by the proposed algorithm. The process is as follows:
Step 1. The problems $\mathrm{MOLP}_i$, $i=1,2$, are constructed as follows, respectively:
$$\min\{(x_1,\ x_3,\ -x_1-x_3,\ -x_2):\ (x_1,x_2,x_3)\in S\},\qquad\min\{(x_1,\ x_2,\ -x_1-x_2,\ 2x_3):\ (x_1,x_2,x_3)\in S\}.$$
Step 2. Using the approach described in [20] to find the sets $E_i$, $i=1,2$, one can obtain:
$$E_1=\gamma(a_1,a_3,a_4,a_2)\cup\gamma(a_3,a_5,a_6,a_4)\cup\gamma(a_5,a_7,a_8,a_6),\qquad E_2=\gamma(a_1,a_3,a_5,a_7).$$
The sets $E_i$, $i=1,2$, are shown in Figure 2(a) and (b) by the hatched regions and the gray area, respectively.
Step 3. Set $E=E_1\cap E_2$. We obtain $E=\gamma(a_1,a_3)\cup\gamma(a_3,a_5)\cup\gamma(a_5,a_7)=IR$, which is shown in Figure 1 by the hatched lines.
Step 4. Solve the following LP:
$$\min\{3x_1+x_2+x_3:\ (x_1,x_2,x_3)\in S\}.$$
The optimal solution is $x^*=\left(\tfrac{8}{15},\tfrac{28}{15},0\right)$. Because $x^*\in E$, stop: $x^*$ is an optimal solution of this problem. We also solved this example using the multi-parametric approach [7] and obtained $x^*=\left(\tfrac{8}{15},\tfrac{28}{15},0\right)$, which coincides with the optimal solution obtained by the proposed algorithm.
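As a numerical cross-check of Step 4, one may solve LP (4.1) for this example with any LP solver; the sketch below writes $S$ in $\le$-form with non-negativity handled through variable bounds and uses scipy.optimize.linprog, which is an assumption of this illustration rather than part of the original solution procedure.

```python
import numpy as np
from scipy.optimize import linprog

# Constraint region S of Example 1 in <=-form:
#   x1 + x2 <= 8,  -x1 - 4x2 <= -8,  x1 + 2x2 <= 13,  -7x1 + 2x2 <= 0
A_ub = np.array([[ 1.0,  1.0, 0.0],
                 [-1.0, -4.0, 0.0],
                 [ 1.0,  2.0, 0.0],
                 [-7.0,  2.0, 0.0]])
b_ub = np.array([8.0, -8.0, 13.0, 0.0])
bounds = [(0, None), (0, None), (0, 2)]   # x1, x2 >= 0 and 0 <= x3 <= 2

# Step 4: minimize the leader's objective F = 3x1 + x2 + x3 over S.
res = linprog([3.0, 1.0, 1.0], A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x)    # approximately (8/15, 28/15, 0) = (0.5333..., 1.8666..., 0)
print(res.fun)  # approximately 52/15
```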
Example 2. Consider the following LBLMFP problem:
$$
\begin{aligned}
&\min_{x_1}\ F(x_1,x_2,x_3)=3x_1+x_2-x_3,\\
&\qquad\min_{x_2}\ f_1(x_2,x_3)=2x_2-3x_3,\\
&\qquad\qquad\text{s.t.}\ x_1+x_2\ge 1,\ x_1\ge 0,\ x_2\ge 0,\ x_3\ge 0,\\
&\qquad\min_{x_3}\ f_2(x_2,x_3)=-4x_2+x_3,\\
&\qquad\qquad\text{s.t.}\ 2x_1+x_2+x_3\le 5,\ x_1\ge 0,\ x_2\ge 0,\ x_3\ge 0.
\end{aligned}
$$
In this problem, $S=\{(x_1,x_2,x_3):\ 2x_1+x_2+x_3\le 5,\ x_1+x_2\ge 1,\ x_1\ge 0,\ x_2\ge 0,\ x_3\ge 0\}$, and the extreme points of $S$ are as follows:
$$a_1=(2.5,0,0),\ a_2=(1,0,0),\ a_3=(0,1,0),\ a_4=(0,5,0),\ a_5=(0,1,4),\ a_6=(1,0,3).$$
By [7], one can obtain:
$$IR=\{(x_1,x_2,x_3):\ 0\le x_1\le 1,\ x_2=1-x_1,\ x_3=0\}\cup\left\{(x_1,x_2,x_3):\ 1\le x_1\le\tfrac{5}{2},\ x_2=0,\ x_3=0\right\}.$$
The sets S and IR are drawn in Figure 3. The inducible region IR is denoted by the hatched lines.
Note that in this example, $S$ and $IR$ are non-empty and $S$ is compact. Also, one can obtain:
$$P_1(x_1,x_3)=\{(x_1,x_2,x_3)\in S:\ x_2=1-x_1,\ 0\le x_1\le 1\}\cup\left\{(x_1,x_2,x_3)\in S:\ x_2=0,\ 1\le x_1\le\tfrac{5}{2}-\tfrac{1}{2}x_3\right\},\qquad P_2(x_1,x_2)=\{(x_1,x_2,x_3)\in S:\ x_3=0\}.$$
Hence, $P_1(x_1,x_3)\ne\emptyset$, $P_2(x_1,x_2)\ne\emptyset$, and $P_1(x_1,x_3)$ and $P_2(x_1,x_2)$ are point-to-point maps with respect to $(x_1,x_3)$ and $(x_1,x_2)$, respectively. Therefore, Assumptions (1) and (2) hold, and we can solve this problem by the proposed algorithm. Applying the proposed algorithm to this example, we have:
Step 1. The problems $\mathrm{MOLP}_i$, $i=1,2$, are constructed as follows, respectively:
$$\min\{(x_1,\ x_3,\ -x_1-x_3,\ 2x_2):\ (x_1,x_2,x_3)\in S\},\qquad\min\{(x_1,\ x_2,\ -x_1-x_2,\ x_3):\ (x_1,x_2,x_3)\in S\}.$$
Step 2. One can obtain the sets $E_i$, $i=1,2$, using the approach described in [20]:
$$E_1=\gamma(a_1,a_2,a_6)\cup\gamma(a_2,a_3,a_5,a_6),\qquad E_2=\gamma(a_1,a_2,a_3,a_4).$$
The sets $E_i$, $i=1,2$, are drawn in Figure 4(a) and (b) by the hatched regions and the gray area, respectively.
Step 3. Set $E=E_1\cap E_2$. So, we obtain $E=\gamma(a_1,a_2)\cup\gamma(a_2,a_3)$. Here, $E$ coincides with $IR$, which is shown in Figure 3 by the hatched lines.
Step 4. Solve the following LP:
$$\min\{3x_1+x_2-x_3:\ (x_1,x_2,x_3)\in S\}.$$
The optimal solution is $x^*=(0,1,4)$. Because $x^*\notin E$, go to Step 5.
Step 5. Solve the following problem:
$$\min\{3x_1+x_2-x_3:\ (x_1,x_2,x_3)\in E\}.$$
This problem is a non-convex optimization problem. According to Remark 2, we need only consider the extreme points of $E$. We conclude that the point $x^*=(0,1,0)$ is an optimal solution. One can also solve this example using the multi-parametric approach and obtain $x^*=(0,1,0)$, which coincides with the optimal solution obtained by the proposed algorithm.
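Since Remark 2 reduces Step 5 to comparing the extreme points of $E$, the check for this example is a short computation; the sketch below (an illustration, not part of the original solution procedure) evaluates the leader's objective at the extreme points $a_1$, $a_2$, $a_3$ of $E$.

```python
import numpy as np

# Extreme points of E = gamma(a1, a2) union gamma(a2, a3) in Example 2.
E_ex = np.array([[2.5, 0.0, 0.0],    # a1
                 [1.0, 0.0, 0.0],    # a2
                 [0.0, 1.0, 0.0]])   # a3
F = E_ex @ np.array([3.0, 1.0, -1.0])   # leader's objective 3x1 + x2 - x3
print(E_ex[np.argmin(F)])               # (0, 1, 0), the optimal solution
```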
Example 3. Consider the following LBLMFP problem:
$$
\begin{aligned}
&\min_{x_1}\ F(x_1,x_2,x_3)=-x_1-4x_2,\\
&\qquad\min_{x_2}\ f_1(x_2,x_3)=3x_2-2x_3,\\
&\qquad\qquad\text{s.t.}\ x_1+x_2\le 2,\ x_1\ge 0,\ x_2\ge 0,\\
&\qquad\min_{x_3}\ f_2(x_2,x_3)=-x_2+4x_3,\\
&\qquad\qquad\text{s.t.}\ x_1\ge 0,\ x_2\ge 0,\ 2\le x_3\le 4.
\end{aligned}
$$
In this problem, $S=\{(x_1,x_2,x_3):\ x_1+x_2\le 2,\ x_1\ge 0,\ x_2\ge 0,\ 2\le x_3\le 4\}$, and the extreme points of $S$ are as follows:
$$a_1=(2,0,2),\ a_2=(0,0,2),\ a_3=(0,2,2),\ a_4=(0,2,4),\ a_5=(0,0,4),\ a_6=(2,0,4).$$
By [7], one can obtain:
$$IR=\{(x_1,x_2,x_3):\ 0\le x_1\le 2,\ x_2=0,\ x_3=2\}.$$
Figure 5 displays the constraint region $S$ and the inducible region $IR$; the inducible region is denoted by the hatched line. Note that in this example, $S$ and $IR$ are non-empty and $S$ is compact. Also, one can obtain:
$$P_1(x_1,x_3)=\{(x_1,x_2,x_3)\in S:\ x_2=0\},\qquad P_2(x_1,x_2)=\{(x_1,x_2,x_3)\in S:\ x_3=2\}.$$
Hence, $P_1(x_1,x_3)\ne\emptyset$, $P_2(x_1,x_2)\ne\emptyset$, and $P_1(x_1,x_3)$ and $P_2(x_1,x_2)$ are point-to-point maps with respect to $(x_1,x_3)$ and $(x_1,x_2)$, respectively. Therefore, Assumptions (1) and (2) hold, and we can solve this problem by the proposed algorithm. Applying the proposed algorithm to this example, we have:
Step 1. The problems $\mathrm{MOLP}_i$, $i=1,2$, are constructed as follows, respectively:
$$\min\{(x_1,\ x_3,\ -x_1-x_3,\ 3x_2):\ (x_1,x_2,x_3)\in S\},\qquad\min\{(x_1,\ x_2,\ -x_1-x_2,\ 4x_3):\ (x_1,x_2,x_3)\in S\}.$$
Step 2. One can obtain the sets $E_i$, $i=1,2$, using the approach described in [20]:
$$E_1=\gamma(a_1,a_2,a_5,a_6),\qquad E_2=\gamma(a_1,a_2,a_3).$$
The sets $E_i$, $i=1,2$, are drawn in Figure 6(a) and (b) by the gray areas, respectively.
Step 3. Set $E=E_1\cap E_2$. So, we obtain $E=\gamma(a_1,a_2)=IR$, which is shown in Figure 5 by the hatched line.
Step 4. Solve the following LP:
$$\min\{-x_1-4x_2:\ (x_1,x_2,x_3)\in S\}.$$
The extreme optimal solutions are $x^*=(0,2,2)$ and $y^*=(0,2,4)$. Because $x^*,y^*\notin E$, go to Step 5.
Step 5. Solve the following problem:
$$\min\{-x_1-4x_2:\ (x_1,x_2,x_3)\in E\}.$$
Since $E$ is convex, this problem is a linear programming problem. An optimal solution is $x^*=(2,0,2)$. One can also solve this example using the multi-parametric approach and obtain $x^*=(2,0,2)$, which coincides with the optimal solution obtained by the proposed algorithm.
In this paper, we have presented a relation between a class of LBLMFP problems and multi-criteria optimization for the first time. We have shown how to construct two MOLP problems so that the extreme points of the set of efficient solutions for both problems coincide with those of the set of feasible solutions of the LBLMFP problem. It is proved that solving the given LBLMFP problem is equivalent to optimizing the leader's objective function over a certain efficient set. Based on this result, we proposed an algorithm for solving the LBLMFP problem, and we also showed that it can be simplified in some special cases. Further studies are being conducted to improve the performance of the proposed algorithm and to extend it to other classes of LBLMFP problems.
All authors declare no conflicts of interest in this paper.
[1] J. F. Bard, Practical Bilevel Optimization, Kluwer Academic, Dordrecht, 1998.
[2] M. S. Bazaraa, J. J. Jarvis, H. D. Sherali, Linear Programming and Network Flows, 2nd ed., Wiley, New York, 1977.
[3] M. S. Bazaraa, H. D. Sherali, C. M. Shetty, Nonlinear Programming: Theory and Algorithms, 3rd ed., Wiley, New York, 2006.
[4] H. P. Benson, Optimization over the efficient set, J. Math. Anal. Appl., 98 (1984), 562-580. doi: 10.1016/0022-247X(84)90269-5
[5] S. Dempe, Foundations of Bilevel Programming, Kluwer Academic, Dordrecht, 2003.
[6] M. Ehrgott, Multicriteria Optimization, 2nd ed., Springer-Verlag, Berlin, 2005.
[7] N. P. Faísca, P. M. Saraiva, B. Rustem, et al., A multi-parametric programming approach for multilevel hierarchical and decentralised optimisation problems, Computational Management Science, 6 (2009), 377-397. doi: 10.1007/s10287-007-0062-z
[8] J. Fülöp, On the equivalency between a linear bilevel programming problem and linear optimization over the efficient set, Technical Report WP 93-1, Laboratory of Operations Research and Decision Systems, Computer and Automation Institute, Hungarian Academy of Sciences, 1993.
[9] J. Glackin, J. G. Ecker, M. Kupferschmid, Solving bilevel linear programs using multiple objective linear programming, J. Optimiz. Theory App., 140 (2009), 197-212. doi: 10.1007/s10957-008-9467-2
[10] J. Han, G. Zhang, Y. Hu, et al., A solution to bi/tri-level programming problems using particle swarm optimization, Inform. Sciences, 370 (2016), 519-537.
[11] R. Horst, N. V. Thoai, Y. Yamamoto, et al., On optimization over the efficient set in linear multicriteria programming, J. Optimiz. Theory App., 134 (2007), 433-443. doi: 10.1007/s10957-007-9219-8
[12] J. Jorge, A bilinear algorithm for optimizing a linear function over the efficient set of a multiple objective linear programming problem, J. Global Optim., 31 (2005), 1-16. doi: 10.1007/s10898-003-3784-7
[13] J. Lu, C. Shi, G. Zhang, et al., Model and extended Kuhn-Tucker approach for bilevel multi-follower decision making in a referential-uncooperative situation, J. Global Optim., 38 (2007), 597-608. doi: 10.1007/s10898-006-9098-9
[14] W. Ma, M. Wang, X. Zhu, Improved particle swarm optimization based approach for bilevel programming problem - an application on supply chain model, Int. J. Mach. Learn. Cyb., 5 (2014), 281-292. doi: 10.1007/s13042-013-0167-3
[15] B. Metev, Multiobjective optimization methods help to minimize a function over the efficient set, Cybernetics and Information Technologies, 7 (2007), 22-28.
[16] C. Miao, G. Du, R. J. Jiao, et al., Coordinated optimisation of platform-driven product line planning by bilevel programming, Int. J. Prod. Res., 55 (2017), 3808-3831. doi: 10.1080/00207543.2017.1294770
[17] C. O. Pieume, L. P. Fotso, P. Siarry, An approach for finding efficient points in multiobjective linear programming, Journal of Information and Optimization Sciences, 29 (2008), 203-216. doi: 10.1080/02522667.2008.10699800
[18] C. O. Pieume, L. P. Fotso, P. Siarry, Solving bilevel programming problems with multicriteria optimization techniques, Opsearch, 46 (2009), 169-183. doi: 10.1007/s12597-009-0011-4
[19] A. S. Safaei, S. Farsad, M. M. Paydar, Robust bi-level optimization of relief logistics operations, Appl. Math. Model., 56 (2018), 359-380. doi: 10.1016/j.apm.2017.12.003
[20] S. Sayin, An algorithm based on facial decomposition for finding the efficient set in multiple objective linear programming, Oper. Res. Lett., 19 (1996), 87-94. doi: 10.1016/0167-6377(95)00046-1
[21] C. Shi, G. Zhang, J. Lu, A Kth-best approach for linear bilevel multi-follower programming, J. Global Optim., 33 (2005), 563-578. doi: 10.1007/s10898-004-7739-4
[22] R. Steuer, Multiple Criteria Optimization: Theory, Computation, and Application, Wiley, New York, 1986.
[23] G. Zhang, J. Lu, Y. Gao, Multi-Level Decision Making: Models, Methods and Applications, Springer-Verlag, Berlin, 2016.