1. Introduction
A nested optimization problem with two levels in a hierarchy, i.e., upper-level and lower-level decision-making, is known as a bilevel programming problem. Each level has its own objective function and constraints. The decision-maker at the upper level always takes the lead, followed by the one at the lower level. The objective function and constraints of the upper-level program depend on its own decision variables and on the optimal solution of the lower-level program. The decision-maker at the lower level must optimize its objective function with respect to the variables provided by the decision-maker at the upper level, who in turn chooses its variables with full knowledge of the lower level's potential responses. In contrast to multiobjective mathematical programming methodologies, bilevel mathematical programming stresses the non-cooperative nature of the system. This hierarchical model has several applications, including resource allocation, decentralized control, and network design. Its successful application depends on how well it can be solved in the presence of realistic complications [3]. Bilevel mathematical programming has attracted a lot of attention, and many effective algorithms have been presented. Current methods for solving NBLP problems can be divided into four categories: the Karush-Kuhn-Tucker condition approach [1,15,17,19,21], the penalty function approach [24,29], the descent approach [22,36], and the evolutionary approach [41].
In this paper, a class of NBLP problems is reduced to a traditional nonlinear programming problem with complementarity constraints by replacing the lower-level problem with its Karush-Kuhn-Tucker optimality conditions. The CHKS smoothing function is then used to smooth the resulting problem.
The smoothed nonlinear programming problem (SNLP) is solved by using the nonmonotone active-set interior-point trust-region technique to obtain an approximately optimal solution to the nonlinear bilevel programming problem. In this technique, the SNLP problem is transformed into an equality-constrained optimization problem with bounded variables by using an active-set approach and the penalty method; for more details see [12,13,14,15,18,19]. To solve the equality-constrained optimization problem with bounded variables, Newton's interior-point approach [6] is used. Because Newton's interior-point method is a local method, it might not converge if the starting point is far from a stationary point. A trust-region strategy is used to treat this problem and ensure convergence to a stationary point from any starting point. A trust-region technique can induce strong global convergence, and it is a very important method for solving unconstrained and constrained optimization problems; see [10,11,12,13,14,16,17,18,19]. One advantage of the trust-region technique is that it does not require the model's objective function to be convex. However, in the traditional trust-region strategy, after solving a trust-region subproblem we must use some criterion to decide whether the trial step is acceptable; otherwise, the trust-region radius needs to be reduced. The method for calculating the trust-region radius Δk at each iteration is an important part of trust-region techniques. The standard trust-region strategy is predicated on the agreement between the objective function and the model: the trust-region radius is updated according to the ratio tk = Aredk/Predk, where Aredk denotes the actual reduction and Predk denotes the predicted reduction. Whenever tk is close to 1, there is good agreement between the model and the objective function over the current trust region.
It is well-known that the standard trust-region radius Δk is independent of the gradient and Hessian of the objective function, so we cannot know whether the radius Δk is suitable throughout the whole run. This might increase the number of subproblems that must be solved in the method's inner iterations, which reduces the method's efficiency.
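As an illustration of the standard acceptance-and-update rule described above, the following minimal sketch uses hypothetical thresholds and scaling factors (it is not the specific rule of Algorithm 3.5 below):

```python
def update_radius(t_k, step_norm, delta_k,
                  eta1=0.25, eta2=0.75, shrink=0.5, grow=2.0):
    """Standard trust-region update: t_k = Ared_k / Pred_k measures the
    model/objective agreement over the current trust region.
    Returns (step_accepted, new_radius)."""
    if t_k < eta1:           # poor agreement: reject the step, shrink the region
        return False, shrink * step_norm
    if t_k >= eta2:          # very good agreement: accept and possibly enlarge
        return True, max(delta_k, grow * step_norm)
    return True, delta_k     # acceptable agreement: keep the radius
```

A ratio near 1 lands in the last two branches, so the region is kept or enlarged; a small or negative ratio forces a shorter step on the next attempt.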
To overcome this problem, many authors have proposed various nonmonotone trust-region methods; see, for example, [9,31,37,38,43,44]. Motivated by the nonmonotone trust-region strategy in [31], we use it in our method. It is generally promising and efficient, and it can overcome the aforementioned shortcomings.
Furthermore, the usefulness of the CHKS smoothing function with the nonmonotone active-set interior-point trust-region algorithm for solving NBLP problems was examined by using several benchmark problems and a real-world watershed trading decision-making problem. Numerical experiments show that the suggested method surpasses rival algorithms in terms of efficacy.
This paper is organized as follows: Section 2 introduces the mathematical formulation of the NBLP problem, basic definitions of the CHKS smoothing function, and the smoothing method for the nonlinear complementarity problem used to obtain the SNLP problem. Section 3 introduces the nonmonotone active-set interior-point trust-region algorithm for solving the SNLP problem. Results from simulations on several benchmark problems and a real-world watershed trading decision-making problem are reported in Section 4. The conclusion is given in Section 5.
2. Mathematical model for the SNLP problem
In this paper, we will consider the following NBLP problem
where y∈ℜn1 and z∈ℜn2. The functions fu:ℜn1+n2→ℜ, fl:ℜn1+n2→ℜ, cu:ℜn1+n2→ℜm1, and cl:ℜn1+n2→ℜm2 are assumed to be at least twice continuously differentiable functions in our method.
The NBLP problem (2.1) is reduced to the following single-objective optimization problem by using the Karush-Kuhn-Tucker optimality conditions for the lower-level problem:
where λ∈ℜm2 is the multiplier vector associated with the inequality constraint cl(y,z). Problem (2.2) with the nonlinear complementarity condition is non-convex and non-differentiable; moreover, the regularity assumptions required to handle smooth optimization problems are never satisfied, so our approach cannot be applied directly to problem (2.2). Due to this, we use the CHKS smoothing function to overcome this difficulty; see [5,26,35].
Definition 2.1. The Chen-Mangasarian smoothing function is denoted by ˆϕ(g,h):ℜ2→ℜ and defined by ˆϕ(g,h)=g+h−√((g−h)^2). By introducing a smoothing parameter ˜ϵ∈ℜ into the smoothing function ˆϕ(g,h), we obtain the CHKS smoothing function
The Chen-Mangasarian smoothing function has the property that ˆϕ(g,h)=0 if and only if g≥0, h≥0, and gh=0, but it is non-differentiable at g=h=0. In contrast, the CHKS smoothing function has the property that ˆϕ(g,h,˜ϵ)=0 if and only if g≥0, h≥0, and gh=˜ϵ^2 for ˜ϵ≥0, and it is smooth with respect to g and h whenever ˜ϵ>0.
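As a concrete numerical check of these properties (an illustrative sketch, not code from the paper), the two functions can be written directly:

```python
import numpy as np

def phi_cm(g, h):
    """Chen-Mangasarian function: zero iff g >= 0, h >= 0 and g*h = 0.
    Equals 2*min(g, h); non-differentiable at g = h = 0."""
    return g + h - np.sqrt((g - h) ** 2)

def phi_chks(g, h, eps):
    """CHKS smoothing: zero iff g >= 0, h >= 0 and g*h = eps**2.
    Smooth in (g, h) for any eps > 0."""
    return g + h - np.sqrt((g - h) ** 2 + 4.0 * eps ** 2)
```

For example, phi_cm vanishes on the complementary pair (2, 0), while phi_chks vanishes (to rounding) on the perturbed pair (1, ˜ϵ²) with ˜ϵ = 10⁻³.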
Consequently, by using the CHKS smoothing function (2.3), the problem (2.2) can be approximated as follows:
The above smoothed nonlinear programming problem can be summarized as follows
where x=(y,z,λ)T∈ℜn with n=n1+n2+m2, and ce(x)=[∇zfl(y,z)+∇zcl(y,z)λ, λj−clj−√((λj+clj)^2+4˜ϵ^2)], j=1,...,m2. The functions fu(x):ℜn→ℜ, cu(x):ℜn→ℜm1, and ce(x):ℜn→ℜn2+m2 are twice continuously differentiable and m1<n. We denote the feasible set E={x:βa≤x≤βb} and its strict interior int(E)={x:βa<x<βb}, where βa∈{ℜ∪{−∞}}n, βb∈{ℜ∪{∞}}n, and βa<βb.
Several methods have been suggested to solve the smoothed nonlinear programming problem (2.5); see [11,12,16,17,19]. The nonmonotone active-set interior-point trust-region algorithm is proposed in this paper to solve problem (2.5), and a detailed description of this algorithm is given in the following section.
3. Nonmonotone active-set interior-point trust-region algorithm
In this section, we first offer a detailed description of the active-set strategy with the penalty method used to convert problem (2.5) into an equality-constrained optimization problem with bounded variables. Second, the basic steps of Newton's interior-point method for solving the equality-constrained optimization problem are presented. Third, the main steps of the nonmonotone trust-region algorithm are presented. Finally, the main steps for solving problem (2.1) are introduced.
3.1. An active-set strategy
Motivated by the active-set strategy proposed in [8] and used in [10,11,12,13,14], we define a 0-1 diagonal matrix D(x)∈ℜm1×m1, whose diagonal entries are defined as follows:
Using the previous matrix, problem (2.5) is transformed into the following equality-constrained optimization problem.
The previous problem is transformed into the following problem by using a penalty method
where σu is a positive parameter.
3.2. Newton's interior-point method
Motivated by the interior point method in [6], we let L(x,μe,λa,λb) be a Lagrangian function associated with problem (3.2) and it is defined as follows
where
and
such that μe, λa, and λb represent the Lagrange multiplier vectors associated with the equality constraint ce(x) and the inequality constraints (x−βa) and (βb−x), respectively. A point x∗∈E is a local minimizer of problem (3.2) if there exist multiplier vectors μe∗∈ℜm1, λa∗∈ℜn+, and λb∗∈ℜn+ such that (x∗,μe∗,λa∗,λb∗) satisfies the following Karush-Kuhn-Tucker conditions,
where
and
Let V(x) be a diagonal matrix whose diagonal elements are as follows:
For more details see [11,12,20]. Using the scaling matrix V(x), conditions (3.6)–(3.9) are equivalent to the following nonlinear system,
For the following reasons, the nonlinear systems (3.13) and (3.14) are continuous but not everywhere differentiable.
● It may be non-differentiable when v(j)=0. To overcome this problem, we restrict x∈int(E).
● It may be non-differentiable when v(j) has an infinite upper bound and a finite lower bound, and (∇xℓ(x∗,μe∗;σu∗))(j)=0. To overcome this problem, we define a vector θ(x) whose components θ(j)(x)=∂((v(j))^2)/∂x(j), j=1,...,n, are defined as follows
If we use Newton's method to solve the nonlinear systems (3.13) and (3.14), we get
where
and H is the Hessian of the Lagrangian function (3.4) or an approximation to it.
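For intuition about the scaling matrix V(x) used above: in the Coleman-Li tradition of the references cited with it ([8,11,12]), the scaling component v(j) is typically the distance from x(j) to the bound toward which the negative gradient points. The sketch below is an assumed, standard variant for illustration only, not necessarily the paper's exact definition (3.12):

```python
import numpy as np

def coleman_li_v(x, grad, lo, hi):
    """Coleman-Li-style scaling vector (assumed standard variant):
    distance to the bound the negative gradient points toward,
    or 1 when that bound is infinite. Strictly positive on int(E),
    so diag(v) is nonsingular there."""
    v = np.empty_like(x)
    for j in range(x.size):
        if grad[j] < 0.0:   # step would move x[j] up, toward the upper bound
            v[j] = hi[j] - x[j] if np.isfinite(hi[j]) else 1.0
        else:               # step would move x[j] down, toward the lower bound
            v[j] = x[j] - lo[j] if np.isfinite(lo[j]) else 1.0
    return v
```

With this choice v(j) → 0 as x(j) approaches the relevant bound, which is exactly why restricting x∈int(E) keeps V(x) nonsingular.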
Restricting x∈int(E) is necessary to ensure that the matrix V(x) is nonsingular. Therefore, setting Δx=V(x)s in both Eqs (3.16) and (3.17) and multiplying both sides of Eq (3.16) by V−1(x), we get
It should be noted that the step sk produced by the systems (3.19) and (3.20) is equivalent to the step produced by solving the following quadratic programming subproblem,
where
and
While Newton's approach has the advantage of being quadratically convergent under reasonable assumptions, it also has the disadvantage of requiring that the starting point be close to the solution. The nonmonotone trust-region globalization approach is used to ensure convergence from any starting point. The nonmonotone trust-region globalization strategy is a crucial method for solving smooth nonlinear unconstrained or constrained optimization problems and can produce strong global convergence. In the following section, we present the main steps of the nonmonotone trust-region algorithm used to solve the quadratic subproblem (3.21).
3.3. The nonmonotone trust-region algorithm
The trust-region subproblem associated with the problem (3.21) is given by
where δk is the radius of the trust-region.
Because the hyperplane of the linearized constraints ce(x)+V(x)∇ce(x)Ts=0 may not intersect the region ‖s‖≤δk, subproblem (3.26) may be infeasible. Even if they do intersect, there is no guarantee that this remains true when δk is reduced; see [7]. To overcome this problem, we use a reduced Hessian strategy. This strategy was suggested in [2,32] and subsequently implemented in [12,13,16,17,18,20]. It decomposes the step sk into two orthogonal components: the normal component snk, which improves feasibility, and the tangential component stk, which improves optimality. That is, sk=snk+stk and stk=˜Zkˉstk, where ˜Zk is a matrix whose columns form a basis for the null space of (Vk∇cek)T.
● To compute snk
The normal component snk is obtained by solving the following trust-region subproblem
for some ζ∈(0,1).
A conjugate gradient method [34], which is very cheap when the problem is large-scale and remains applicable when the Hessian is indefinite, is used to compute the normal component snk. The main steps of applying the conjugate gradient method to solve subproblem (3.25) are presented in the following algorithm.
Algorithm 3.1. (A conjugate gradient method to calculate snk)
Step 1. Set sn0=0, rn0=−Vk∇cekcek and pn0=rn0; pick ϵ>0.
Step 2. For i = 0, 1, .... do
Compute γni = (rniT rni)/(pniT Vk∇cek∇cekT Vk pni).
Compute τni such that ‖sni+τnipni‖=δk.
If γni≤0, or if γni>τni, then set snk=sni+τnipni and stop.
Otherwise set sni+1=sni+γnipni and
rni+1=rni−γniVk∇cek∇cTekVkpni.
If ‖rni+1‖/‖rn0‖≤ϵ,
set snk=sni+1 and stop.
Compute ˆβi = (rni+1T rni+1)/(rniT rni), and the new direction pni+1=rni+1+ˆβipni.
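Algorithm 3.1 is a standard Steihaug-style truncated conjugate gradient loop. In the compact sketch below, the matrix A stands for Vk∇cek and the vector c for cek (both placeholders introduced for this illustration):

```python
import numpy as np

def _boundary_tau(s, p, delta):
    """Positive root of ||s + tau*p|| = delta (the tau-step of Algorithm 3.1)."""
    a, b, c = p @ p, 2.0 * (s @ p), s @ s - delta ** 2
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

def cg_normal_step(A, c, delta, tol=1e-8, max_iter=200):
    """Truncated CG for min ||c + A.T @ s||^2  s.t. ||s|| <= delta.
    Stops on the trust-region boundary at non-positive curvature or when the
    step would leave the region, mirroring the gamma/tau tests above."""
    s = np.zeros(A.shape[0])
    r = -A @ c                       # r0 = -A c, residual of the normal equations
    p = r.copy()
    r0_norm = np.linalg.norm(r)
    for _ in range(max_iter):
        Bp = A @ (A.T @ p)           # curvature product (A A^T) p
        curv = p @ Bp
        if curv <= 0.0:              # non-positive curvature: go to the boundary
            return s + _boundary_tau(s, p, delta) * p
        gamma = (r @ r) / curv
        if np.linalg.norm(s + gamma * p) >= delta:   # overstep: clip to boundary
            return s + _boundary_tau(s, p, delta) * p
        s = s + gamma * p
        r_new = r - gamma * Bp
        if np.linalg.norm(r_new) <= tol * r0_norm:   # relative residual test
            return s
        p = r_new + ((r_new @ r_new) / (r @ r)) * p  # beta update, new direction
        r = r_new
    return s
```

The same skeleton, with the reduced Hessian ˜ZkT Bk ˜Zk in place of A AT, gives the tangential loop of Algorithm 3.2.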
Given snk, let q(Vksk) be the quadratic model of the function (3.5), defined as follows
● To compute stk
To compute the tangential component stk=˜Zkˉstk, solve the following trust-region subproblem
where ∇qk(Vksnk)=Vk∇xℓ(xk,μek;σuk)+Bksnk and Δk=√(δk^2−‖snk‖^2). The main steps involved in applying the conjugate gradient method to solve subproblem (3.27) are presented in the following algorithm.
Algorithm 3.2. (A conjugate gradient method to compute stk)
Step 1. Set ˉst0=0, rt0=−˜ZTk∇qk(Vksnk), and pt0=rt0.
Step 2. For i = 0, 1, .... do
Compute γti = (rtiT rti)/(ptiT ˜ZTk Bk ˜Zk pti).
Compute τti such that ‖ˉsti+τtipti‖=Δk.
If γti≤0, or if γti>τti, then set ˉstk=ˉsti+τtipti and stop.
Otherwise set ˉsti+1=ˉsti+γtipti and
rti+1=rti−γti˜ZTkBk˜Zkpti.
If ‖rti+1‖/‖rt0‖≤ϵ,
set ˉstk=ˉsti+1 and stop.
Compute ˉβi = (rti+1T rti+1)/(rtiT rti), and pti+1=rti+1+ˉβipti.
After computing sk, we set xk+1=xk+Vksk. To ensure that the matrix Vk is nonsingular, we need to guarantee that xk+1∈int(E). So, the damping parameter φk is required at every iteration k.
● To obtain the damping parameter φk
The following algorithm clarifies the fundamental steps required to obtain the damping parameter φk.
Algorithm 3.3. (Damping parameter φk)
Step 1. Compute the parameter ωk as follows
Step 2. Compute the parameter υk as follows
Step 3. Compute the damping parameter φk as follows
Step 4. Set xk+1=xk+φkVksk.
To determine whether the scaled step φkVksk will be accepted or not, we need a merit function that connects the objective function and the constraints such that progress in the merit function corresponds to progress in solving the problem. The following augmented Lagrangian function is used as a merit function,
where σe is the penalty parameter.
To test the scaled step, we use the following scheme to determine the Lagrange multiplier vector μek+1,
The following actual and predicted reductions must be defined to determine whether the point (xk+1,μek+1) will be accepted in the next iteration or not. The actual reduction in the merit function in moving from (xk,μek) to (xk+φkVksk,μek+1) is defined as
The predicted reduction Predk in the merit function (3.29) is defined as follows
where
To ensure that Predk≥0, the penalty parameter σek must be updated.
● To update the penalty parameter σek
To ensure that Predk≥0, we need to update the penalty parameter σek by using the following algorithm.
Algorithm 3.4. (Process to update σek)
If
set
where ˜b0>0 is a small fixed constant.
Else, set σek+1=σek.
End if.
For more details, see [12].
● To test φkVksk and update δk
Motivated by the nonmonotone trust-region strategy in [31], we define
where
and
such that 0≤ξmin≤ξk−1≤ξmax≤1 and
The procedure below introduces the trial step φkVksk for testing and updates the radius δk.
Algorithm 3.5. (Test φkVksk and update δk)
Step 0. Choose 0<α1<α2≤1, δmax>δmin, and 0<ζ1<1<ζ2.
Step 1. (Compute tk)
Compute ξk by using (3.38).
Compute Qk as defined in (3.37).
Compute Ck as defined in (3.36).
Compute tk = (Ck−Φ(xk+φkVksk,μek+1;σuk;σek))/Predk.
Step 2. (Update the trial step sk)
While tk<α1, or Predk≤0,
set δk=ζ1‖sk‖.
To evaluate a new step sk, go to Algorithms (3.1) and (3.2).
Step 3. (Update δ)
If α1≤tk<α2, then
set xk+1=xk+φkVksk.
δk+1=max(δmin,δk).
End if.
If tk≥α2, then
set xk+1=xk+φkVksk.
δk+1=min{max{δmin,ζ2δk},δmax}.
End if.
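The displays defining Qk and Ck in (3.36) and (3.37) are referenced but not reproduced here. Under the common Zhang-Hager-style assumption that Ck is a weighted running average of merit-function values (an assumption for this sketch, consistent with the bound 0≤ξmin≤ξk−1≤ξmax≤1 above), the recursion can be written as:

```python
def nonmonotone_update(C_prev, Q_prev, phi_new, xi):
    """Zhang-Hager-style running average (assumed form of (3.36)-(3.37)):
        Q_k = xi * Q_{k-1} + 1
        C_k = (xi * Q_{k-1} * C_{k-1} + phi_k) / Q_k
    xi = 0 recovers the monotone test C_k = phi_k; xi = 1 gives the plain
    average of all merit values seen so far."""
    Q = xi * Q_prev + 1.0
    C = (xi * Q_prev * C_prev + phi_new) / Q
    return C, Q
```

Replacing the current merit value by the average Ck in the ratio tk lets the method accept occasional uphill steps, which is what makes the trust-region test nonmonotone.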
● To update the parameter σuk
To update σuk, we use a scheme proposed in [39]. Let a tangential predicted decrease Tpred which is obtained by the tangential component stk be given by
The main steps used to update the parameter σuk are clarified in the following algorithm.
Algorithm 3.6. (Updating σuk)
Step 1. Set σu0=1.
Step 2. If
then σuk+1=σuk,
else, σuk+1=2σuk.
End if
Finally, the nonmonotone trust-region algorithm is terminated when either
or ‖sk‖≤ε2, for some ε1,ε2>0.
● Nonmonotone trust-region algorithm
In the algorithm that follows, we will outline the main steps of the nonmonotone trust-region algorithm in order to solve subproblem (3.21).
Algorithm 3.7. (The nonmonotone trust-region algorithm)
Step 0. Start with x0∈intE. Compute D0, V0, μe0, and θ0. Set σu0=1, σe0=1, and ˜b0=0.1.
Choose ε1>0 and ε2>0. Choose δmin, δmax, and δ0 such that δmin≤δ0≤δmax.
Choose α1, α2, ζ1, and ζ2 such that 0<ζ1<1<ζ2, and 0<α1<α2<1. Set k=0.
Step 1. (Termination)
If ‖˜ZTkVk∇xℓ(xk,μek)‖+‖Vk∇cukDkcuk‖+‖cek‖≤ε1, then stop the algorithm.
Step 2. (Computing step Vkφksk)
Evaluate the normal component step snk by using Algorithm (3.1).
Evaluate the tangential step ˉstk by using Algorithm (3.2).
Set sk=snk+˜Zkˉstk.
If ‖sk‖≤ε2, then stop the algorithm.
Else, compute the parameter φk by using (3.28).
Set xk+1=xk+Vkφksk.
End if.
Step 3. Evaluate Dk+1 which is defined by (3.1).
Step 4. (Updating the multiplier vector μek+1)
Computing the Lagrange multiplier vector μek+1 by using (3.30).
Step 5. (Updating the penalty parameter σe)
Using Algorithm (3.4) to update the penalty parameter σek.
Step 6. Compute tk as defined in (3.35) by using both Qk as defined in (3.37) and Ck as defined in (3.36).
Using Algorithm (3.5) to test the scaling step φkVksk and update the radius of the trust-region.
Step 7. Updating the parameter σuk using Algorithm (3.6).
Step 8. Utilize (3.12) to evaluate the matrix Vk+1.
Step 9. Set k=k+1 and go to Step 1.
The global convergence theory for Algorithm (3.7) is similar to the global convergence theory of the algorithm presented in [12] when applied to a general nonlinear programming problem.
3.4. CHKS nonmonotone active-set interior-point trust-region algorithm
We outline the main steps of the nonmonotone active-set interior-point trust-region algorithm based on the CHKS smoothing function for solving problem (2.1) in the following algorithm.
Algorithm 3.8. (CHKS nonmonotone active-set interior-point trust-region algorithm)
Step 1. Reduce the nonlinear bilevel programming problem (2.1) to the single-objective optimization problem (2.2) by using the Karush-Kuhn-Tucker optimality conditions for the lower-level problem.
Step 2. Use the CHKS smoothing function to convert the non-differentiable problem (2.2) into the smooth problem (2.4).
Step 3. Utilize an active-set strategy to convert the nonlinearly constrained optimization problem (2.4) into the equality-constrained optimization problem with bounded variables (3.2).
Step 4. Utilize an interior-point method with the diagonal scaling matrix V(x) given in (3.12) to obtain the nonlinear system (3.13)-(3.14).
Step 5. Utilize Newton's method with the diagonal matrix θ defined in (3.15) to solve the nonlinear system (3.13)-(3.14) and obtain the equivalent subproblem (3.21).
Step 6. Solve subproblem (3.21) by using the nonmonotone trust-region method given by Algorithm (3.7).
4. Computational tests and comparisons
In this section, we introduce a wide variety of numeric NBLP problems to illustrate the validity of the proposed Algorithm (3.8) for solving problem (2.1).
4.1. Numerical examples
We present the MATLAB numerical results for Algorithm (3.8) with a starting point x0∈int(E). The following parameter settings were utilized: α1=10−2, α2=0.75, ζ1=0.5, ζ2=2, δmin=10−3, δ0=max(‖scp0‖,δmin), δmax=103δ0, ε1=10−10, and ε2=10−12.
To demonstrate the efficacy of the proposed Algorithm (3.8) in solving NBLP problem (2.1), fifteen benchmark instances from [4,22,27,30,33] were used to test it.
To check the consistency of the results, 10 independent runs with different initial starting points were carried out for each test example. Table 1 summarizes the statistical data for all examples and demonstrates that the results of the proposed Algorithm (3.8) are approximately equal or equal to those of the compared method proposed in [18] and of other algorithms in the literature.
For purposes of comparison, Table 2 reports the average number of iterations (iter), the average number of function evaluations (nfunc), and the average CPU time in seconds (CPUs) obtained via the method in [18]. The results in Table 2 demonstrate that the results of the proposed Algorithm (3.8) are comparable to or the best among those obtained via the algorithms in the literature. The results show that the proposed method is capable of handling the NBLP problem (2.1) whether the upper and lower levels are convex or not, and the computed results converge to optimal solutions that are approximately equal or equal to those stated in the literature. Finally, a comparison of the solutions obtained via the proposed Algorithm (3.8) with those found in the literature shows that the proposed Algorithm (3.8) can find the best solution to some problems in less time and with fewer function evaluations and iterations.
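To make the overall pipeline of Algorithm (3.8) concrete before the individual test problems, the sketch below applies the KKT-plus-CHKS reduction to a small hypothetical bilevel problem (illustrative data, not one of the benchmark instances) and hands the resulting smoothed NLP to a generic solver, SciPy's SLSQP, which here stands in for Algorithm (3.7):

```python
import numpy as np
from scipy.optimize import minimize

EPS = 1e-4  # CHKS smoothing parameter (illustrative value)

def phi(g, h, eps=EPS):
    """CHKS smoothing of the complementarity pair g >= 0, h >= 0, g*h = 0."""
    return g + h - np.sqrt((g - h) ** 2 + 4.0 * eps ** 2)

# Hypothetical bilevel problem:
#   upper:  min_y (y - 2)^2 + (z - 2)^2
#   lower:  min_z (z - 1)^2  s.t.  z - y <= 0
# Lower-level KKT: 2(z - 1) + lam = 0, lam >= 0, y - z >= 0, lam*(y - z) = 0,
# so the true solution is (y, z) = (2, 1) with lam ~ 0.
def upper_obj(x):
    y, z, lam = x
    return (y - 2.0) ** 2 + (z - 2.0) ** 2

cons = [
    {"type": "eq", "fun": lambda x: 2.0 * (x[1] - 1.0) + x[2]},  # stationarity
    {"type": "eq", "fun": lambda x: phi(x[2], x[0] - x[1])},     # smoothed compl.
]
res = minimize(upper_obj, x0=[1.5, 0.5, 0.1], method="SLSQP",
               bounds=[(None, None), (None, None), (0.0, None)],
               constraints=cons)
y_opt, z_opt, lam_opt = res.x   # expected to land near (2, 1, ~0)
```

The benchmark runs below follow exactly this reduction, with Algorithm (3.7) in place of SLSQP as the NLP solver.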
Test Problem 1 [28]:
Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 1 is converted to the following SNLP problem
Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y∗1,z∗1,z∗2)=(0.8438,0.7657,0), fu=−2.0769, and fl=−0.5863.
Test Problem 2 [28]:
Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 2 is converted to the following SNLP problem
Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y∗1,y∗2,z∗1,z∗2,z∗3)=(0.6111,0.3890,0,0,1.8339), fu=0.64013, and fl=1.6816.
Test Problem 3 [28]:
Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 3 is converted to the following SNLP problem
Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y∗1,y∗2,z∗1,z∗2)=(0.97,3.14,2.6,1.8), fu=−8.92, and fl=−6.05.
Test Problem 4 [28]:
Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 4 is converted to the following SNLP problem
Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y∗1,y∗2,z∗1,z∗2)=(0.5,0.5,0.5,0.5), fu=−1, and fl=0.
Test Problem 5 [28]:
Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 5 is converted to the following SNLP problem
Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y∗,z∗)=(9.9998,9.9998), fu=99.9996, and fl=3.7160e−07.
Test Problem 6 [28]:
Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 6 is converted to the following SNLP problem
Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y∗1,z∗1,z∗2)=(1.8884,0.89041,0), fu=−1.2067, and fl=7.6062.
Test Problem 7 [28]:
Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 7 is converted to the following SNLP problem
Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y∗,z∗)=(1,0), fu=17, and fl=1.
Test Problem 8 [28]:
Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 8 is converted to the following SNLP problem
Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y∗1,y∗2,z∗1,z∗2)=(0.75,0.75,0.75,0.75), fu=−2.25, and fl=0.
Test Problem 9 [23]:
Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 9 is converted to the following SNLP problem
Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y∗,z∗)=(11.25,5), fu=2250, and fl=197.753.
Test Problem 10 [23]:
Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 10 is converted to the following SNLP problem
Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y∗1,z∗1,z∗2)=(1,0,0), fu=0, and fl=0.
Test Problem 11 [40]:
Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 11 is converted to the following smooth nonlinear programming problem
Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y∗1,y∗2,z∗1,z∗2)=(25,30,4.9996,10), fu=5.0024, and fl=1.5000e−06.
Test Problem 12 [23]:
Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 12 is converted to the following SNLP problem
Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y∗,z∗)=(3,5), fu=9, and fl=0.
Test Problem 13 [40]:
Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 13 is converted to the following SNLP problem.
Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y∗1,y∗2,z∗1,z∗2)=(0,2,1.875,0.90625), fu=−18.679, and fl=−1.0156.
Test Problem 14 [40]:
Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 14 is converted to the following SNLP problem.
Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y∗,z∗)=(10.016,0.81967), fu=81.328, and fl=−0.33593.
Test Problem 15 [40]:
Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 15 is converted to the following SNLP problem,
Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y∗1,y∗2,z∗1,z∗2,z∗3)=(0,0.9,0,0.6,0.4), fu=−29.2, and fl=3.2.
4.2. Practical example
In this section, the efficacy of the proposed Algorithm (3.8) was tested by using a watershed water trading decision-making model based on bilevel programming [42]. The upper decision-maker is the watershed management agency which acts as the planning, controlling and coordinating center of a watershed, and each user is the lower decision-maker. The mathematical formulation for the watershed water trading decision-making model is formulated as follows:
where q_1 and q_2 are the actual water intake volumes of water consumer A and water consumer B respectively. l_1 and l_2 are the wastewater discharge volumes of two users respectively. r_1 and r_2 are the water rights of the two users respectively. g_1 and g_2 are the emission rights of two users respectively. w is the ecological water requirement of the watershed. t represents the water resource fees. Applying the Karush-Kuhn-Tucker condition to the lower-level problem and using the CHKS smoothing function, the watershed water trading decision-making model is converted to the following SNLP problem,
Applying Algorithm (3.7) to the above nonlinear programming problem we have the following data at ˜ϵ = 0.001: g_1 = 9, g_2 = 10.450, l_1 = 6.8755, l_2 = 9.0751, q_1 = 41.497, q_2 = 43, r_1 = 39.332, r_2 = 44.269, t = 0.3, w = 6.2338, f_1 = 12.17, f_2 = 19.043, and F = 59.055.
Table 3 compares the obtained results with those of the method in [25] and those of various existing methods (Ref). From Table 3, we can see that the solutions obtained via Algorithm (3.8) are better than those given in [25] and those of the various existing methods (Ref). Therefore, Algorithm (3.8) can also be applied to practical problems.
5. Conclusions
In this paper, Algorithm (3.8) has been presented to solve the NBLP problem (2.1). The Karush-Kuhn-Tucker conditions are used with the CHKS smoothing function to convert the NBLP problem (2.1) into a standard SNLP problem. The SNLP problem is solved by using the proposed nonmonotone active-set interior-point trust-region technique to obtain an approximately optimal solution to the NBLP problem. In the proposed technique, the smoothed nonlinear programming problem is transformed into an equality-constrained optimization problem with bounded variables by using an active-set approach and the penalty method. To solve the equality-constrained optimization problem with bounded variables, Newton's interior-point method is used. Since it is a local method, it might not converge if the starting point is far from a stationary point. To overcome this problem, the nonmonotone trust-region strategy is used; it is generally promising and efficient, and it overcomes the aforementioned shortcomings.
Applications to mathematical programs with equilibrium constraints are given to clarify the effectiveness of the proposed approach. Numerical results reflect the good behavior of the proposed technique, and the computed results converge to the optimal solutions. It is clear from the comparison between the solutions obtained by using Algorithm (3.8) and those of the algorithm in [18] that Algorithm (3.8) is able to find the optimal solutions of some problems with fewer iterations, fewer function evaluations, and less time.
Furthermore, the usefulness of the CHKS smoothing function with the nonmonotone active-set interior-point trust-region algorithm in efforts to solve NBLP problems was examined by using a real-world watershed trading decision-making problem. Numerical experiments show that the suggested method surpasses rival algorithms in terms of efficacy.
Use of AI tools declaration
The authors declare that they have not used artificial intelligence tools in the creation of this article.
Conflict of interest
The authors declare that they have no competing interests.