Abstract
The Marine Predators Algorithm (MPA) is a recently proposed nature-inspired metaheuristic algorithm. Its main inspiration is the extensive foraging strategies of marine organisms, namely Lévy movement and Brownian movement, both of which are random strategies. In this paper, we combine the Marine Predators Algorithm with the Teaching-learning-based optimization (TLBO) algorithm and propose a hybrid algorithm called the Teaching-learning-based Marine Predators Algorithm (TLMPA). The TLBO algorithm consists of two phases: the teacher phase and the learner phase. Combining these two phases with the original MPA enables the predators to obtain prey information for foraging by learning from teachers and by interactive learning, thus greatly increasing the encounter rate between predators and prey. In addition, effective mutation and crossover strategies are added to increase the diversity of predators and to avoid premature convergence. To evaluate its performance, the TLMPA algorithm was applied to the IEEE CEC-2017 benchmark functions and four engineering design problems. The experimental results show that the proposed TLMPA algorithm has the best overall performance and outperforms the other state-of-the-art metaheuristic algorithms in terms of the considered performance measures.
1. Introduction
In this paper, let H denote a real Hilbert space with inner product ⟨⋅,⋅⟩ and norm ‖⋅‖. Let M, R, and N stand for a nonempty closed convex subset of H, the set of real numbers, and the set of positive integers, respectively. Let G:H→H be a mapping. The variational inequality problem (VIP) is concerned with the problem of finding a point u⋆∈M such that
⟨Gu⋆,u−u⋆⟩≥0,∀u∈M.
(1.1)
We denote the solution set of VIP (1.1) by VI(M,G). The VIP, which Fichera [12] and Stampacchia [38] independently examined, is a crucial tool in both the applied and pure sciences. It has attracted the attention of many authors in recent years due to its wide range of applications to issues arising from partial differential equations, optimal control problems, saddle point problems, minimization problems, economics, engineering, and mathematical programming.
On the other hand, an element u∈M is said to be the fixed point of a mapping S:M→M, if Su=u. The set of all the fixed points of S is denoted by F(S)={u∈M:Su=u}. The study of the fixed point theory of nonexpansive mappings has been applied in several fields such as game theory, differential equations, signal processing, integral equations, convex optimization, and control theory [19]. There are several recent results in the literature on approximation of fixed points of nonexpansive mappings (see, for example, [8,9,26,27,28,29,34,35,36] and the references therein).
It is well-known that the VIP (1.1) can be reformulated as a fixed point problem as follows:
u⋆=PM(I−ηG)u⋆,
(1.2)
where PM:H→M is the metric projection and η>0. The extragradient method is a prominent method that has been used by many authors over the years to solve VIP. This method was first introduced by Korpelevich [21] in 1976. Given an initial point u0∈M, the sequence {um} generated by the extragradient method is as follows:
vm=PM(I−ηG)um,
um+1=PM(um−ηGvm), ∀m≥0,
(1.3)
where η∈(0,1/L), and G is an operator that is L-Lipschitz continuous and monotone. For VI(M,G)≠∅, the author showed that the sequence {um} defined by (1.3) converges weakly to an element in VI(M,G).
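To make the two-projection structure of (1.3) concrete, here is a minimal Python sketch, assuming a box-shaped feasible set so that PM reduces to componentwise clipping; the affine operator, the dimensions, and the step-size factor are illustrative choices, not taken from the paper.

```python
import numpy as np

def project_box(u, lo=-5.0, hi=5.0):
    # Metric projection onto the box M = {u : lo <= u_i <= hi} is componentwise clipping.
    return np.clip(u, lo, hi)

def extragradient(G, u0, eta, n_iter=1000, tol=1e-8):
    """Korpelevich's extragradient method (1.3): two projections per iteration."""
    u = u0.copy()
    for _ in range(n_iter):
        v = project_box(u - eta * G(u))        # v_m = P_M(u_m - eta*G(u_m))
        u_next = project_box(u - eta * G(v))   # u_{m+1} = P_M(u_m - eta*G(v_m))
        if np.linalg.norm(u_next - u) <= tol:
            return u_next
        u = u_next
    return u

# Illustrative monotone affine operator G(u) = Qu with Q positive semidefinite.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
Q = A @ A.T
G = lambda u: Q @ u
eta = 0.9 / np.linalg.norm(Q, 2)               # eta in (0, 1/L) with L = ||Q||
print(extragradient(G, rng.standard_normal(20), eta)[:3])
```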
The extragradient method's main flaw is its iterative requirement to compute two projections onto the feasible set M. In fact, if M has a complex structure, this might have an impact on how efficiently the method computes. In recent years, several authors have paid a great deal of attention to overcoming this restriction (see, for example, [6,7,11,16,48]). In order to address this drawback of the extragradient method, in 1997, He [16] introduced a method that requires only a single projection per iteration. This method is known as the projection and contraction method and it is given as follows:
The author showed that the sequence {um} generated by (1.4) converges weakly to a unique solution of VIP (1.1). The subgradient extragradient method, which was developed by Censor et al. [6,7,11], is another effective strategy for addressing the limitation of the extragradient method, and it is defined as follows:
vm=PM(um−ηGum),
Tm={w∈H:⟨um−ηGum−vm,w−vm⟩≤0},
um+1=PTm(um−ηGvm),
(1.5)
where η∈(0,1/L), and G is an L-Lipschitz continuous and monotone operator. The main idea in this method is that a projection onto a specially constructed half-space replaces the second projection onto M of the extragradient method, and this significantly reduces the difficulty of the calculation. The authors showed that if VI(M,G)≠∅, the sequence {um} defined by (1.5) converges weakly to a point in VI(M,G).
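Part of the appeal of this method is that the projection onto the half-space Tm has a simple closed form, so only one projection onto M is needed per iteration. A minimal sketch of that closed form (the function name and arguments are illustrative):

```python
import numpy as np

def project_halfspace(u, a, b):
    """Projection of u onto the half-space T = {x : <a, x> <= b}.
    It is the identity when u already lies in T; otherwise it is a single
    rank-one correction, so no inner optimization problem has to be solved
    (unlike a projection onto a general closed convex set M)."""
    viol = a @ u - b
    if viol <= 0:
        return u
    return u - (viol / (a @ a)) * a

# For T_m = {x : <s_m - eta_m*G(s_m) - w_m, x - w_m> <= 0}, take
#   a = s_m - eta_m*G(s_m) - w_m   and   b = <a, w_m>.
```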
Furthermore, the notion of the inertial extrapolation technique is based upon a discrete analogue of a second order dissipative dynamical system and it is known as an acceleration process of iterative methods. It was first developed in [37] to solve smooth convex minimization problems. For some years now, the inertial techniques have been widely adopted by many authors to improve the convergence rate of various iterative algorithms for solving several kinds of optimization problems (see, for example, [1,17,30,31,32,41,44,45,46,55]).
It is worth noting that the study of the problem involving the approximation of the common solution of the fixed point problem (FPP) and the VIP plays a significant role in mathematical models whose constraints can be expressed as FPPs and VIPs. This happens in real-world applications such as image recovery, signal processing, network resource allocation, and composite site reduction (see, for example, [2,14,18,22,24,25,33,51] and the references therein).
Very recently, Thong and Hieu [43] introduced two modified subgradient extragradient methods with a line search process for solving the VIP with an L-Lipschitz continuous and monotone operator G and the FPP involving a quasi-nonexpansive mapping S such that I−S is demiclosed at zero. Under appropriate assumptions, the authors showed that the sequences generated by their algorithms converge weakly to points in F(S)∩VI(M,G).
We note that Thong and Hieu [43] only proved weak convergence results for their algorithms. According to Bauschke and Combettes [3], for the solution of optimization problems, the strong convergence of iterative methods is more desirable than their weak convergence counterparts. Furthermore, we observe that Thong and Hieu [43] employed an Armijo-type line-search step size in their algorithms in order to enable them to operate without requiring prior knowledge of the Lipschitz constant of the operators. However, the use of Armijo-type step sizes may cause the considered methods to perform multiple calculations of projections onto the feasible set per iteration. To overcome this limitation, Liu and Yang [23] developed an adaptive step-size criterion, which only needs some previously given information to complete the step-size calculation.
As far as we know, there is no result in the literature involving the subgradient extragradient method with double inertial extrapolations for finding the common solution of VIP and FPP in real Hilbert spaces. Due to the importance of common solutions of VIP and FPP to some real-world problems, it is natural to ask the following question:
Is it possible to construct double inertial subgradient extragradient-type algorithms with a new step size for finding the common solution of the VIP and the FPP?
One of the purposes of this article is to give an affirmative answer to the above question. Motivated by the ongoing research in these directions, we propose some modified subgradient extragradient methods with a new step size. These proposed methods combine the original subgradient extragradient method, the viscosity method, and the projection and contraction method. We prove that our new methods converge strongly to the common solutions of VIPs involving pseudo-monotone mappings and FPPs involving quasi-nonexpansive mappings S for which I−S is demiclosed at zero, in real Hilbert spaces. The following are further contributions made in this research:
● Our algorithms do not need any Armijo-type line-search techniques. Rather, they use a new self-adaptive step-size technique, which generates a non-monotonic sequence of step sizes. This step size is formulated so that it reduces the dependence of the algorithms on the initial step size. The numerical experiments conducted show that the proposed step size is more efficient and ensures that our methods require less computation time than many methods in the literature that rely on an Armijo-type line-search technique.
● Our step size properly includes those in [23,41,50].
● Our algorithms are constructed to approximate the common solution of VIPs involving pseudo-monotone mappings and FPPs involving quasi-nonexpansive mappings. Since the class of pseudo-monotone mappings is more general than the class of monotone mappings, our results improve and generalize several results in the literature on finding common solutions of VIPs involving monotone mappings and FPPs involving quasi-nonexpansive mappings. Hence, our results improve on the results in [22,43,47] and several others.
● Our algorithms are embedded with double inertial terms to accelerate their convergence speed. Numerical tests showed that the proposed algorithms converge faster than the compared existing methods with single inertial term.
● We prove our strong convergence result under mild conditions imposed on the parameters. Our results are improvements on the weak convergence results in [43,47].
● To show the computational advantage of the suggested methods over some well-known methods in the literature, several numerical experiments are provided.
● We utilize our methods to solve some real-world problems, such as optimal control and signal processing problems.
● The proofs of our strong convergence results do not require the conventional "two cases" approach that has been employed by several authors in the literature to establish strong convergence results; see, for example, [5,30].
The article is organized as follows: In Section 2, some useful definitions and lemmas are recalled. The proposed algorithms and their convergence results are presented in Section 3. In Section 4, we conduct some numerical experiments to show the efficiency of our proposed algorithms over several well-known methods. In Section 5, we consider the application of our algorithms to the solution of an optimal control problem. In Section 6, we apply our methods to an image recovery problem, and in Section 7, we give a summary of the basic contributions of this work.
2. Preliminaries
In what follows, we denote the weak convergence of the sequence {um} to u by um⇀u as m→∞, and the strong convergence of the sequence {um} to u is denoted by um→u as m→∞.
Next, the following definitions and lemmas will be recalled. Let G:H→H be an operator, then G is called:
(a1) contraction if there exists a constant k∈[0,1) such that
‖Gu−Gv‖≤k‖u−v‖,∀u,v∈H;
(a2)L-Lipschitz continuous, if L>0 exists with
‖Gu−Gv‖≤L‖u−v‖,∀u,v∈H.
If L=1, then G becomes a nonexpansive mapping;
(a3) Quasi-nonexpansive, if F(G)≠∅ such that
‖Gu−u⋆‖≤‖u−u⋆‖,∀u∈H,u⋆∈F(G);
(a4)α-strongly monotone, if there exists a constant α>0 such that
⟨Gu−Gv,u−v⟩≥α‖u−v‖2,∀u,v∈H;
(a5) Monotone, if
⟨Gu−Gv,u−v⟩≥0,∀u,v∈H;
(a6) Pseudo-monotone, if
⟨Gu,v−u⟩≥0⟹⟨Gv,v−u⟩≥0,∀u,v∈H;
(a7) Sequentially weakly continuous, if for any sequence {um} which converges weakly to u, then the sequence {Gum} weakly converges to Gu.
Lemma 2.1.[15] Let H be a real Hilbert space and M a nonempty closed convex subset of H. Suppose u∈H and v∈M, then v=PMu⟺⟨u−v,v−w⟩≥0, ∀w∈M.
Lemma 2.2.[15] Let M be a closed convex subset of a real Hilbert space H. If u∈H, then
(i) ‖PMu−PMv‖2≤⟨PMu−PMv,u−v⟩,∀v∈H;
(ii) ⟨(I−PM)u−(I−PM)v,u−v⟩≥‖(I−PM)u−(I−PM)v‖2,∀v∈H;
(iii)‖PMu−v‖2≤‖u−v‖2−‖u−PMu‖2,∀v∈H.
Lemma 2.3. For all u,v,w∈H and α,β,δ∈[0,1] with α+β+δ=1, the following identity holds in Hilbert spaces:
‖αu+βv+δw‖2 = α‖u‖2 + β‖v‖2 + δ‖w‖2 − αβ‖u−v‖2 − αδ‖u−w‖2 − βδ‖v−w‖2.
Lemma 2.4. [15] Let G:H→H be a nonlinear operator such that F(G)≠∅. Then I−G is said to be demiclosed at zero if for any sequence {um} in H, the following implication holds:
um⇀uand(I−G)um→0⟹u∈F(G).
Lemma 2.5.[52] Let {am} be a sequence of nonnegative real numbers such that
am+1≤(1−νm)am+νmbm,∀m≥1,
where {νm}⊂(0,1) with ∑∞m=0νm=∞. If lim supk→∞ bmk ≤ 0 for every subsequence {amk} of {am} satisfying
lim infk→∞(amk+1−amk)≥0,
then limm→∞am=0.
3. Main results
In this section, we introduce three new double inertial subgradient extragradient-type algorithms for solving the VIP and the FPP. In order to establish our main results, we assume that the following conditions are fulfilled:
(C1) The feasible set M is nonempty, closed and convex.
(C2) The mapping G:H→H is pseudo-monotone and L-Lipschitz continuous.
(C3) The solution set F(S)∩VI(M,G)≠∅.
(C4) The mapping G is sequentially weakly continuous on M.
(C5) The mappings K,J:H→H are non-expansive.
(C6) The mapping S:H→H is quasi-nonexpansive such that I−S is demiclosed at zero.
(C7) The mapping f:H→H is a contraction with constant k∈[0,1).
(C8) Let {αm}⊂(0,1) and {βm},{γm}⊂[a,b]⊂(0,1) be such that αm+βm+γm=1, limm→∞αm=0, ∑∞m=1αm=∞, and limm→∞ϵm/αm=0=limm→∞ξm/αm, where {ϵm} and {ξm} are positive real sequences.
(C9) Let {pm},{qm}⊂[0,∞) and {hm}⊂[1,∞) be such that ∑∞m=0pm<∞, limm→∞qm=0, and limm→∞hm=1.
Algorithm 3.1.
Initialization: Choose η1>0, ϕ>0, θ>0, ρ∈(0,2), μ∈(0,1) and let g0,g1∈H be arbitrary.
Iterative Steps: Given the iterates um−1 and um (m≥1), calculate um+1 as follows:
Step 1: Choose ϕm and θm such that ϕm∈[0,ˉϕm] and θm∈[0,ˉθm], where
ˉϕm = min{(m−1)/(m+ϕ−1), ϵm/‖um−um−1‖} if um≠um−1, and ˉϕm = (m−1)/(m+ϕ−1) otherwise. (3.1)
ˉθm = min{(m−1)/(m+θ−1), ξm/‖um−um−1‖} if um≠um−1, and ˉθm = (m−1)/(m+θ−1) otherwise. (3.2)
Step 2: Set
sm=um+ϕm(Kum−Kum−1), (3.3)
rm=um+θm(Jum−Jum−1), (3.4)
and compute
wm=PM(sm−ηmGsm). (3.5)
If sm=wm or Gsm=0, stop; sm is a solution of the VIP. Otherwise, do Step 3.
Step 3: Compute
zm=PTm(sm−ρηmδmGwm), (3.6)
where
Tm={u∈H:⟨sm−ηmGsm−wm,u−wm⟩≤0}, (3.7)
δm = ⟨sm−wm,vm⟩/‖vm‖2 if vm≠0, and δm = 0 otherwise, (3.8)
and
vm=sm−wm−ηm(Gsm−Gwm). (3.9)
Step 4: Compute
um+1=αmf(rm)+βmzm+γmSzm. (3.10)
Update
ηm+1 = min{(qm+hmμ)‖sm−wm‖/‖Gsm−Gwm‖, ηm+pm} if Gsm≠Gwm, and ηm+1 = ηm+pm otherwise. (3.11)
Set m:=m+1 and go back to Step 1.
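For readers who prefer pseudocode, the following Python sketch walks through one pass of Steps 1–4 of Algorithm 3.1. It assumes the feasible set is simple enough that project_M is available in closed form; the callables G, f, S, K, J and the parameter sequences collected in params are placeholders supplied by the user, not prescriptions from the paper.

```python
import numpy as np

def one_iteration(u_prev, u, eta, m, G, f, S, K, J, project_M, params):
    """One pass through Steps 1-4 of Algorithm 3.1 (illustrative sketch)."""
    phi, theta, rho, mu = params["phi"], params["theta"], params["rho"], params["mu"]
    eps_m, xi_m = params["eps"](m), params["xi"](m)
    alpha_m, beta_m, gamma_m = params["alpha"](m), params["beta"](m), params["gamma"](m)
    q_m, h_m, p_m = params["q"](m), params["h"](m), params["p"](m)

    # Step 1: inertial parameters phi_m, theta_m bounded as in (3.1)-(3.2).
    diff = np.linalg.norm(u - u_prev)
    cap_phi = (m - 1) / (m + phi - 1)
    phi_m = min(cap_phi, eps_m / diff) if diff > 0 else cap_phi
    cap_theta = (m - 1) / (m + theta - 1)
    theta_m = min(cap_theta, xi_m / diff) if diff > 0 else cap_theta

    # Step 2: double inertial extrapolations (3.3)-(3.4) and projection step (3.5).
    s = u + phi_m * (K(u) - K(u_prev))
    r = u + theta_m * (J(u) - J(u_prev))
    w = project_M(s - eta * G(s))

    # Step 3: projection onto the half-space T_m, see (3.6)-(3.9).
    v = s - w - eta * (G(s) - G(w))
    delta = (s - w) @ v / (v @ v) if v @ v > 0 else 0.0
    a = s - eta * G(s) - w                     # T_m = {x : <a, x - w> <= 0}
    y = s - rho * eta * delta * G(w)
    viol = a @ (y - w)
    z = y - (viol / (a @ a)) * a if viol > 0 and a @ a > 0 else y

    # Step 4: viscosity-type combination (3.10) and step-size update (3.11).
    u_next = alpha_m * f(r) + beta_m * z + gamma_m * S(z)
    gdiff = np.linalg.norm(G(s) - G(w))
    eta_next = (min((q_m + h_m * mu) * np.linalg.norm(s - w) / gdiff, eta + p_m)
                if gdiff > 0 else eta + p_m)
    return u_next, eta_next
```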
Remark 3.1. We note the following in Algorithm 3.1:
(i) It is not hard to see from (3.1), (3.2), and condition (C8) that
limm→∞ϕm‖um−um−1‖=limm→∞θm‖um−um−1‖=0
and
limm→∞(ϕm/αm)‖um−um−1‖=limm→∞(θm/αm)‖um−um−1‖=0.
(ii) In order to obtain larger step sizes, we introduce the sequences {qm} and {hm} in (3.11) to relax the parameter μ. Such relaxation parameters can often improve the numerical performance of algorithms; see [10]. If qm=0 in (3.11), then {ηm} becomes the step size in [41]. If hm=1 in (3.11), then {ηm} becomes that in [50]. If qm=0 and hm=1 in (3.11), then the step size {ηm} reduces to that in [23]. Lastly, if qm=pm=0 and hm=1, {ηm} reduces to the step sizes used by many authors in the literature (see, for example, [13,42,53,54]).
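To make the special cases in (ii) concrete, here is a small sketch of the update rule (3.11); passing q_m = 0, h_m = 1, or p_m = 0 recovers the earlier step sizes mentioned above. The function and argument names are illustrative.

```python
import numpy as np

def update_eta(eta_m, s_m, w_m, Gs, Gw, mu, q_m=0.0, h_m=1.0, p_m=0.0):
    """Step-size rule (3.11). With q_m = 0 it reduces to the rule in [41],
    with h_m = 1 to the rule in [50], with q_m = 0 and h_m = 1 to [23],
    and with q_m = p_m = 0, h_m = 1 to the classical non-increasing rule."""
    denom = np.linalg.norm(Gs - Gw)
    if denom == 0:
        return eta_m + p_m
    return min((q_m + h_m * mu) * np.linalg.norm(s_m - w_m) / denom, eta_m + p_m)
```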
We now establish the following lemmas that will be useful in proving our strong convergence theorems.
Lemma 3.1. Suppose that conditions (C3) and (C4) are fulfilled and let {ηm} be the sequence generated by (3.11). Then {ηm} is well-defined and limm→∞ηm=η∈[min{μ/L,η1}, η1+∑∞m=1pm].
Proof. Since G is Lipschitz continuous with L>0, qm≥0 and hm≥1, by (3.11), if Gsm≠Gwm, we have
(qm+hmμ)‖sm−wm‖/‖Gsm−Gwm‖ ≥ (qm+hmμ)‖sm−wm‖/(L‖sm−wm‖) = (qm+hmμ)/L ≥ μ/L,
so that ηm+1 ≥ min{μ/L, ηm+pm}.
The remaining part of the proof follows the same lines as the proof of Lemma 3.1 in [50] and is therefore omitted. □
Lemma 3.2. Let {sm} and {wm} be two sequences generated by Algorithm 3.1, and suppose that conditions (C1)–(C4) are fulfilled. If there exists a subsequence {smk} of {sm} such that smk⇀v⋆∈H and limk→∞‖smk−wmk‖=0, then v⋆∈VI(M,G).
Proof. Since wmk=PM(smk−ηmkGsmk), by applying Lemma 2.1, we have
Since smk⇀v⋆, the sequence {smk} is bounded, and since G is L-Lipschitz continuous on H, {Gsmk} is also bounded. Again, since limk→∞‖smk−wmk‖=0, {wmk} is also bounded and ηmk≥min{μ/L,η1}. From (3.12), we have
Since limk→∞‖smk−wmk‖=0 and G is L-Lipschitz continuous on H, we have
limk→∞‖Gsmk−Gwmk‖=0.
(3.15)
By limk→∞‖smk−wmk‖=0, (3.13) and (3.15), (3.14) reduces to
lim infk→∞⟨Gwmk,u−wmk⟩≥0,∀u∈M.
(3.16)
Next, we show that v⋆∈VI(M,G). To show this, we choose a decreasing sequence {ξk} of positive numbers which approaches zero. For each k, let Nk stand for the smallest positive integer fulfilling the following inequality:
⟨Gwmj,u−wmj⟩+ξk≥0,∀j≥Nk.
(3.17)
It is not hard to see that the sequence {Nk} increases as {ξk} decreases. Moreover, since {wNk}⊂M, for each k, we can assume that GwNk≠0 (otherwise, wNk is a solution). Putting
gNk=GwNk/‖GwNk‖2,
we get ⟨GwNk,gNk⟩=1, for each k. We can infer from (3.17) that for each k
⟨GwNk,u+ξkgNk−wNk⟩≥0.
Now, owing to the pseudo-monotonicity of G on H, we have
We now have to show that limk→∞ξkgNk=0. Indeed, by the fact that smk⇀v⋆ and limk→∞‖smk−wmk‖=0, we have wNk⇀v⋆ as k→∞. Since the norm mapping is sequentially weakly lower semicontinuous, we have
which implies that limk→∞ξkgNk=0. Now, owing to the fact that G is Lipschitz continuous, {wmk}, {gNk} are bounded, and limk→∞ξkgNk=0, then letting k→∞ in (3.18), we obtain
Next, the strong convergence theorem of Algorithm 3.1 is established as follows:
Theorem 3.1. Suppose that conditions (C1)–(C8) hold and {um} is the sequence generated by Algorithm 3.1. Then {um} converges strongly to an element u⋆∈F(S)∩VI(M,G), where u⋆=PF(S)∩VI(M,G)∘f(u⋆).
Proof. We divide the proof into four parts as follows:
where Γ8=supm∈N{‖um−u⋆‖,θ‖um−um−1‖} and Γ9=supm∈N{‖um−u⋆‖,ϕ‖um−um−1‖}.
Claim 4. The sequence {‖um−u⋆‖2} converges to zero. Indeed, from (3.52), Remark 3.1 and Lemma 2.5, it is enough to show that lim supk→∞⟨f(u⋆)−u⋆,umk+1−u⋆⟩≤0 for any subsequence {‖umk−u⋆‖2} of {‖um−u⋆‖2} fulfilling
lim infk→∞(‖umk+1−u⋆‖2−‖umk−u⋆‖2)≥0.
(3.56)
Now, we assume that {‖umk−u⋆‖2} is a subsequence of {‖um−u⋆‖2} such that (3.56) holds; then
Thus, we have smkj⇀q⋆ since limk→∞‖smk−umk‖=0. Since limk→∞‖smk−wmk‖=0, it follows from Lemma 3.2 that q⋆∈VI(M,G). From (3.63), it follows that zmkj⇀q⋆. Following the demiclosedness of I−S at zero as defined in Lemma 2.4, we know that q⋆∈F(S). Thus, q⋆∈F(S)∩VI(M,G). By combining (3.68), q⋆∈F(S) and u⋆=PF(S)∩VI(M,G)∘f(u⋆), we get
By Claim 3, Remark 3.1, (3.70), and Lemma 2.5, we obtain that limm→∞‖um−u⋆‖=0, and this completes the proof of Theorem 3.1. □
Next, we propose our second and third algorithms as in Algorithms 3.2 and 3.3, which differ slightly from Algorithm 3.1.
Algorithm 3.2.
Initialization: Choose η1>0,ϕ>0,θ>0,ρ∈(0,2),μ∈(0,1) and let g0,g1∈H be arbitrary.
Iterative Steps: Given the iterates um−1 and um (m≥1), calculate um+1 as follows:
Step 1: Choose ϕm and θm such that 0≤ϕm≤ˉϕm and 0≤θm≤ˉθm, where ˉϕm and ˉθm are as defined in (3.1) and (3.2).
Step 2: Set sm=um+ϕm(Kum−Kum−1), rm=um+θm(Jum−Jum−1), and compute wm=PM(sm−ηmGsm). If sm=wm or Gsm=0, stop; sm is a solution of the VIP. Otherwise, do Step 3.
Step 3: Compute zm=PTm(sm−ρηmδmGwm), where Tm, δm and vm are as defined in (3.7)–(3.9).
Step 4: Compute um+1=αmf(um)+βmzm+γmSzm. Update ηm+1 by (3.11). Set m:=m+1 and go back to Step 1.
Algorithm 3.3.
Initialization: Choose η1>0,ϕ>0,θ>0,ρ∈(0,2),μ∈(0,1) and let g0,g1∈H be arbitrary.
Iterative Steps: Given the iterates um−1 and um (m≥1), calculate um+1 as follows:
Step 1: Choose ϕm and θm such that 0≤ϕm≤ˉϕm and 0≤θm≤ˉθm, where ˉϕm and ˉθm are as defined in (3.1) and (3.2).
Step 2: Set sm=um+ϕm(Kum−Kum−1), rm=um+θm(Jum−Jum−1), and compute wm=PM(sm−ηmGsm). If sm=wm or Gsm=0, stop; sm is a solution of the VIP. Otherwise, do Step 3.
Step 3: Compute zm=PTm(sm−ρηmδmGwm), where Tm, δm and vm are as defined in (3.7)–(3.9).
Step 4: Compute um+1=αmf(sm)+βmzm+γmSzm. Update ηm+1 by (3.11). Set m:=m+1 and go back to Step 1.
Remark 3.2. In Algorithm 3.2, we replace the term f(rm) in (3.10) of Algorithm 3.1 with f(um). Also, in Algorithm 3.3, we replace the term f(rm) in (3.10) of Algorithm 3.1 with f(sm). Now, the strong convergence theorems of Algorithms 3.2 and 3.3 will be stated without proofs. Their proofs are very similar to that of Theorem 3.1. Hence, we leave the proofs for the reader to verify.
Theorem 3.2. Suppose that conditions (C1)–(C8) hold and {um} is the sequence generated by Algorithm 3.2. Then {um} converges strongly to an element u⋆∈F(S)∩VI(M,G), where u⋆=PF(S)∩VI(M,G)∘f(u⋆).
Theorem 3.3. Suppose that conditions (C1)–(C8) hold and {um} is the sequence generated by Algorithm 3.3. Then {um} converges strongly to an element u⋆∈F(S)∩VI(M,G), where u⋆=PF(S)∩VI(M,G)∘f(u⋆).
4. Numerical experiments
In this part of the work, we consider two numerical examples to demonstrate the computational efficiency of our Algorithms 3.1–3.3 (shortly, OAUAN Algs. 3.1, 3.2 and 3.3) over some existing modified algorithms, namely, Algorithms 1 and 2 of Thong and Hieu [43] (shortly, TH Alg. 1 and TH Alg. 2), Algorithm 2 of Tian and Tong [47] (shortly, TT Alg. 2), Algorithm 3.1 of Ogwo et al. [33] (shortly, OAM Alg. 3.1), Algorithm 3.1 of Godwin et al. [14] (shortly, GAMY Alg. 3.1), and Algorithm 3.1 of Maluleka et al. [24] (shortly, MUA Alg. 3.1). All numerical simulations were performed in MATLAB R2020b on a desktop PC with an Intel® Core™ i7-3540M CPU @ 3.00GHz × 4 and 400.00GB memory.
Example 4.1. Suppose that G:Rk→Rk (k=30,50,80,110) is defined by G(u)=Qu+q, where q∈Rk and Q=AAT+B+C, C is a k×k diagonal matrix whose diagonal entries are nonnegative (hence Q is positive symmetric definite), B is a k×k skew-symmetric matrix, and A is a k×k matrix. We define the feasible set M by
M={u∈Rk:−5≤ui≤5,i=1,⋯k}.
It is not hard to see that the mapping G is monotone and L-Lipschitz continuous with L=‖Q‖ (hence, G is pseudo-monotone). For q=0, the solution set is VI(M,G)={0}. On the other hand, let Su=(3/4)u sin‖u‖. Clearly, the only fixed point of S is 0, i.e., F(S)={0}. The mapping S is quasi-nonexpansive but not nonexpansive. Indeed, for k=1, we have
|Su−0| = |(3/4)u sin|u|| ≤ (3/4)|u| ≤ |u| = |u−0|, ∀u∈M.
Hence, S is quasi-nonexpansive. Moreover, if we take u=2π and v=3π/2, then we have
|Su−Sv| = |(6π/4)sin(2π) − (9π/8)sin(3π/2)| = 9π/8 > π/2 = |u−v|.
Therefore, S is not nonexpansive. Notice that I−S is demiclosed at 0 and F(S)∩VI(M,G)={0}≠∅. Furthermore, we take Ku=sin u, where for k>1, sin u=(sin u1, sin u2, …, sin uk)T, and Ju=u/2.
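A sketch of how a test problem of the kind used in Example 4.1 can be generated; the uniform sampling of the entries follows the description in this example, while the particular construction of the skew-symmetric part and the random seed are assumptions made for illustration.

```python
import numpy as np

def make_problem(k, rng):
    """Build Q = A A^T + B + C as in Example 4.1: B skew-symmetric, C nonnegative diagonal."""
    A = rng.uniform(1, 100, size=(k, k))
    B0 = rng.uniform(1, 100, size=(k, k))
    B = B0 - B0.T                               # skew-symmetric part
    C = np.diag(rng.uniform(1, 100, size=k))    # nonnegative diagonal entries
    Q = A @ A.T + B + C
    L = np.linalg.norm(Q, 2)                    # Lipschitz constant of G(u) = Qu
    G = lambda u: Q @ u                         # q = 0, so VI(M, G) = {0}
    project_M = lambda u: np.clip(u, -5, 5)     # box constraint of Example 4.1
    return G, L, project_M

rng = np.random.default_rng(1)
G, L, project_M = make_problem(50, rng)
```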
The parameters for all the algorithms are taken as follows:
● For Algorithms 3.1–3.3, we take η1=0.9, μ=0.4, αm=1/(2m+20), βm=γm=m/(2m+20), pm=1/(m+100)^1.1, qm=(m+1)/m, hm=1/(m+100), ϕ=0.6, θ=0.9, ρ=0.0001 and ϵm=1/(2m+1)^3.
● For TH Algs. 1 and 2, we take γ=2, l=0.5, τ1=0.8, αm=0.5, βm=0.5, and μ=0.6.
● For Algorithm 2 of Tian and Tong [47] (TT Alg.), we take αm=0.5, βm=0.5, μ=0.4 and λ1=17.
● For Algorithm 3.1 of Godwin et al. [14] (GAMY Alg. 3.1), we take α=4, λ1=0.5, θm=ˉθm, δ=0.4, c′(x)=2x, ϕm=(2m+1)/(5m+2), βm=2m/(3m+2), γ=1, γm=(2/(3m+1))^2, αm=2/(3m+1), μ=0.8, Dx=Tx=0.5x and f(x)=(1/3)x.
● For Algorithm 3.1 of Maluleka et al. [24] (MUA Alg. 3.1), we take θ=0.9, λ1=3.1, μm=1/(m+1)^2, αm=1/(m+1), βm=0.5 and ρ=0.5.
● For Algorithm 3.2 of Ogwo et al. [33] (OAM Alg. 3.1), we take α=3, λ1=0.5, αm=ˉαm, μ=0.4, βm=m/(m+10), γ1=0.01, τm=1/(m+1)^2, θm=1/(m+10), Dx=0.01x and f(x)=0.01x.
In this example, all entries of A, B and C are taken randomly from [1, 100]. We consider four different dimensions for k, Case I: k=50, Case II: k=100, Case III: k=300, Case IV: k=500. The initial values u1=u2 are chosen at random using randn(k,1) in MATLAB, and the stopping criterion is taken as ‖um+1−um‖≤10−8. The results of the numerical simulations are presented in Table 1 and Figures 1 and 2.
Table 1. Numerical results for the four dimensions considered in Example 4.1.
Example 4.2. Let H=ℓ2, i.e., H={u=(u1,u2,u3,⋯,ui,⋯):∑∞i=1|ui|2<+∞}. Let e,d∈R be such that d>e>d/2>0. Let M={u∈ℓ2:‖u‖≤e} and Gu=(d−‖u‖)u. Obviously, the solution set VI(M,G)={0}. Now, we show that G is L-Lipschitz continuous on H and pseudo-monotone on M. Indeed, for any u,v∈H, we have
Hence, G is Lipschitz continuous with L=d+2e. Now, let u,v∈M be such that ⟨Gu,v−u⟩>0, then we have (d−‖u‖)⟨u,v−u⟩>0. Since ‖u‖≤e≤d, we have ⟨u,v−u⟩>0. Hence,
This shows that G is a pseudo-monotone mapping. If we set e=3 and d=5, the projection formula is defined by
PM(u) = u if ‖u‖≤3, and PM(u) = 3u/‖u‖ otherwise. (4.1)
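A short sketch of the operator G and the projection formula (4.1) for Example 4.2 with e=3 and d=5, working on a finite-dimensional truncation of ℓ2 just as the example itself does below; the function names are illustrative.

```python
import numpy as np

d, e = 5.0, 3.0

def G(u):
    # G(u) = (d - ||u||) u, pseudo-monotone and Lipschitz with L = d + 2e on M.
    return (d - np.linalg.norm(u)) * u

def project_ball(u, radius=e):
    # Formula (4.1): identity inside the ball, radial scaling onto the boundary outside.
    nrm = np.linalg.norm(u)
    return u if nrm <= radius else (radius / nrm) * u
```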
Now, let Su=u/2. It is not hard to show that the mapping S is nonexpansive (hence, quasi-nonexpansive). We see that F(S)={0}≠∅. Thus, F(S)∩VI(M,G)≠∅. We take the stopping criterion as ‖um+1−um‖≤10−8. Furthermore, we maintain the same control parameters as in Example 4.1. Since we cannot sum to infinity in MATLAB, we considered the subspace ℓ20 of ℓ2 consisting of sequences with finitely many nonzero terms, defined by
ℓ20(R)={u1∈ℓ2:u1=(u1,1,u1,2,u1,3,…,u1,i,0,0,…)}, for some i≥1.
The first i components of the initial points are generated randomly, considering the following cases for i: Case I: i=100, Case II: i=1,000, Case III: i=10,000, Case IV: i=100,000. We use the same control parameters as in the previous example for all the algorithms. The results of the numerical simulations are presented in Table 2 and Figures 3 and 4.
Table 2. Numerical results for the four dimensions considered in Example 4.2.
Remark 4.1. After conducting the numerical simulations in Examples 4.1 and 4.2, our proposed Algorithms 3.1–3.3 exhibited a competitive nature and potential when compared to existing algorithms. They outperformed Algorithms 1 and 2 of Thong and Hieu [43], Algorithm 2 of Tian and Tong [47], Algorithm 3.1 of Ogwo et al. [33], Algorithm 3.1 of Godwin et al. [14], and Algorithm 3.1 of Maluleka et al. [24] in terms of computational time and the number of iterations required to meet the specified stopping criteria, highlighting their superior performance.
5. Application to optimal control problems
In this section, the solution of a variational inequality problem arising from an optimal control problem is approximated by our Algorithm 3.1. Let 0<T∈R. We denote by L2([0,T],Rk) the Hilbert space of square-integrable, measurable vector functions s:[0,T]→Rk endowed with the inner product
⟨s,r⟩=∫T0⟨s(g),r(g)⟩dg,
and norm
‖s‖2=√⟨s,s⟩<∞.
Now, the following optimal control problem will be considered on [0, T]:
s∗(g)=argmin{ζ(s):s∈S},
(5.1)
supposing such a control exists. Note that S denotes the set of admissible controls, which takes the form of a k-dimensional box and consists of piecewise continuous functions:
In particular, the control can be a piecewise constant function (bang-bang).
The terminal objective can be expressed as:
ζ(s)=θ(u(T)),
where θ is a differentiable and convex function defined on the attainability set, and the trajectory u(g)∈L2([0,T],Rm) satisfies constraints in the form of a linear differential equation system:
˙u(g)=D(g)u(g)+B(g)s(g), u(0)=u0, g∈[0,T],
(5.2)
where D(g)∈Rm×m and B(g)∈Rm×k are matrices which are continuous for all g∈[0,T]. Using the Pontryagin maximum principle, we know that a function x∗∈L2([0,T]) exists such that the triple (u∗,x∗,s∗) solves the following system for a.e. g∈[0,T]:
˙u∗(g)=D(g)u∗(g)+B(g)s∗(g), u∗(0)=u0,
(5.3)
˙x∗(g)=−D(g)Tx∗(g), x∗(T)=▽θ(u∗(T)),
(5.4)
0∈B(g)Tx∗(g)+NS(s∗(g)),
(5.5)
where NS(s) is the normal cone to S at s defined by
NS(s) = {ℓ∈H:⟨ℓ,r−s⟩≤0, ∀r∈S} if s∈S, and NS(s) = ∅ if s∉S. (5.6)
Letting Fs(g)=B(g)Tx(g), where F is shown by Khoroshilova [20] to be the gradient of the objective cost function ζ, the inclusion (5.5) can be expressed as a variational inequality problem as follows:
⟨Fs∗,r−s∗⟩≥0,∀r∈S.
(5.7)
Next, we discretize the continuous functions and take a natural number N with the mesh size h=T/N. Furthermore, we identify any discretized control sN=(s0,s1,⋯,sN) with its piecewise constant extension:
sN(g)=sj,∀g∈[gj,gj+1),j=0,1,⋯,N−1.
Again, any discretized state uN=(u0,u1,⋯,uN) is identified with its piecewise linear interpolation
uN(g)=uj+((g−gj)/h)(uj+1−uj), g∈[gj,gj+1), j=0,1,⋯,N−1.
(5.8)
The same approach can be used to identify the co-state variable xN=(x0,x1,⋯,xN).
The system of ordinary differential equations (ODEs) (5.3) and (5.4) will be solved by the Euler method [49]
uNj+1=uNj+h[D(gj)uNj+B(gj)sNj], u(0)=0,
(5.9)
xNj=xNj+1+hD(gj)TxNj+1, x(N)=▽θ(u(N)).
(5.10)
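The two Euler sweeps (5.9) and (5.10) can be combined into a routine that returns the discretized gradient B(g)Tx(g) of the cost with respect to the control; the matrix-valued functions D, B and the terminal gradient grad_theta below are placeholders for whatever problem data the example specifies.

```python
import numpy as np

def control_gradient(s, D, B, grad_theta, u0, T):
    """Evaluate F s = B(g)^T x(g) for a piecewise-constant control s via (5.9)-(5.10).
    D(g), B(g): matrix-valued functions; grad_theta: gradient of the terminal cost."""
    N = s.shape[0]
    h = T / N
    g = np.linspace(0.0, T, N + 1)

    # Forward Euler sweep (5.9) for the state.
    u = np.zeros((N + 1, u0.size))
    u[0] = u0
    for j in range(N):
        u[j + 1] = u[j] + h * (D(g[j]) @ u[j] + B(g[j]) @ s[j])

    # Backward Euler sweep (5.10) for the co-state.
    x = np.zeros_like(u)
    x[N] = grad_theta(u[N])
    for j in range(N - 1, -1, -1):
        x[j] = x[j + 1] + h * D(g[j]).T @ x[j + 1]

    # Gradient of the cost with respect to the control values on each subinterval.
    return np.array([B(g[j]).T @ x[j] for j in range(N)])
```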
Next, we use Algorithm 3.1 to solve the problem in the following example:
The exact solution of the problem in Example 5.1 is
s∗(g) = 1 if g∈[0,1.2), and s∗(g) = −1 if g∈[1.2,2].
The initial controls s0(t)=s1(t) are randomly taken in [−1, 1]. For this, we use the same parameters defined in Example 4.1 and set Su=u/2. The stopping criterion for this section is ‖um+1−um‖≤10−7. The approximate optimal control and the corresponding trajectories produced by Algorithm 3.1 are shown in Figure 5.
Figure 5. Random initial control (green) and optimal control (purple) on the left and optimal trajectories on the right for Example 5.1 generated by Algorithm 3.1.
6. Application to image restoration problems
It is noticed that images are, in most cases, distorted by the process of acquisition. The purpose of the restoration technique for distorted images is to restore the original image from the noisy observation of it. The image restoration problem can be modeled as the following underdetermined system of linear equations:
v=Fu+w,
(6.1)
where F:RN→RM (M<N) is a bounded linear operator, u∈RN is the original image, and v∈RM is the observed image with noise w. It is well-known that solving the model (6.1) is equivalent to solving the following least absolute shrinkage and selection operator (LASSO) problem [39]:
minu∈RN{k‖u‖1+(1/2)‖v−Fu‖22},
(6.2)
where k>0. It is worth noting that, according to [40], one can recast the LASSO problem (6.2) as a variational inequality problem by letting Gu=FT(Fu−v). In this case, G is monotone (hence pseudo-monotone) and Lipschitz continuous with L=‖FTF‖.
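A minimal sketch of the operator obtained from this reformulation, together with its Lipschitz constant; the blur matrix F and the observation v are whatever the model (6.1) supplies.

```python
import numpy as np

def lasso_vip_operator(F, v):
    """Return G(u) = F^T (F u - v) and its Lipschitz constant L = ||F^T F||."""
    G = lambda u: F.T @ (F @ u - v)
    L = np.linalg.norm(F.T @ F, 2)   # spectral norm of F^T F
    return G, L
```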
Now, we compare the restoration efficiency of our suggested Algorithms 3.1–3.3 (shortly, OAUAN Algs. 3.1, 3.2 and 3.3) with Algorithms 1 and 2 of Thong and Hieu [43] (shortly, TH Alg. 1 and TH Alg. 2), Algorithm 2 of Tian and Tong [47] (shortly, TT Alg. 2), Algorithm 3.1 of Ogwo et al. [33] (shortly, OAM Alg. 3.1), Algorithm 3.1 of Godwin et al. [14] (shortly, GAMY Alg. 3.1), and Algorithm 3.1 of Maluleka et al. [24] (shortly, MUA Alg. 3.1). The test images are Austine and Peacock of sizes 289×350 and 245×245, respectively. The images went through a Gaussian blur of size 9×9 with standard deviation σ=4. The performances of the algorithms are measured via the signal-to-noise ratio (SNR) defined by
SNR=25log10(‖u‖2/‖u−u∗‖2),
(6.3)
where u∗ is the restored image and u is the original image. In this experiment, we maintain the same parameters used for all the algorithms in Example 4.1 with stopping criterion Em=‖um+1−um‖≤10−5. The numerical results for this experiment are shown in Figures 6–9 and Tables 3–6.
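The quality measure (6.3) translates directly into a one-line routine; the constant 25 is kept exactly as in the definition above.

```python
import numpy as np

def snr(u, u_star):
    """Signal-to-noise ratio (6.3): higher values indicate a better restoration."""
    return 25.0 * np.log10(np.linalg.norm(u) / np.linalg.norm(u - u_star))
```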
Figure 6. Austine's image deblurring by various algorithms.
It is well-known that the higher the SNR value of an algorithm, the better the quality of the image it restores. From Figures 6–9 and Tables 3–6, it is evident that our Algorithms 3.1–3.3 restored the blurred images better than Algorithms 1 and 2 of Thong and Hieu [43], Algorithm 2 of Tian and Tong [47], Algorithm 3.1 of Ogwo et al. [33], Algorithm 3.1 of Godwin et al. [14], and Algorithm 3.1 of Maluleka et al. [24]. Hence, our algorithms are more effective and applicable than many existing methods.
7. Conclusions
In this work, we have introduced three novel iterative algorithms for finding the common solution of a fixed point problem for a quasi-nonexpansive mapping and a pseudo-monotone variational inequality problem. Our algorithms embed double inertial steps, which accelerate their convergence rates. Numerical experiments have shown that our algorithms outperform several existing algorithms with a single inertial term or none. Further, we considered a new self-adaptive step-size technique that produces a non-monotonic sequence of step sizes while also properly incorporating a number of well-known step sizes. The step size is designed to lessen the algorithms' reliance on the initial step size. Numerical tests were performed, and the results showed that our step size is more effective and that it guarantees that our methods require less execution time. Our convergence results were obtained without imposing stringent conditions on the control parameters. The class of pseudo-monotone operators studied in this work is more general than the class of monotone operators studied in [43,47] and several other articles. To test the applicability and efficiency of our methods in solving real-world problems, we utilized them to solve optimal control and image restoration problems.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this work through the project number (PSAU/2023/01/8980).
Conflict of interest
The authors declare that they have no conflict of interest.
References
[1]
J. H. Holland, Genetic algorithms, Sci. Am., 267 (1992), 66-72.
[2]
J. Kennedy, R. Eberhart, Particle swarm optimization, Perth, WA, Australia: Proceedings of IEEE International Conference on Neural Networks, 1995.
[3]
R. Storn, K. Price, Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim., 11 (1997), 341-359. doi: 10.1023/A:1008202821328
[4]
K. V. Price, Differential evolution: A fast and simple numerical optimizer, Berkeley, CA, USA: Proceedings of North American Fuzzy Information Processing, 1996.
[5]
D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm, J. Global Optim., 39 (2007), 459-471. doi: 10.1007/s10898-007-9149-x
[6]
C. R. Hwang, Simulated annealing: Theory and applications, Acta Appl. Math., 12 (1988), 108- 111.
[7]
A. Faramarzi, M. Heidarinejad, S. Mirjalili, A. H. Gandomi, Marine Predators algorithm: A natureinspired metaheuristic, Expert Syst. Appl., 152 (2020), 113377. doi: 10.1016/j.eswa.2020.113377
[8]
S. Mirjalili, S. M. Mirjalili, A. Lewis, Grey wolf optimizer, Adv. Eng. Software, 69 (2014), 46-61.
[9]
S. Mirjalili, A. Lewis, The whale optimization algorithm, Adv. Eng. Software, 95 (2016), 51-67.
[10]
S. Mirjalili, A. H. Gandomi, S. Z. Mirjalili, S. Saremi, H. Faris, S. M. Mirjalili, Salp swarm algorithm: A bio-inspired optimizer for engineering design problems, Adv. Eng. Software, 114 (2017), 163-191. doi: 10.1016/j.advengsoft.2017.07.002
[11]
R. V. Rao, V. J. Savsani, D. P. Vakharia, Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems, Comput.-Aided Des., 43 (2011), 303-315. doi: 10.1016/j.cad.2010.12.015
[12]
D. H. Wolpert, W. G. Macready, No free lunch theorems for optimization, IEEE Trans. Evol. Comput., 1 (1997), 67-82. doi: 10.1109/4235.585893
[13]
M. Liu, X. Yao, Y. Li, Hybrid whale optimization algorithm enhanced with Lévy flight and differential evolution for job shop scheduling problems, Appl. Soft Comput., 87 (2020), 105954. doi: 10.1016/j.asoc.2019.105954
[14]
D. Tansui, A. Thammano, Hybrid nature-inspired optimization algorithm: Hydrozoan and sea turtle foraging algorithms for solving continuous optimization problems, IEEE Access, 8 (2020), 65780- 65800.
[15]
H. Garg, A hybrid GSA-GA algorithm for constrained optimization problems, Inf. Sci., 478 (2019), 499-523. doi: 10.1016/j.ins.2018.11.041
[16]
D. T. Le, D. K. Bui, T. D. Ngo, Q. H. Nguyen, H. Nguyen-Xuan, A novel hybrid method combining electromagnetism-like mechanism and firefly algorithms for constrained design optimization of discrete truss structures, Comput. Struct., 212 (2019), 20-42.
[17]
N. E. Humphries, N. Queiroz, J. R. M. Dyer, N. G. Pade, M. K. Musyl, K. M. Schaefer, et al., Environmental context explains Lévy and Brownian movement patterns of marine predators, Nature, 465 (2010), 1066-1069.
[18]
F. Bartumeus, J. Catalan, U. L. Fulco, M. L. Lyra, G. M. Viswanathan, Erratum: Optimizing the encounter rate in biological interactions: Lévy versus brownian strategies, Phys. Rev. Lett., 89 (2002), 109902. doi: 10.1103/PhysRevLett.89.109902
[19]
M. A. A. Al-qaness, A. A. Ewees, H. Fan, L. Abualigah, M. A. Elaziz, Marine Predators algorithm for forecasting confirmed cases of COVID-19 in Italy, USA, Iran and Korea, Int. J. Environ. Res. Public Health, 17 (2020), 3520.
[20]
D. Yousri, T. S. Babu, E. Beshr, M. B. Eteiba, D. Allam, A robust strategy based on marine predators algorithm for large scale photovoltaic array reconfiguration to mitigate the partial shading effect on the performance of PV system, IEEE Access, 8 (2020), 112407-112426.
[21]
M. Abdel-Basset, R. Mohamed, M. Elhoseny, R. K. Chakrabortty, M. Ryan, A hybrid COVID- 19 detection model using an improved Marine Predators algorithm and a ranking-based diversity reduction strategy, IEEE Access, 8 (2020), 79521-79540.
[22]
M. A. Elaziz, A. A. Ewees, D. Yousri, H. S. N. Alwerfali, Q. A. Awad, S. Lu, et al., An improved Marine Predators algorithm with fuzzy entropy for multi-level thresholding: Real world example of COVID-19 CT image segmentation, IEEE Access, 8 (2020), 125306-125330.
[23]
R. V. Rao, V. Patel, Multi-objective optimization of heat exchangers using a modified teachinglearning-based optimization algorithm, Appl. Math. Modell., 37 (2013), 1147-1162. doi: 10.1016/j.apm.2012.03.043
[24]
R. V. Rao, V. Patel, An improved Teaching-learning-based optimization algorithm for solving unconstrained optimization problems, Sci. Iran., 20 (2013), 710-720.
[25]
A. R. Yildiz, Optimization of multi-pass turning operations using hybrid teaching learning-based approach, Int. J. Adv. Manuf. Technol., 66 (2013), 1319-1326. doi: 10.1007/s00170-012-4410-y
[26]
K. Yu, X. Wang, Z. Wang, An improved Teaching-learning-based optimization algorithm for numerical and engineering optimization problems, J. Intell. Manuf., 27 (2016), 831-843. doi: 10.1007/s10845-014-0918-3
[27]
E. Uzlu, M. Kankal, A. Akpınar, T. Dede, Estimates of energy consumption in Turkey using neural networks with the Teaching-learning-based optimization algorithm, Energy, 75 (2014), 295-303.
[28]
V. Toǧan, Design of planar steel frames using teaching-learning based optimization, Eng. Struct., 34 (2012), 225-232.
[29]
R. V. Rao, V. D. Kalyankar, Parameter optimization of modern machining processes using teachinglearning-based optimization algorithm, Eng. Appl. Artif. Intell., 26 (2013), 524-531. doi: 10.1016/j.engappai.2012.06.007
[30]
M. Singh, B. K. Panigrahi, A. R. Abhyankar, S. Das, Optimal coordination of directional overcurrent relays using informative differential evolution algorithm, J. Comput. Sci., 5 (2014), 269-276. doi: 10.1016/j.jocs.2013.05.010
[31]
H. Bouchekara, M. A. Abido, M. Boucherma, Optimal power flow using Teaching-learning-based optimization technique, Electr. Power Syst. Res., 114 (2014), 49-59. doi: 10.1016/j.epsr.2014.03.032
[32]
G. M. Viswanathan, V. Afanasyev, S. V. Buldyrev, E. J. Murphy, P. A. Prince, H. E. Stanley, Lévy flight search patterns of wandering albatrosses, Nature, 381 (1996), 413-415.
[33]
J. D. Filmalter, L. Dagorn, P. D. Cowley, M. Taquet, First descriptions of the behavior of silky sharks, Carcharhinus falciformis, around drifting fish aggregating devices in the Indian Ocean, Bull. Mar. Sci., 87 (2011), 325-337.
[34]
E. Clark, Instrumental conditioning of lemon sharks, Science (New York, N.Y.), 130 (1959), 217-218.
[35]
L. A. Dugatkin, D. S. Wilson, The prerequisites for strategic behaviour in bluegill sunfish, Lepomis macrochirus, Anim. Behav., 44 (1992), 223-230.
[36]
V. Schluessel, H. Bleckmann, Spatial learning and memory retention in the grey bamboo shark (Chiloscyllium griseum), Zoology, 115 (2012), 346-353.
[37]
D. W. Zimmerman, B. D. Zumbo, Relative power of the Wil-coxon test, the Friedman test, and repeated-measures ANOVA on ranks, J. Exp. Educ., 62 (1993), 75-86.
[38]
S. Mirjalili, A. H. Gandomi, S. Z. Mirjalili, S. Saremi, H. Faris, S. M. Mirjalili, Salp swarm algorithm: A bio-inspired optimizer for engineering design problems, Adv. Eng. Software, 114 (2017), 163-191.
[39]
C. A. C. Coello, E. M. Montes, Constraint-handling in genetic algorithms through the use of dominance-based tournament selection, Adv. Eng. Inf., 16 (2002), 193-203. doi: 10.1016/S1474-0346(02)00011-3
[40]
E. Rashedi, H. Nezamabadi-pour, S. Saryazdi, GSA: A gravitational search algorithm, Inf. Sci., 179 (2009), 2232-2248. doi: 10.1016/j.ins.2009.03.004
[41]
H. Eskandar, A. Sadollah, A. Bahreininejad, M. Hamdi, Water cycle algorithm-a novel metaheuristic optimization method for solving constrained engineering optimization problems, Comput. Struct., 110 (2012), 151-166.
[42]
R. A. Krohling, L. dos Santos Coelho, Coevolutionary particle swarm optimization using Gaussian distribution for solving constrained optimization problems, IEEE Trans. Syst., Man, Cybern., Part B (Cybernetics), 36 (2006), 1407-1416.
[43]
K. M. Ragsdell, D. T. Phillips, Optimal design of a class of welded structures using geometric programming, J. Eng. Ind., 98 (1976), 1021-1025. doi: 10.1115/1.3438995
[44]
P. Savsani, V. Savsani, Passing vehicle search (PVS): A novel metaheuristic algorithm, Appl. Math. Model., 40 (2016), 3951-3978. doi: 10.1016/j.apm.2015.10.040
[45]
M. Dorigo, T. Stützle, Ant colony optimization: Overview and recent advances, 2Eds., Cham, Switzerland: Springer International Publishing, 2019.
[46]
S. Mirjalili, A. Lewis, The whale optimization algorithm, Adv. Eng. Software, 95 (2016), 51-67.
[47]
S. Mirjalili, SCA: A sine cosine algorithm for solving optimization problems, Knowl.-Based Syst., 96 (2016), 120-133. doi: 10.1016/j.knosys.2015.12.022
[48]
H. Beyer, H. Schwefel, Evolution strategies-A comprehensive introduction, Nat. Comput., 1 (2002), 3-52. doi: 10.1023/A:1015059928466
[49]
R. Moghdani, K. Salimifard, Volleyball premier league algorithm, Appl. Soft Comput., 64 (2018), 161-185. doi: 10.1016/j.asoc.2017.11.043
[50]
H. Liu, Z. Cai, Y. Wang, Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization, Appl. Soft Comput., 10 (2010), 629-640. doi: 10.1016/j.asoc.2009.08.031
[51]
K. Thirugnanasambandam, S. Prakash, V. Subramanian, S. Pothula, V. Thirumal, Reinforced cuckoo search algorithm-based multimodal optimization, Appl. Intell., 49 (2019), 2059-2083. doi: 10.1007/s10489-018-1355-3
[52]
K. Deb, GeneAS: A robust optimal design technique for mechanical component design, 2Eds., Berlin, Heidelberg: Springer Berlin Heidelberg, 1997.
[53]
M. Mahdavi, M. Fesanghary, E. Damangir, An improved harmony search algorithm for solving optimization problems, Appl. Math. Comput., 188 (2007), 1567-1579.
[54]
E. Mezura-Montes, C. A. C. Coello, R. Landa-Becerra, Engineering optimization using simple evolutionary algorithm, Sacramento, CA, USA: Proceedings. 15th IEEE International Conference on Tools with Artificial Intelligence, 2003.
[55]
T. Ray, P. Saini, Engineering design optimization using swarm with an intelligent information sharing among individuals, Eng. Optim., 33 (2001), 735-748. doi: 10.1080/03052150108940941
[56]
A. D. Belegundu, J. S. Arora, A study of mathematical programming methods for structural optimization. Part I: Theory, Int. J. Numer. Methods Eng., 21 (1985), 1583-1599. doi: 10.1002/nme.1620210904
[57]
T. Ray, K. M. Liew, Society and civilization: An optimization algorithm based on the simulation of social behavior, IEEE Trans. Evol. Comput., 7 (2003), 386-396. doi: 10.1109/TEVC.2003.814902
[58]
Q. Zhang, H. Chen, A. A. Heidari, X. Zhao, Y. Xu, P. Wang, et al., Chaos-induced and mutationdriven schemes boosting salp chains-inspired optimizers, IEEE Access, 7 (2019), 31243-31261.
[59]
A. Kaveh, M. Khayatazad, A new meta-heuristic method: Ray optimization, Comput. Struct., 112 (2012), 283-294.
[60]
F. Huang, L. Wang, Q. He, An effective co-evolutionary differential evolution for constrained optimization, Appl. Math. Comput., 186 (2007), 340-356.
[61]
E. M. Montes, C. A. C. Coello, An empirical study about the usefulness of evolution strategies to solve constrained optimization problems, Int. J. Gen. Syst., 37 (2008), 443-473. doi: 10.1080/03081070701303470
[62]
S. Mirjalili, Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm, Knowl.-Based Syst., 89 (2015), 228-249. doi: 10.1016/j.knosys.2015.07.006
[63]
J. S. Arora, Introduction to optimum design, New York: McGraw-Hill Book Co., 1989.
[64]
A. A. Heidari, R. A. Abbaspour, A. R. Jordehi, An efficient chaotic water cycle algorithm for optimization tasks, Neural Comput. Appl., 28 (2017), 57-85. doi: 10.1007/s00521-015-2037-2
[65]
A. H. Gandomi, X. S. Yang, A. H. Alavi, S. Talatahari, Bat algorithm for constrained optimization tasks, Neural Comput. Appl., 22 (2013), 1239-1255. doi: 10.1007/s00521-012-1028-9
[66]
H. Rosenbrock, An automatic method for finding the greatest or least value of a function, Comput. J., 3 (1960), 175-184. doi: 10.1093/comjnl/3.3.175
[67]
L. dos Santos Coelho, Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems, Expert Syst. Appl., 37 (2010), 1676-1683.
[68]
E. Mezura-Montes, C. A. C. Coello, A simple multimembered evolution strategy to solve constrained optimization problems, IEEE Trans. Evol. Comput., 9 (2005), 1-17. doi: 10.1109/TEVC.2004.836819
[69]
F. Huang, L. Wang, Q. He, An effective co-evolutionary differential evolution for constrained optimization, Appl. Math. Comput., 186 (2007), 340-356.
[70]
M. Montemurro, A. Vincenti, P. Vannucci, The automatic dynamic penalisation method (ADP) for handling constraints with genetic algorithms, Comput. Methods Appl. Mech. Eng., 256 (2013), 70-87. doi: 10.1016/j.cma.2012.12.009
[71]
K. Deb, Optimal design of a welded beam via genetic algorithms, AIAA J., 29 (1991), 2013-2015. doi: 10.2514/3.10834
[72]
A. Kaveh, S. Talatahari, A novel heuristic optimization method: Charged system search, Acta Mech., 213 (2010), 267-289. doi: 10.1007/s00707-009-0270-4
[73]
B. K. Kannan, S. N. Kramer, An augmented lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design, J. Mech. Des., 116 (1994), 405-411. doi: 10.1115/1.2919393
[74]
C. Ozturk, E. Hancer, D. Karaboga, Dynamic clustering with improved binary artificial bee colony algorithm, Appl. Soft Comput., 28 (2015), 69-80. doi: 10.1016/j.asoc.2014.11.040
[75]
X. Kong, L. Gao, H. Ouyang, S. Li, A simplified binary harmony search algorithm for large scale 0-1 knapsack problems, Expert Syst. Appl., 42 (2015), 5337-5355. doi: 10.1016/j.eswa.2015.02.015
[76]
M. Abdel-Basset, D. El-Shahat, H. Faris, S. Mirjalili, A binary multi-verse optimizer for 0-1 multidimensional knapsack problems with application in interactive multimedia systems, Comput. Ind. Eng., 132 (2019), 187-206. doi: 10.1016/j.cie.2019.04.025
[77]
P. Brucker, R. Qu, E. K. Burke, Personnel scheduling: Models and complexity, Eur. J. Oper. Res., 210 (2011), 467-473. doi: 10.1016/j.ejor.2010.11.017
[78]
M. M. Solomon, Algorithms for the vehicle routing and scheduling problems with time window constraints, Oper. Res., 35 (1987), 254-265. doi: 10.1287/opre.35.2.254
[79]
P. V. Laarhoven, E. Aarts, J. K. Lenstra, Job shop scheduling by simulated annealing, Oper. Res., 40 (1992), 113-125. doi: 10.1287/opre.40.1.113
[80]
J. Zhang, J. Zhang, T. Lok, M. R. Lyu, A hybrid particle swarm optimization-back-propagation algorithm for feedforward neural network training, Appl. Math. Comput., 185 (2007), 1026-1037.
[81]
K. Socha, C. Blum, An ant colony optimization algorithm for continuous optimization: Application to feed-forward neural network training, Neural Comput. Appl., 16 (2007), 235-247. doi: 10.1007/s00521-007-0084-z
[82]
K. Y. Leong, A. Sitiol, K. S. M. Anbananthen, Enhance neural networks training using GA with chaos theory, 3Eds., Berlin, Heidelberg: Springer Berlin Heidelberg, 2009.
[83]
X. Wang, J. Yang, X. Teng, W. Xia, R. Jensen, Feature selection based on rough sets and particle swarm optimization, Pattern Recognit. Lett., 28 (2007), 459-471. doi: 10.1016/j.patrec.2006.09.003
[84]
M. Ghaemi, M. R. Feizi-Derakhshi, Feature selection using forest optimization algorithm, Pattern Recognit., 60 (2016), 121-129.
[85]
P. R. Varma, V. V. Kumari, S. S. Kumar, Feature selection using relative fuzzy entropy and ant colony optimization applied to real-time intrusion detection system, Procedia Comput. Sci., 85 (2016), 503-510. doi: 10.1016/j.procs.2016.05.203
This article has been cited by:
1. Francis Akutsah, Akindele Adebayo Mebawondu, Austine Efut Ofem, Reny George, Hossam A. Nabwey, Ojen Kumar Narain, Modified mildly inertial subgradient extragradient method for solving pseudomonotone equilibrium problems and nonexpansive fixed point problems, 2024, 9, 2473-6988, 17276, 10.3934/math.2024839
2. Aisha Aminu Adam, Abubakar Adamu, Abdulkarim Hassan Ibrahim, Dilber Uzun Ozsahin, Inertial Halpern-type methods for variational inequality with application to medical image recovery, 2024, 139, 10075704, 108315, 10.1016/j.cnsns.2024.108315
3. Jacob Ashiwere Abuchu, Austine Efut Ofem, Godwin Chidi Ugwunnadi, Ojen Kumar Narain, An inertial-type extrapolation algorithm for solving the multiple-sets split pseudomonotone variational inequality problem in real Hilbert spaces, 2024, 0, 2155-3289, 0, 10.3934/naco.2024056
4. Habib ur Rehman, Kanokwan Sitthithakerngkiet, Thidaporn Seangwattana, A Subgradient Extragradient Framework Incorporating a Relaxation and Dual Inertial Technique for Variational Inequalities, 2024, 13, 2227-7390, 133, 10.3390/math13010133
Chibueze Christian Okeke, Abubakar Adamu, Thembinkosi Eezy Kunene, Dilber Uzun Ozsahin, Two-step inertial projection and contraction method for variational inequality with quasi-monotonicity, 2025, 74, 0009-725X, 10.1007/s12215-025-01211-x
7. Duong Viet Thong, Vu Tien Dung, Hoang Thi Thanh Tam, A Self Adaptive Projected Gradient Method for Solving Non-Monotone Variational Inequalities, 2025, 206, 0022-3239, 10.1007/s10957-025-02674-9
8. Austine Efut Ofem, Akindele Adebayo Mebawondu, Godwin Chidi Ugwunnadi, Prasit Cholamjiak, Ojen Kumar Narain, A novel method for solving split variational inequality and fixed point problems, 2025, 0003-6811, 1, 10.1080/00036811.2025.2505615