1. Introduction
Nature-inspired algorithms are well-known methods for finding near-optimal solutions to optimization problems and have been applied successfully in many domains [1,2]. These algorithms are inspired by natural behaviors and events among living creatures; examples include the social behavior of the mountain gazelle and the behavior of birds, monkeys, and other animals in the wild [3]. They have shown promise in solving real-world engineering problems and problems in many other fields. Despite the plethora of algorithms designed by researchers over the years, new algorithms remain of interest to optimization enthusiasts because of two common weaknesses: entrapment in local optima and slow convergence. The Coot Optimization Algorithm (COA) has shown promise in addressing these difficulties of classical nature-inspired algorithms. COA is inspired by the social lifestyle exhibited by coots and is an effective and straightforward metaheuristic for finding near-optimal solutions [4]. Despite the great success of the COA, it suffers from slow convergence when solving very complex problems [5]. This weakness causes the COA to require an extremely high number of iterations to produce substantially good results, especially on large-scale complex optimization problems [5,6].
The slow convergence is largely attributable to a poor balance between the exploration and exploitation stages [6,7]. While exploitation guarantees the refinement of solutions in attractive parts of the search space, exploration is essential for the global search and for preventing premature convergence to local optima [8]. An imbalance between these two stages can seriously impair an algorithm's effectiveness, resulting in wasted search effort and higher processing costs that ultimately yield less-than-ideal outcomes [9].
In the literature, scholars have employed a range of corrective measures to enhance the performance of algorithms with comparable shortcomings [10,11]. A typical example is the truncation parameter selection technique, which was adopted to improve the general performance of the Mountain Gazelle Optimizer (MGO) on standard benchmark test functions [12]. To enhance the overall performance of the Gorilla Troop Optimizer (GTO), A. Bright et al. [13] integrated a step-adaptive simulation concept into the algorithm.
In the case of the COA, Aslan et al. [14] proposed a modification that integrates a randomized mutation technique into the original COA to enhance its global search ability; however, excessive randomization within algorithms causes significant instabilities. Additionally, R. R. Mostafa et al. [7] modified the COA by introducing opposition-based learning and orthogonal learning approaches to improve the algorithm's performance. Moreover, the authors in [15] proposed a control randomization and transition factor-based strategy to enhance the COA. That modification was geared toward solving the battery parameter estimation problem, so its performance across a variety of optimization fields was not established.
Similarly, this work seeks to modify the COA to improve its general performance through an adaptive sigmoid increasing inertia weight in the "Leader Movement" phase [16]. This is a crucial stage of the COA because it determines how the coots are led by their leader on the surface of the water, which influences the dynamics of the search as a whole [6]. The integrated adaptive weight is intended to dynamically balance the exploration and exploitation mechanisms throughout the search process.
To achieve this aim, exploration is made dominant in the early stages of the optimization search process, and the exploitation mechanism is gradually strengthened in the later stages. The adaptive sigmoid rising inertia weight is therefore designed to begin with a small value and gradually increase [16].
The rest of the paper is organized as follows: the original COA and the proposed modification are presented in Section 2; the simulation setup is described in Section 3; the results are discussed in Section 4; and lastly, Section 5 concludes the paper with a thorough synopsis of the research and suggestions for potential future studies.
2. Adaptive sigmoid increasing inertia weight-based modification of COA
This section presents the methodological approach followed in this research, which consists of the original COA, the proposed modification, and the test implementation.
2.1. The original coot optimization algorithm
The COA mimics the behavior of American coots as they move across lakes and other water surfaces. American coots have four distinct movement strategies: random movement, chain formation, moving toward group leader positions, or leading the group [6]. The initial generation is created randomly using Eqs (1) and (2):

CootPos(i) = rand(1, d) × (uob − lob) + lob    (1)

uob = [uob_1, ..., uob_d],  lob = [lob_1, ..., lob_d]    (2)

where CootPos(i) denotes the position of the ith coot, d represents the dimension number, uob and lob signify the upper and lower boundaries, respectively, and rand denotes a random vector within the range [0, 1].
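For concreteness, the initialization of Eqs (1) and (2) can be sketched in Python as follows (a minimal sketch with illustrative population size, dimension, and bounds; the paper's own implementation is in MATLAB, and `ub`/`lb` stand for the paper's uob/lob):

```python
import numpy as np

n_coots, d = 30, 10           # population size and problem dimension (illustrative values)
lb = np.full(d, -100.0)       # lower boundary vector, Eq (2)
ub = np.full(d, 100.0)        # upper boundary vector, Eq (2)

# Eq (1): each coot starts at a uniformly random position inside the bounds
coot_pos = np.random.rand(n_coots, d) * (ub - lb) + lb
```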
Random movement:
If coots exhibit random movement, they migrate towards a random position denoted as Q, which is determined through Eq (3):

Q = rand(1, d) × (uob − lob) + lob    (3)
To prevent getting trapped in local optimal areas, a coot that stagnates within a local region updates its position as determined through Eq (4):

CootPos(i) = CootPos(i) + A × R2 × (Q − CootPos(i))    (4)
where R2 ∈ [0, 1] and A can be calculated using Eq (5):

A = 1 − L × (1 / Iter)    (5)
where L and Iter represent the current iteration and the maximum number of iterations, respectively.
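A hedged sketch of the random movement of Eqs (3)–(5) in Python, reusing the names introduced above (the iteration counters and bounds are illustrative assumptions):

```python
import numpy as np

d, max_iter, L = 10, 500, 50                  # dimension, max iterations, current iteration
lb, ub = np.full(d, -100.0), np.full(d, 100.0)
coot = np.random.rand(d) * (ub - lb) + lb     # one coot's current position

Q = np.random.rand(d) * (ub - lb) + lb        # Eq (3): random target position
A = 1 - L * (1 / max_iter)                    # Eq (5): shrinks linearly as iterations pass
R2 = np.random.rand()                         # random scalar in [0, 1]
coot = coot + A * R2 * (Q - coot)             # Eq (4): move toward Q to escape local optima
```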
Chain movement:
The movement of the chain phase is modeled mathematically by Eq (6):

CootPos(i) = 0.5 × (CootPos(i−1) + CootPos(i))    (6)

where CootPos(i−1) is the position of the preceding coot in the chain.
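The chain movement of Eq (6) simply moves a coot to the midpoint between itself and the coot ahead of it; a one-line sketch with illustrative positions:

```python
import numpy as np

prev_coot = np.array([1.0, 2.0, 3.0])   # position of coot i-1 (illustrative)
coot = np.array([3.0, 0.0, 1.0])        # position of coot i (illustrative)

coot = 0.5 * (prev_coot + coot)         # Eq (6): midpoint of the two coots
```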
Moving towards group leader:
When adjusting its position according to a leader's position, a coot first selects its leader through Eq (7):

K = 1 + (i MOD NL)    (7)

Here, i denotes the index of the current coot, NL represents the number of leaders, and K denotes the selected leader. The position update then utilizes Eq (8):

CootPos(i) = LeaderPos(K) + 2 × R1 × cos(2πR) × (LeaderPos(K) − CootPos(i))    (8)

where LeaderPos(K) is the position of leader K, R1 ∈ [0, 1], and R ∈ [−1, 1].
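A sketch of the leader selection and follow-the-leader update of Eqs (7) and (8) in Python (index values and positions are illustrative; K is kept 1-based as in the equation):

```python
import numpy as np

i, NL, d = 7, 5, 10                       # coot index, number of leaders, dimension
leader_pos = np.random.rand(NL, d)        # leader positions (illustrative)
coot = np.random.rand(d)                  # current coot position (illustrative)

K = 1 + (i % NL)                          # Eq (7): leader assigned to coot i (1-based)
R1 = np.random.rand()                     # random scalar in [0, 1]
R = np.random.uniform(-1, 1)              # random scalar in [-1, 1]
# Eq (8): circle around the selected leader's position
coot = leader_pos[K - 1] + 2 * R1 * np.cos(2 * np.pi * R) * (leader_pos[K - 1] - coot)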
Leading the group by the leader (Leader movement):
Finally, the leaders update their positions using Eq (9):

LeaderPos(i) = B × R3 × cos(2πR) × (gBest − LeaderPos(i)) + gBest,  if R4 < 0.5
LeaderPos(i) = B × R3 × cos(2πR) × (gBest − LeaderPos(i)) − gBest,  if R4 ≥ 0.5    (9)

where gBest is the best position found so far, R ∈ [−1, 1], both R3 and R4 ∈ [0, 1], and B can be calculated using Eq (10):

B = 2 − L × (1 / Iter)    (10)
where L represents the current iteration, and Iter represents the maximum number of iterations.
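Putting Eqs (9) and (10) together, the leader update can be sketched as follows (a minimal sketch with illustrative values; gBest is placed at the origin purely for demonstration):

```python
import numpy as np

d, max_iter, L = 10, 500, 50
leader = np.random.rand(d)                # current leader position (illustrative)
g_best = np.zeros(d)                      # best position found so far (illustrative)

B = 2 - L * (1 / max_iter)                # Eq (10)
R = np.random.uniform(-1, 1)
R3, R4 = np.random.rand(), np.random.rand()

# Eq (9): search around gBest; R4 decides whether gBest is added or subtracted
move = B * R3 * np.cos(2 * np.pi * R) * (g_best - leader)
leader = move + g_best if R4 < 0.5 else move - g_best
```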
2.2. Proposed modification
The original COA has great potential to be adopted for applications in various optimization fields [7]. However, the COA converges slowly, which means it requires many iterations to produce a good optimization result [6]. This drawback is caused by a poor balance between exploration and exploitation, which prevents an efficient search.
To remedy this weakness, an adaptive sigmoid increasing inertia weight [16] is incorporated in the Leader Movement phase. This phase of the original COA is expressed in Eq (9), which indicates how the leader coots lead the coots' group as it moves on the water surface. The proposed weight is incorporated as shown in Eq (11):

LeaderPos(i) = B × R3 × cos(2πR) × (gBest − LeaderPos(i)) + gBest,  if R4 < 0.5
LeaderPos(i) = ω(i) × B × R3 × cos(2πR) × (gBest − LeaderPos(i)) − gBest,  if R4 ≥ 0.5    (11)

where the value of the weight, ω, is calculated using the sigmoid increasing inertia weight expressed in Eq (12):

ω(i) = ωmin + (ωmax − ωmin) / (1 + e^(−a(i/MaxIter − b)))    (12)
where a and b are parameters for adjustment, which are carefully chosen through numerical simulations. The weights ωmin and ωmax represent the minimum and maximum weight values, respectively, and i and MaxIter represent the current iteration and the maximum number of iterations, respectively.
Finally, ω(i) represents the adaptive weight value at the ith iteration. The proposed weight is integrated in Eq (11) only when R4 ≥ 0.5, so that the full benefit of the inertia weight technique is gained while its drawback of possible premature convergence is avoided. Algorithms that employ inertia weight techniques may converge too quickly to suboptimal solutions, especially when the weight parameter is not tuned properly; when properly implemented, however, they provide effective and fast convergence to global solutions. To tap these good qualities, the weighting factor is incorporated only when R4 ≥ 0.5. In this contribution, the weight has been designed to ensure the versatility of the algorithm during each iteration while avoiding these weaknesses. Depending on the value of R4, Eq (11) is executed either with the proposed weight, where its good qualities are utilized, or without the weight, where the algorithm can search the space freely without focusing solely on the temporarily best members of the population.
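To make the modification concrete, the following sketch combines Eqs (11) and (12). The sigmoid parameterization and the values of a, b, ωmin, and ωmax are illustrative assumptions, since the tuned values come from the authors' numerical simulations:

```python
import numpy as np

def sigmoid_weight(i, max_iter, w_min=0.4, w_max=0.9, a=10.0, b=0.5):
    """Eq (12): weight rises smoothly from ~w_min to ~w_max (parameter values assumed)."""
    return w_min + (w_max - w_min) / (1 + np.exp(-a * (i / max_iter - b)))

d, max_iter, it = 10, 500, 50
leader = np.random.rand(d)                # current leader position (illustrative)
g_best = np.zeros(d)                      # best position found so far (illustrative)

B = 2 - it * (1 / max_iter)               # Eq (10)
R = np.random.uniform(-1, 1)
R3, R4 = np.random.rand(), np.random.rand()
move = B * R3 * np.cos(2 * np.pi * R) * (g_best - leader)

# Eq (11): the adaptive weight scales the movement only in the R4 >= 0.5 branch
if R4 < 0.5:
    leader = move + g_best
else:
    leader = sigmoid_weight(it, max_iter) * move - g_best
```

With these defaults, ω stays near ωmin early on (favoring free exploration) and approaches ωmax in the later iterations (favoring exploitation around gBest), matching the intended behavior described above.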
The implementation (pseudocode) of the modified COA (mCOA) is presented in Algorithm 1. The pseudocode serves as a guide for the implementation of the proposed mCOA.
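Since Algorithm 1 is only summarized here, the following condensed sketch shows how the phases might fit together in a full mCOA loop. The fraction of leaders, the per-coot movement choice, and the boundary handling are assumptions based on the COA description above, not the authors' exact pseudocode:

```python
import numpy as np

def mcoa(obj, d, lb, ub, n_coots=30, n_leaders=5, max_iter=500,
         w_min=0.4, w_max=0.9, a=10.0, b=0.5):
    """Condensed mCOA sketch (illustrative; not the authors' exact Algorithm 1)."""
    pos = np.random.rand(n_coots, d) * (ub - lb) + lb               # Eq (1)
    fit = np.apply_along_axis(obj, 1, pos)
    leaders = np.argsort(fit)[:n_leaders]                           # best coots lead
    g_best = pos[np.argmin(fit)].copy()

    for it in range(1, max_iter + 1):
        A = 1 - it / max_iter                                       # Eq (5)
        B = 2 - it / max_iter                                       # Eq (10)
        w = w_min + (w_max - w_min) / (1 + np.exp(-a * (it / max_iter - b)))  # Eq (12)
        for i in range(n_coots):
            if i in leaders:                                        # leader movement
                R = np.random.uniform(-1, 1)
                R3, R4 = np.random.rand(), np.random.rand()
                move = B * R3 * np.cos(2 * np.pi * R) * (g_best - pos[i])
                pos[i] = move + g_best if R4 < 0.5 else w * move - g_best    # Eq (11)
            else:
                r = np.random.rand()                                # assumed equal split
                if r < 0.33:                                        # random movement
                    Q = np.random.rand(d) * (ub - lb) + lb          # Eq (3)
                    pos[i] = pos[i] + A * np.random.rand() * (Q - pos[i])    # Eq (4)
                elif r < 0.66:                                      # chain movement
                    pos[i] = 0.5 * (pos[i - 1] + pos[i])            # Eq (6)
                else:                                               # follow a leader
                    K = leaders[i % n_leaders]                      # Eq (7)
                    R1, R = np.random.rand(), np.random.uniform(-1, 1)
                    pos[i] = pos[K] + 2 * R1 * np.cos(2 * np.pi * R) * (pos[K] - pos[i])
            pos[i] = np.clip(pos[i], lb, ub)                        # keep inside bounds
        fit = np.apply_along_axis(obj, 1, pos)
        leaders = np.argsort(fit)[:n_leaders]
        if fit.min() < obj(g_best):
            g_best = pos[np.argmin(fit)].copy()
    return g_best, obj(g_best)
```

For example, `mcoa(lambda x: np.sum(x**2), d=10, lb=np.full(10, -100.0), ub=np.full(10, 100.0))` would minimize a sphere-type function of the kind used in the benchmark suite below.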
3. Simulation test setup on benchmark functions
The proposed mCOA was tested on thirteen commonly used standard benchmark test functions to establish its performance [17]. These are the same functions used in the literature of the original COA, the details of which can be found in [6]. The algorithm was coded in MATLAB (R2018a) and run on an HP Pavilion laptop with an AMD A8-6410 APU, AMD Radeon R5 graphics, a 2.00 GHz clock speed, 4.00 GB of RAM, and a 64-bit operating system. Table 1 provides detailed information on the benchmark functions.
The parameter settings used in the simulation experiment are presented in Table 2. These include the parameter settings used for the original COA in the literature, to facilitate a fair comparison.
On each test function, the simulation was executed thirty (30) times, and relevant statistical indicators were calculated: the best value (Min), the worst value (Max), the mean value (Avg), and the standard deviation (Std). These depict the best possible performance, the worst performance, the average performance, and the expected deviation when the algorithm is applied to a real-world optimization problem. Since no single algorithm is optimal for all problems, the relatively better-performing algorithms were identified based on these statistical indicators [18].
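The four indicators can be computed over the 30 independent runs as in the following sketch, where `run_once` is a hypothetical helper that executes a single optimization run and returns its final objective value:

```python
import numpy as np

def summarize(run_once, n_runs=30):
    """Best (Min), worst (Max), mean (Avg), and standard deviation (Std) over repeated runs."""
    results = np.array([run_once() for _ in range(n_runs)])
    return results.min(), results.max(), results.mean(), results.std()

# Illustrative usage with a dummy run function
best, worst, avg, std = summarize(lambda: np.random.rand())
```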
To establish the efficacy of the proposed mCOA, the simulation results were compared to those of the original COA and other state-of-the-art metaheuristic algorithms in [6], namely the genetic algorithm (GA) and the particle swarm optimization (PSO) algorithm. The comparison of simulation results on the thirteen (13) test functions is presented in Table 3, where the bold numbers represent the best performances.
4. Results
The results presented in Table 3 show the performance of the proposed mCOA compared to the GA, PSO, and COA. The mCOA produced improvements over these algorithms on ten (10) of the thirteen (13) benchmark test functions, all within F1–F12, where it yielded the smallest Min, Max, Avg, and Std values; for minimization problems, this indicates a better performance. This represents an average success rate of 76.92% on the test functions. For F6, the PSO outperformed the other algorithms, while the GA outperformed the other algorithms on F13.
The convergence behavior of the two algorithms is presented in Figures 1–13, which show the convergence process from the first iteration to the last. This provides a clearer comparison of the two algorithms and effectively substantiates the superior performance of the mCOA over the COA.
In F1–F4, F7, F9, and F10, the mCOA has better convergence characteristics than the COA by a wide margin. It performs better from the initial iteration through to the final iteration, thus producing a better final optimization value on these benchmark functions. In the cases of F6, F8, and F11–F13, the mCOA produced slightly better convergence, on average, than the COA. Both algorithms converged quickly on F5.
5. Conclusions
A modification of the COA was developed to improve its global performance by incorporating an adaptive sigmoid increasing inertia weight technique in the leader movement phase. The mCOA was tested on the same 13 standard benchmark test functions used for the original COA. The simulation outcome, obtained using MATLAB, was compared to that of the COA, the PSO algorithm, and the GA. The mCOA outperformed the other algorithms on 10 of the 13 test functions while maintaining a competitive performance on the remaining 3. An enhanced version of the COA was therefore developed with better global performance. Based on the strong simulation performance of the mCOA, it is recommended without reservation for applications to real-life optimization problems, especially in the field of engineering.
All tests were implemented in MATLAB; the code is publicly available at https://github.com/etwumasi/code_appendix.
Conflict of interest
All authors declare no conflict of interest regarding the publication of this paper.