Research article Special Issues

Tuning extreme learning machine by an improved electromagnetism-like mechanism algorithm for classification problem

  • Extreme learning machine (ELM) is a learning algorithm for single hidden-layer feedforward neural networks (SLFNs). Compared with traditional gradient-based neural network learning algorithms, ELM has the advantages of fast learning speed, good generalization performance, and easy implementation. However, because the input weights and hidden biases are determined randomly, ELM requires more hidden neurons and cannot guarantee an optimal network structure. Here, we report a new learning algorithm that overcomes these disadvantages by tuning the input weights and hidden biases with an improved electromagnetism-like mechanism (EM) algorithm, called DAEM, and by using the Moore-Penrose (MP) generalized inverse to analytically determine the output weights of the ELM. In DAEM, three different solution-updating strategies inspired by the dragonfly algorithm (DA) are implemented. Experimental results indicate that the proposed DAEM-ELM algorithm has better generalization performance than traditional ELM and other evolutionary ELMs.

    Citation: Mengya Zhang, Qing Wu, Zezhou Xu. Tuning extreme learning machine by an improved electromagnetism-like mechanism algorithm for classification problem[J]. Mathematical Biosciences and Engineering, 2019, 16(5): 4692-4707. doi: 10.3934/mbe.2019235



    Metaheuristic optimization algorithms are population-based global search methods that are well suited to solving complex problems [1,2]. Rao et al. proposed a teaching-learning-based optimization (TLBO) algorithm to solve large-scale optimization problems; simulation results on standard test functions showed that the TLBO algorithm effectively solves complex optimization problems [3]. To solve complex constrained optimization problems, Sayed et al. proposed a decomposition-based evolutionary algorithm [4]. Mohapatra et al. proposed a competitive swarm optimizer [5]. To overcome the tendency of particle swarm optimization (PSO) to fall easily into local optima, an improved quantum PSO algorithm with a cultural gene algorithm and a memory mechanism was proposed to solve continuous nonlinear problems [6,7]. Ali et al. presented a multi-population differential evolution global optimization algorithm [1]. The ant colony optimization variant proposed by Ismkhan has been applied to complex problems [8], and the symbiotic organism search algorithm has been used for fractional fuzzy controllers [9], among other applications.

    The proportional-integral-derivative (PID) controller is widely used in industrial control systems, accounting for more than 90% of practical control loops [10]. The PID parameter tuning problem, first posed by Ziegler and Nichols, has attracted extensive attention. However, traditional PID tuning methods have several problems: the control performance indices are not ideal and, typically, the methods produce a large overshoot and a long settling time. Intelligent optimization algorithms achieve better control performance in PID parameter tuning than traditional tuning methods and can avoid some of their shortcomings [11]. Jiang et al. proposed tuning PID parameters with a genetic algorithm (GA) to enhance the search and convergence speed, but the approach suffered from premature convergence and parameter dependence [12]. Yu et al. proposed optimizing the PID controller parameters with a seeker search algorithm, which improved the control precision, response speed, and robustness of the system and found optimal PID parameters for the control system; however, the optimization formulation is complex and requires many parameters to be set [13]. P. B. de Moura Oliveira et al. designed Posicast PID control systems using a gravitational search algorithm (GSA) [14]. Guo-qiang Zeng et al. designed multivariable PID controllers using real-coded population-based extremal optimization [15]. A. Belkadi et al. proposed a PSO-based robust adaptive PID controller for exoskeletons [16]. M. Gheisarnejad proposed an effective hybrid harmony search (HS) and cuckoo optimization algorithm-based fuzzy PID controller for load frequency control [17]. Amal Moharam et al. designed an optimal PID controller using hybrid differential evolution and PSO with an aging leader and challengers [18].

    The spotted hyena optimizer (SHO) [19] is a novel intelligence algorithm proposed by Dhiman and Kumar in 2017. It was inspired by the social and collaborative behavior of spotted hyenas in nature. Spotted hyenas typically perform four processes: Search, encirclement, hunt, and attack prey. The SHO is simple, easy to implement, and has few parameters to adjust. Since the SHO was proposed, various improved versions of the algorithm have appeared. For example, N. Panda et al. used an improved SHO (ISHO) with space transformational search to train a pi-sigma higher-order neural network [20]. H. Moayedi et al. used the SHO and ant lion optimization to predict the shear strength of soil [21]. Q. Luo et al. used the SHO with lateral inhibition for image matching [22]. G. Dhiman et al. proposed a multi-objective version of the algorithm for engineering problems [23] and used the SHO to solve the nonlinear economic load power dispatch problem [24]. Xu Y. et al. proposed an enhanced moth-flame optimizer with a mutation strategy for global optimization [25].

    In function optimization and engineering optimization, it has been proved that the performance of the SHO is superior to that of the grey wolf optimizer (GWO), binary GWO [26], PSO, moth-flame optimization (MFO), multi-verse optimizer, sine cosine algorithm (SCA), GSA, GA, HS, Harris hawks optimization [27], bacterial foraging optimization [28], and flower pollination algorithm [29] in terms of precision and the convergence speed [22]. In this paper, an ISHO algorithm is applied to solve PID parameter problems in an automatic voltage regulator (AVR).

    The remainder of the paper is organized as follows: The basic SHO algorithm is presented in Section 2. In Section 3, the ISHO is introduced. In Section 4, the ISHO is proposed to optimize PID parameters and compared with well-known algorithms. Finally, conclusions are provided in Section 5.

    The social relationships and habits of animals are the source of inspiration for this work. Such social behavior is also present in the spotted hyena (Crocuta crocuta). According to Ilany et al. [30], spotted hyenas typically live in groups, with as many as 100 members, and they have mutual trust and interdependence. Communication between spotted hyenas typically relies on postures and special signals, and they track prey using sight, hearing, and smell. There are four main steps for a spotted hyena to attack prey: Search for prey, encircle prey, hunt prey, and attack prey. To generate a mathematical model for encircling, the equations are as follows [31]:

    $D = |B \cdot X_p(t) - X(t)|$ (2.1)
    $X(t+1) = X_p(t) - E \cdot D$, (2.2)

    where t is the current iteration, $B$ and $E$ are coefficient vectors, $X_p$ is the position vector of the prey, $X$ is the position vector of a spotted hyena, $D$ is the distance between the prey and the spotted hyena, and $|\cdot|$ denotes the absolute value.

    The coefficient vectors B and E are calculated as follows:

    $B = 2 \cdot rd_1$ (2.3)
    $E = 2h \cdot rd_2 - h$ (2.4)
    $h = 5 - t \cdot (5/T)$, (2.5)

    where $t = 1, 2, 3, \ldots, T$; to balance exploration and exploitation, $h$ decreases linearly from 5 to 0 over the course of the iterations; and $rd_1$ and $rd_2$ are random vectors in [0, 1].
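
    For concreteness, a minimal NumPy sketch of the encircling step defined by Eqs (2.1)–(2.5) is given below; the function name and the use of a NumPy random generator are illustrative choices, not part of the original formulation.

    import numpy as np

    def encircle(x, x_prey, t, T, rng):
        """One encircling move (Eqs (2.1)-(2.5)); x and x_prey are d-dimensional arrays."""
        d = x.shape[0]
        h = 5.0 - t * (5.0 / T)          # Eq (2.5): h decreases linearly from 5 to 0
        B = 2.0 * rng.random(d)          # Eq (2.3): random weights in [0, 2]
        E = 2.0 * h * rng.random(d) - h  # Eq (2.4): values in [-h, h]
        D = np.abs(B * x_prey - x)       # Eq (2.1): distance to the prey
        return x_prey - E * D            # Eq (2.2): updated position

    # example call on a hypothetical 3-dimensional problem
    rng = np.random.default_rng(0)
    x_new = encircle(np.zeros(3), np.ones(3), t=1, T=100, rng=rng)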

    To define the spotted hyena behavior mathematically, the best search agent represents the location of the prey. The other search agents move toward the best search agent and save the best solutions obtained thus far to update their positions. The mathematical model can be formulated as follows:

    $D_h = |B \cdot X_h - X_k|$ (2.6)
    $X_k = X_h - E \cdot D_h$ (2.7)
    $C_h = X_k + X_{k+1} + \cdots + X_{k+N}$ (2.8)
    $N = \mathrm{count}_{nos}(X_h, X_h + 1, X_h + 2, \ldots, (X_h + M))$, (2.9)

    where $X_h$ is the position of the best spotted hyena, $X_k$ is the position of another spotted hyena, N is the number of spotted hyenas, $M$ is a random vector in [0.5, 1], nos denotes the number of solutions, $\mathrm{count}_{nos}$ counts all candidate solutions, and $C_h$ is a cluster of the N optimal solutions.

    For spotted hyenas in the attack-prey stage, the value of h must be continuously reduced to determine the optimal solution, where h is the step size that the spotted hyena takes to attack prey; when searching for prey, in contrast, the spotted hyena keeps increasing its steps. The formulation for attacking prey is

    $X(t+1) = C_h / N$, (2.10)

    where X(t+1) is the position of the current solution and t is the number of iterations. The SHO allows its agents to update their positions in the direction of the prey.
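
    The hunting and attacking steps of Eqs (2.6)–(2.10) can be sketched in the same style; the helper names below are illustrative, and the cluster is simply passed in as an (N, d) array of candidate positions.

    import numpy as np

    def hunt(x_best, x_k, h, rng):
        """Move one agent toward the best spotted hyena (Eqs (2.6)-(2.7))."""
        d = x_best.shape[0]
        B = 2.0 * rng.random(d)          # Eq (2.3)
        E = 2.0 * h * rng.random(d) - h  # Eq (2.4)
        D_h = np.abs(B * x_best - x_k)   # Eq (2.6)
        return x_best - E * D_h          # Eq (2.7)

    def attack(cluster):
        """Eq (2.10): the next position is the centroid C_h / N of the cluster."""
        return cluster.sum(axis=0) / cluster.shape[0]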

    Algorithm 1 Pseudocode of the SHO
    1. Initialize the spotted hyena population Xi (i=1,2,,n)
    2. Initialize h, B, E, and N
    3. Calculate the fitness of each search agent
    4. Xh= best search agent
    5. Ch = group or cluster of all optimal solutions obtained so far
    6. while (t < max number of iterations)
    7.      for each search agent do
    8.        Update the position of the current search agent by Eq (2.10)
    9.     end for
    10.       Update h, B, E and N
    11.       Check if any search agent goes beyond the search space and revamp it
    12.      Calculate the fitness of each search agent
    13.       Update Xh if there is a better solution by Eqs (2.6) and (2.7)
    14.      t = t + 1
    15.  end while
    16. Return Xh

    The parameters B and E oblige the SHO algorithm to explore and exploit the search space. As h decreases, half the iterations are dedicated to exploration (when |E|>1) and the remainder are dedicated to exploitation (when |E|<1) [32]. As noted above, the B vector contains random values in [0, 2]. This component provides random weights for the prey to stochastically emphasize (B>1) or deemphasize (B<1) the effect of the prey in defining the distance in Eq (2.1). This increases the random behavior of the SHO during the course of optimization and favors exploration and the avoidance of local optima. The pseudocode of the SHO algorithm is given above.

    Haupt et al. found that the initial population affects an algorithm’s accuracy and convergence speed [32]. A better initial population lays the foundation for the global search of the SHO algorithm [21]. However, without any prior knowledge of the global optimal solution of the problem, the SHO algorithm typically generates the initial search agents randomly, which affects the search efficiency. Opposition-based learning is a relatively new technique in the field of intelligent computing and has been successfully applied to swarm intelligence algorithms such as PSO, HS, and DE [33,34]. In this paper, the opposition-based learning strategy is embedded into the SHO for initialization.

    Algorithm 2 Initialization method based on opposite learning
    Set the population size to N
    1.     for i=1 to N do
    2. for j=1 to d do
    3.       $X_j^i = l_j^i + \mathrm{rand}(0,1) \cdot (u_j^i - l_j^i)$
    4.          end for
    5.      end for
    6.       for i=1 to N do
    7.          for j=1 to d do
    8.           $\bar{X}_j^i = l_j^i + u_j^i - X_j^i$
    9.          end for
    10.       end for
    11. Output $\{X(N)\} \cup \{\bar{X}(N)\}$; the N individuals with the best fitness are selected as the initial population.

    Definition: Opposition-based point [35]. Suppose $X$ exists in $[l, u]$. The opposite point is $\bar{X} = l + u - X$. Let $X = (X_1, X_2, \ldots, X_d)$ be a point in d-dimensional space, where $X_1, X_2, \ldots, X_d \in \mathbb{R}$ and $X_i \in [l_i, u_i]$ for all $i \in \{1, 2, \ldots, d\}$. The opposite point $\bar{X} = (\bar{X}_1, \bar{X}_2, \ldots, \bar{X}_d)$ is completely defined by its components:

    $\bar{X}_i = l_i + u_i - X_i$. (3.1)

    According to the above definition, the specific steps for using the opposite learning strategy to generate the initial population are as above.
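
    A compact sketch of this opposition-based initialization (Algorithm 2 together with Eq (3.1)) is shown below; the function name and the convention that the fitness is minimized are assumptions made for illustration.

    import numpy as np

    def opposition_init(n, lower, upper, fitness, rng):
        """Generate n random agents, add their opposite points (Eq (3.1)),
        and keep the n candidates with the best (smallest) fitness."""
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        X = lower + rng.random((n, len(lower))) * (upper - lower)  # random agents
        X_opp = lower + upper - X                                  # opposite points
        pool = np.vstack([X, X_opp])                               # 2n candidates
        scores = np.array([fitness(x) for x in pool])
        return pool[np.argsort(scores)[:n]]                        # best n as initial population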

    Similar to other population-based intelligent optimization algorithms, the SHO must coordinate its exploration and exploitation capabilities. During exploration, the group needs to probe a wider search area so that the SHO algorithm does not become stuck in a local optimum. The exploitation capability mainly uses the group’s existing information to search local regions of the solution space and has a decisive influence on the convergence rate of the SHO algorithm. Clearly, robustness and fast convergence are achieved only when the SHO algorithm properly coordinates its exploration and exploitation capabilities.

    According to [21], the SHO algorithm’s exploration and exploitation abilities depend on the change of the convergence factor h. The larger the convergence factor h, the better the global search ability and the more likely the SHO algorithm is to avoid falling into a local optimum; the smaller the convergence factor h, the stronger the local search ability, which speeds up convergence. However, in the basic SHO algorithm, the convergence factor h decreases linearly from 5 to 0 as the number of iterations increases. This linearly decreasing strategy provides a good global search ability in the early stage of the algorithm but a slow convergence speed; in the later stage it speeds up convergence but makes it easy to fall into a local optimum, particularly on multimodal function problems. Therefore, in the evolutionary search process, a convergence factor h that decreases linearly with the number of iterations cannot fully reflect the actual search process of the SHO algorithm [36]. In fact, the SHO is expected to have a strong global search ability in the early search period while maintaining a fast convergence rate. Additionally, Enns et al. and Zeng et al. found that performance improves if the control parameter is chosen as a nonlinearly, rather than linearly, decreasing quantity [30,32]. Thus, the control parameter h is modified as follows:

    $h = h_{\mathrm{final}} + (h_{\mathrm{initial}} - h_{\mathrm{final}}) \times \left(\dfrac{\mathrm{Max\_iteration} - t}{\mathrm{Max\_iteration}}\right)^{u}$, (3.2)

    where t is the current iteration, Max_iteration is the maximum number of iterations, u is the nonlinear modulation index, and $h_{\mathrm{initial}}$ and $h_{\mathrm{final}}$ are the initial and final values of the control parameter h, respectively. Following [32], $h_{\mathrm{initial}}$ is set to 5 and $h_{\mathrm{final}}$ to 0.
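
    A one-line implementation of the reconstructed nonlinear schedule of Eq (3.2) is given below for reference; the function name is illustrative.

    def nonlinear_h(t, max_iteration, u, h_initial=5.0, h_final=0.0):
        """Eq (3.2): h decays from h_initial at t = 0 to h_final at t = max_iteration;
        u is the nonlinear modulation index that shapes the decay curve."""
        frac = (max_iteration - t) / max_iteration
        return h_final + (h_initial - h_final) * frac ** u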

    Figure 1 shows typical variations of the control parameter h with iterations for different values of u. We conducted several SHO experiments with the nonlinear modulation index u in the interval (0, 2.0); on average, the results were better than those of existing algorithms, whereas for larger values of u (u > 2.0) the rate of failed convergence increased.

    Figure 1.  Control parameter (h) with iterations for different values of the nonlinear modulation index (u).

    Similar to other population-based intelligent optimization algorithms, in the late iteration of the SHO, all the spotted hyenas move closer to the optimal individual region, which results in a reduction of population diversity. In this case, if the current optimal individual is the local optimal, then the SHO algorithm falls into a local optimum. This is also an inherent characteristic of other group intelligent optimization algorithms. To reduce the probability of premature convergence for the SHO algorithm, in this paper, a diversity mutation operation is performed on the current optimal spotted hyena individuals. The steps are as follows:

    Assume that an individual $X_i = (x_{i1}, x_{i2}, \ldots, x_{id})$ of the spotted hyena population selects one element $x_{ik}$ ($k \in \{1, 2, \ldots, d\}$) at random with a given probability and replaces it with a real number generated randomly in the range $[l_i, u_i]$, thus producing a new individual $X_i' = (x_{i1}', x_{i2}', \cdots, x_{id}')$. The mutation operation is

    $X_i' = \begin{cases} l_i + \lambda (u_i - l_i), & i = k \\ X_i, & \text{otherwise}, \end{cases}$ (3.3)

    where $l_i$ and $u_i$ are the lower and upper bounds of the variable x, respectively, and $\lambda \in [0, 1]$ is a random number. The ISHO optimizer steps are presented in Algorithm 3.
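
    Before turning to the full pseudocode, the mutation step of Eq (3.3) can be sketched as follows; the function name and the use of a NumPy random generator are illustrative.

    import numpy as np

    def diversity_mutation(x, lower, upper, rng):
        """Eq (3.3): reset one randomly chosen component of the best individual
        to a uniform random value within its bounds; other components are kept."""
        x_new = np.array(x, dtype=float)
        k = rng.integers(x_new.size)                      # randomly selected component
        lam = rng.random()                                # lambda in [0, 1]
        x_new[k] = lower[k] + lam * (upper[k] - lower[k])
        return x_new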

    Algorithm 3 Pseudocode of the ISHO
    1. Set the population size N using the opposite learning strategy described in Algorithm 2 to generate an initialized spotted hyena population Xi(i=1,2,,n)
    2. Initialize the parameters h, B, E, and N
    3. Calculate the fitness of each agent
    4. Ph = best search agent
    5. Ch = group or cluster of all optimal solutions obtained so far
    6. while (t<tmax) do
    7.    for i =1 to N do
    8.     Using Eq (3.2), calculate the value of the convergence factor h
    Update the other parameters B and E using Eqs (2.3) and (2.4), respectively
    9.      if (|E| ≥ 1) do
    10.        According to Eq (2.7), update the spotted hyena individual’s position
    11.     end if
    12.      if (|E| < 1) do
    13.       According to Eq (2.10), update the spotted hyena individual’s position
    14.      end if
    15.   end for
    16.    Perform diversity mutation on the current spotted hyena individual using Eq (3.3)
    17.   Calculate the fitness of each agent
    18.    Update Xi if there is a better solution
    19.    t = t+1
    20. end while
    21.end
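
    Putting the pieces together, a compact Python sketch of Algorithm 3 is given below. It reuses the opposition_init, nonlinear_h, and diversity_mutation sketches above; the cluster size, the use of the Euclidean norm of E for the branching test, and the greedy update of the best agent are illustrative simplifications rather than the authors' reference implementation.

    import numpy as np

    def isho(fitness, lower, upper, n=50, max_iteration=20, u=1.0, n_cluster=5, seed=0):
        """Minimization with the ISHO workflow of Algorithm 3 (sketch)."""
        rng = np.random.default_rng(seed)
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        d = len(lower)
        pop = opposition_init(n, lower, upper, fitness, rng)        # Algorithm 2
        scores = np.array([fitness(x) for x in pop])
        best = pop[scores.argmin()].copy()
        for t in range(max_iteration):
            h = nonlinear_h(t, max_iteration, u)                    # Eq (3.2)
            cluster = pop[np.argsort(scores)[:n_cluster]]           # stand-in for C_h, Eqs (2.8)-(2.9)
            for i in range(n):
                B = 2.0 * rng.random(d)                             # Eq (2.3)
                E = 2.0 * h * rng.random(d) - h                     # Eq (2.4)
                if np.linalg.norm(E) >= 1.0:                        # exploration branch
                    D_h = np.abs(B * best - pop[i])                 # Eq (2.6)
                    pop[i] = best - E * D_h                         # Eq (2.7)
                else:                                               # exploitation / attack branch
                    pop[i] = cluster.mean(axis=0)                   # Eq (2.10)
                pop[i] = np.clip(pop[i], lower, upper)              # keep agents inside the bounds
            mutant = diversity_mutation(best, lower, upper, rng)    # Eq (3.3)
            if fitness(mutant) < fitness(best):
                best = mutant
            scores = np.array([fitness(x) for x in pop])
            if scores.min() < fitness(best):
                best = pop[scores.argmin()].copy()
        return best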

    Maintaining constancy and stability of the rated voltage level in an electricity network is one of the main problems in power system control. If the voltage deviates from the rated level, then performance degrades and life expectancy is reduced. Another important reason for this control is line loss, which depends on the real and reactive power flow; the reactive power flow is largely dependent on the terminal voltage of the power system. It is therefore necessary to reduce line losses by controlling the rated voltage level. To solve these control problems, an AVR system is applied to power generation units [36]. The role of the AVR is to maintain the terminal voltage of the synchronous alternator at the rated voltage value.

    The PID controller is used to improve the dynamic response while reducing or eliminating the steady-state error; its derivative term adds a finite zero to the open-loop plant, which improves the transient response. The PID controller transfer function is

    $C(s) = K_p + \dfrac{K_i}{s} + K_d s$. (4.1)
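
    As a quick illustration, the controller of Eq (4.1) can be written as a single rational transfer function, C(s) = (K_d s^2 + K_p s + K_i)/s; a minimal sketch using scipy.signal follows, with the example gains taken from the ISHO column of Table 3.

    from scipy import signal

    def pid_tf(Kp, Ki, Kd):
        """PID controller of Eq (4.1) as the transfer function (Kd*s^2 + Kp*s + Ki) / s."""
        return signal.TransferFunction([Kd, Kp, Ki], [1, 0])

    C = pid_tf(1.0263, 0.7115, 0.3154)   # ISHO-tuned gains reported in Table 3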

    A simple AVR system has four parts: Amplifier, exciter, generator, and sensor. The mathematical models of these four components are taken to be linear, first-order transfer functions, each characterized by a gain and a time constant. To analyze the dynamic performance of an AVR, the transfer functions of these components are given in [37,38].

    The amplifier model is represented by a gain KA and time constant τA. The transfer function is given by

    $\dfrac{V_r(s)}{V_e(s)} = \dfrac{K_A}{1 + \tau_A s}$, (4.2)

    where the range of KA is [10,400] and the amplifier time constant ranges from 0.02–0.1 s.

    The transfer function of an exciter is modeled by a gain KE and time constant τE, and given by

    $\dfrac{V_f(s)}{V_r(s)} = \dfrac{K_E}{1 + \tau_E s}$, (4.3)

    where KE is typically in the range [10,400] and the time constant τE is in the range 0.5–1.0 s.

    The generator model is represented by a gain KG and time constant τG. The transfer function is given by

    $\dfrac{V_t(s)}{V_f(s)} = \dfrac{K_G}{1 + \tau_G s}$, (4.4)

    where $K_G$ is in the range [0.7, 1.0] and $\tau_G$ is in the range 1.0–2.0 s. The generator gain $K_G$ and time constant $\tau_G$ are load dependent.

    The sensor is modeled by a gain KR and time constant τR. The transfer function is given by

    $\dfrac{V_s(s)}{V_t(s)} = \dfrac{K_R}{1 + \tau_R s}$, (4.5)

    where KR is in the range [10,400] and τR is in the range 0.001–0.06 s.

    The complete transfer function model of the AVR system is given in Figure 2. In the work of Gozde and Taplanmacioglu [37], the parameters of the AVR system were KA=10.0, τA=0.1, KE=1.0, τE=0.4, KG=1.0, τG=1.0, KR=1.0, and τR=0.01.

    Figure 2.  Block diagram of an AVR system with an ISHO-PID controller.

    The transfer function of the AVR system with the above parameters is

    $\dfrac{\Delta V_t(s)}{\Delta V_{ref}(s)} = \dfrac{0.1 s + 10}{0.0004 s^4 + 0.045 s^3 + 0.555 s^2 + 1.51 s + 11}$. (4.6)
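
    The coefficients of Eq (4.6) can be checked numerically from the block parameters listed above by composing the first-order transfer functions with polynomial arithmetic. The sketch below assumes the usual structure of Figure 2 (amplifier, exciter, and generator in the forward path, sensor in the feedback path); it reproduces the numerator 0.1s + 10 and a denominator of 0.0004s^4 + 0.0454s^3 + 0.555s^2 + 1.51s + 11 (the paper rounds the s^3 coefficient to 0.045).

    import numpy as np

    # gains and time constants of the AVR model above
    K_A, tau_A = 10.0, 0.1
    K_E, tau_E = 1.0, 0.4
    K_G, tau_G = 1.0, 1.0
    K_R, tau_R = 1.0, 0.01

    forward_den = np.polymul(np.polymul([tau_A, 1], [tau_E, 1]), [tau_G, 1])  # (1+0.1s)(1+0.4s)(1+s)
    num = np.polymul([K_A * K_E * K_G], [tau_R, 1])                           # 10*(1+0.01s) -> 0.1s + 10
    den = np.polyadd(np.polymul(forward_den, [tau_R, 1]),                     # loop denominator ...
                     [K_A * K_E * K_G * K_R])                                 # ... plus the loop gain
    print(num)   # [ 0.1 10. ]
    print(den)   # [0.0004 0.0454 0.555  1.51  11.    ]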

    To improve the dynamic response of the AVR system and maintain the terminal voltage at 1.0 pu, a PID controller is included, as shown in Figure 2.

    With the PID controller, the transfer function of the AVR system of Figure 2 becomes

    $\dfrac{\Delta V_t(s)}{\Delta V_{ref}(s)} = \dfrac{0.1 K_d s^3 + (0.1 K_p + 10 K_d) s^2 + (0.1 K_i + 10 K_p) s + 10 K_i}{0.0004 s^5 + 0.045 s^4 + 0.555 s^3 + (1.51 + 10 K_d) s^2 + (1 + 10 K_p) s + 10 K_i}$. (4.7)

    An AVR system with a PID controller tuned by the ISHO algorithm is shown in Figure 2. The gains of the PID controller are regulated by the ISHO algorithm. If the proportional gain is too high, the system becomes unstable; if it is too low, the result is a larger error and lower sensitivity. For an AVR system, the gain ranges commonly used in the literature are [0.0, 1.5] and [0.2, 2.0] [38,39]. To increase the search space for better optimization gains, the lower and upper bounds are chosen to be 0.01 and 2, respectively.

    To improve the control performance, the optimal PID parameters are determined with the ISHO. The maximum overshoot, rise time, and steady-state error are typical time-domain performance indices, and the integral of time-weighted absolute error (ITAE) is adopted as the control performance indicator in the design. The ITAE objective function is

    $\mathrm{ITAE} = \displaystyle\int_0^{t} t \left| V_t - V_{ref} \right| \, dt$. (4.8)
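
    A hedged sketch of this objective evaluation is given below: it builds the closed-loop transfer function of Eq (4.7) for one gain set (using the coefficients as printed, including the rounded 0.045), simulates a unit reference step with scipy, and integrates t·|V_t − V_ref| numerically. The function name and the simulation horizon are illustrative choices.

    import numpy as np
    from scipy import signal

    def avr_itae(Kp, Ki, Kd, t_end=2.0, n_points=2000):
        """ITAE of Eq (4.8) for a unit step on the closed-loop AVR of Eq (4.7)."""
        num = [0.1 * Kd, 0.1 * Kp + 10 * Kd, 0.1 * Ki + 10 * Kp, 10 * Ki]
        den = [0.0004, 0.045, 0.555, 1.51 + 10 * Kd, 1 + 10 * Kp, 10 * Ki]
        t = np.linspace(0.0, t_end, n_points)
        _, vt = signal.step(signal.TransferFunction(num, den), T=t)
        return np.trapz(t * np.abs(1.0 - vt), t)     # reference voltage V_ref = 1.0 pu

    print(avr_itae(1.0263, 0.7115, 0.3154))          # ISHO gains from Table 3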

    An ISHO-PID controller is presented for searching the optimal or near optimal controller parameters Kp, Ki, and Kd using the ISHO algorithm. Each individual K contains three members: Kp, Ki, and Kd. The searching procedures of the proposed ISHO-PID controller are presented in Algorithm 4. The three controller parameters set in the algorithm are shown in Table 1.

    Algorithm 4 ISHO solution for the PID control system algorithm
    1. Determine the parameters of the PID, proportional gain Kp, integral gain Ki, and differential gain Kd.
    2. Randomly initialize population X of N individuals (solutions), iter=0, and set the parameters of the ISHO, h, B, and E, and maximum number of iterations itermax.
    3. Set the lower and upper bounds of the three controller parameters for each individual, apply the PID controller with gains specified by that individual to the PID controller, run all the system steps, and calculate the fitness value of each individual using Eq (4.8).
    4. Determine the optimal spotted hyena individual.
       while the termination criterion is not satisfied do
    5. for each individual $x \in X$ do
    5.1. Propagate each spotted hyena individual $x$ to a new individual $x'$ using Eq (2.7).
    5.2. if $f(x')$ is better than $f(x)$, then replace $x$ with $x'$.
    6. Update the position of all individuals using Eq (2.10).
    7. Calculate each individual fitness function using Eq (4.8).
    8. iter=iter+1.
    9. if iter<itermax, then go to step 5.
    10. Output the best solution and the optimal controller parameters.
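
    As an end-to-end usage illustration, the ISHO sketch from Section 3 can be combined with the avr_itae objective above to search the gain ranges of Table 1; this simply wires together the earlier hypothetical helpers and is not the authors' experimental setup.

    # search Kp in [0, 1.5] and Ki, Kd in [0, 1.0] (Table 1) with the isho sketch
    best_gains = isho(fitness=lambda g: avr_itae(*g),
                      lower=[0.0, 0.0, 0.0], upper=[1.5, 1.0, 1.0],
                      n=50, max_iteration=20)
    print("Kp, Ki, Kd =", best_gains)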

    Table 1.  Three controller parameters set in the algorithm.
    Controller parameters Min. value Max. value
    Kp 0 1.5
    Ki 0 1.0
    Kd 0 1.0


    The ISHO algorithm was applied to optimize the PID controller for an AVR system and determine a set of optimal gains that minimize the value of the objective function. To prove the superiority of the ISHO algorithm, we compared the ISHO with other algorithms that contain the SHO [19], GWO [40], PSOGSA [41], FPA [42], and SCA [43]. The results showed that the ISHO performed better than the other algorithms. All the parameters set in the algorithms are given in Table 2.

    Table 2.  Parameters set in the six algorithms.
    Algorithms Parameter values
    SCA $r_2 \in [0, 2\pi]$, $a = 2$, $r_4 \in [0, 1]$. The population size is 50.
    FPA The proximity probability p = 0.8. The population size is 50.
    PSOGSA $c_1 = c_2 = 2$, $\omega_{\max} = 0.3$, $\omega_{\min} = 0.1$, $G_0 = 1$, $\alpha = 5$. The population size is 50.
    GWO Component $\alpha \in [0, 2]$ over the course of iterations. The population size is 50.
    SHO The parameter $h \in [0, 5]$ over the course of iterations. The population size is 50.
    ISHO $h_{\mathrm{initial}} = 5$, $h_{\mathrm{final}} = 0$, $u \in [0, 2.0]$. The population size is 50.


    With the parameters of the six algorithms set as above, we obtained the best parameter values of each algorithm for optimizing the PID parameters using a population size of 50, 20 iterations, and 20 independent runs. "Best" is the optimal fitness value, "Worst" is the worst fitness value, "Mean" is the mean fitness value, and "Std." is the standard deviation. Table 3 shows that the best fitness value of the ISHO was significantly better than those of the other algorithms, and the standard deviation of the ISHO was the smallest. This demonstrates that the ISHO is better than the standard SHO and the other algorithms (SCA, FPA, PSOGSA, and GWO) at obtaining the optimal PID parameters.

    Table 3.  Optimum parameters of the PID controller.
    Algorithms SCA FPA PSOGSA GWO SHO ISHO
    Kp 1.4155 1.4230 1.4012 1.3168 1.3079 1.0263
    Ki 0.9721 0.9974 0.9602 0.9051 0.9234 0.7115
    Kd 0.4546 0.4302 0.4601 0.4219 0.3985 0.3154
    Best 0.0328 0.0329 0.0328 0.0329 0.0334 0.0327
    Worst 0.0382 0.0380 0.0333 0.0369 0.0370 0.0331
    Ave 0.0344 0.0341 0.0330 0.0344 0.0347 0.0328
    Std 0.0015 0.0012 1.5427 × 10−4 0.0012 9.8111 × 10−4 8.4078 × 10−5


    In Table 3, the optimal PID parameters, the optimal fitness values, and the minimum standard deviation are indicated in bold. The table shows that, although the PSOGSA algorithm is a hybrid of the PSO and GSA algorithms, it was still not as effective as the ISHO algorithm in searching for the optimal PID parameters.

    Figure 3.  AVR system terminal voltage curves for different algorithms.
    Table 4.  Results of the transient response for different algorithms.
    Algorithms SCA FPA PSOGSA GWO SHO ISHO
    Maximum overshoots 1.1751 1.1601 1.1383 1.1767 1.2017 1.120
    Peak time (s) 0.3821 0.3923 0.3883 0.3924 0.3951 0.4160
    Settling time (s) 0.6930 1.003 0.9723 0.9494 0.9288 0.8481
    Rise time (s) 0.2382 0.2207 0.2883 0.2274 0.2010 0.3021


    The transient and steady-state behavior of the system can be analyzed from the transient analysis of the ISHO-optimized PID controller in the AVR system, as shown in Figure 3. For comparison, the responses for the SCA, FPA, PSOGSA, GWO, and SHO algorithms are given in Table 4. The figure shows that the maximum overshoot for the ISHO algorithm is 5% less than that of the SCA algorithm, 3.5% less than that of the FPA algorithm, 4.82% less than that of the GWO algorithm, and 6.8% less than that of the SHO algorithm. The peak time for the ISHO algorithm is longer than those of the SHO, GWO, PSOGSA, FPA, and SCA algorithms. The maximum overshoot and settling time for the ISHO algorithm, which are major factors in the stability analysis of systems, are better than those of the SHO, GWO, PSOGSA, FPA, and SCA algorithms.

    Figure 4.  Evolution curves of the fitness values.

    The convergence characteristics are shown in Figure 4. The figure shows that the fitness value of the ISHO algorithm decreases fastest among all the compared algorithms, indicating that the ISHO algorithm has a strong global search capability and higher precision. The ISHO algorithm is therefore well suited to optimizing the PID controller parameters in the AVR system and has promising potential applications.

    The SHO was inspired by the social behavior of a spotted hyena swarm. Its mathematical model is relatively simple, but the control parameter directly affects the balance between the global search ability and local search ability in the SHO algorithm. Based on the analysis of the above characteristics of the SHO, in this paper, a nonlinear adjustment strategy was adopted for the control parameters and the mutation strategy was used to deal with the update of the intelligent individual position. The performance of the improved algorithm was verified by a simulation. The ISHO quickly approached the theoretical value and significantly improved the convergence speed and optimization efficiency. The ISHO algorithm was used to determine the parameters of the PID controller for an AVR system. It is clear from the results that the proposed ISHO algorithm avoided the shortcoming of the premature convergence of the SHO, GWO, PSOGSA, FPA, and SCA algorithms and obtained global solutions with better computation efficiency.

    This work was supported by the Project of China University of Political Science and Law Research Innovation under Grant No. 10818441 and the Young Scholar Fund of China University of Political Science and Law under Grant No. 10819144. We thank Maxine Garcia, PhD, from Liwen Bianji, Edanz Group China (www.liwenbianji.cn/ac) for editing the English text of a draft of this manuscript.

    The authors declare no conflict of interest.



    [1] W. Cao, X. Wang, Z. Ming, et al., A review on neural networks with random weights, Neurocomputing, (2017), S0925231217314613.
    [2] G. Camps-Valls, D. Tuia, L. Bruzzone, et al., Advances in hyperspectral image classification: earth monitoring with statistical learning methods, IEEE Signal Proc. Mag., 31 (2013), 45–54.
    [3] L. Wang, Y. Zeng and T. Chen, Back propagation neural network with adaptive differential evolution algorithm for time series forecasting, Expert Syst. Appl., 42 (2015), 855–863.
    [4] E. Maggiori, Y. Tarabalka, G. Charpiat, et al., Convolutional neural networks for large-scale remote sensing image classification, IEEE T. Geosci. Remote, 55 (2016), 645–657.
    [5] G. B. Huang, Q. Y. Zhu and C. K. Siew, Extreme learning machine: theory and applications, Neurocomputing, 70 (2006), 489–501.
    [6] J. Zhang, Y. F. Lu, B. Q. Zhang, et al., Device-free localization using empirical wavelet transform-based extreme learning machine, Proceedings of the 30th Chinese Control and Decision Conference, (2018), 2585–2590.
    [7] Y. J. Li, S. Zhang, Y. X. Yin, et al., A soft sensing scheme of gas utilization prediction for blast furnace via improved extreme learning machine, Neural Process. Lett. (2018), 10.1007/s11063-018-9888-3.
    [8] J. Zhang, Y. F. Xu, J. Q. Xue, et al., Real-time prediction of solar radiation based on online sequential extreme learning machine, Proceedings of the 13th IEEE Conference on Industrial Electronics and Applications, (2018), 53–57.
    [9] R. Z. Song, W. D. Xiao, Q. L. Wei, et al., Neural-network-based approach to finite-time optimal control for a class of unknown nonlinear systems, Soft Comput., 18 (2014), 1645–1653.
    [10] J. Zhang, W. D. Xiao, Y. J. Li, et al., Multilayer probability extreme learning machine for device-free localization. Neurocomputing, (2019), 10.1016/j.neucom.2018.11.106.
    [11] Y. Park, and H. S. Yang, Convolutional neural network based on an extreme learning machine for image classification, Neurocomputing, 339 (2019), 66–76.
    [12] G. B. Huang, H. Zhou, X. Ding, et al., Extreme learning machine for regression and multiclass classification, IEEE T. Syst. Man Cy. B., 42 (2012), 513–529.
    [13] F. Han, H. F. Yao and Q. H. Ling, An improved evolutionary extreme learning machine based on particle swarm optimization, Neurocomputing, 116 (2013), 87–93.
    [14] A. Rashno, B. Nazari, S. Sadri, et al., Effective pixel classification of mars images based on ant colony optimization feature selection and extreme learning machine, Neurocomputing, 226 (2017), 66–79.
    [15] G. Li, P. Niu, Y. Ma, et al., Tuning extreme learning machine by an improved artificial bee colony to model and optimize the boiler efficiency, Knowl-Based Syst., 67 (2014), 278–289.
    [16] Ş. İ. Birbil and S. C. Fang, An electromagnetism-like mechanism for global optimization, J. Global Optim., 25 (2003), 263–282.
    [17] C. J. Zhang, X. Y. Li, L. Gao, et al., An improved electromagnetism-like mechanism algorithm for constrained optimization, Expert Syst. Appl., 40 (2013), 5621–5634.
    [18] C. T. Tseng, C. H. Lee, Y. S. P. Chiu, et al., A discrete electromagnetism-like mechanism for parallel machine scheduling under a grade of service provision, Int. J. Prod. Res., 55 (2017), 3149–3163.
    [19] X. Y. Li, L. Gao, Q. K. Pan, et al., An effective hybrid genetic algorithm and variable neighborhood search for integrated process planning and scheduling in a packaging machine workshop, IEEE T. Syst. Man Cy. Syst., (2018), 10.1109/TSMC.2018.2881686.
    [20] X. Y. Li, C. Lu, L. Gao, et al., An Effective Multi-Objective Algorithm for Energy Efficient Scheduling in a Real-Life Welding Shop, IEEE T. Ind. Inform., 14 (2018), 5400–5409.
    [21] X. Y. Li, S. Q. Xiao, C. Y. Wang, et al., Mathematical Modeling and a Discrete Artificial Bee Colony Algorithm for the Welding Shop Scheduling Problem, Memetic Comp., (2019), 10.1007/s12293-019-00283-4.
    [22] Q. Wu, L. Gao, X. Y. Li, et al., Applying an electromagnetism-like mechanism algorithm on parameter optimisation of a multi-pass milling process, Int. J. Prod. Res., 51 (2013), 1777–1788.
    [23] K. J. Wang, A. M. Adrian, K. H. Chen, et al., An improved electromagnetism-like mechanism algorithm and its application to the prediction of diabetes mellitus, J. Biomed. Inform., 54 (2015), 220–229.
    [24] S. Mirjalili, Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems, Neural Comput. Appl., 27 (2016), 1053–1073.
    [25] G. Huang, G. B. Huang, S. Song, et al., Trends in extreme learning machines: a review, Neural Networks, 61 (2015), 32–48.
    [26] P. L. Bartlett, The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network, IEEE T. Inform. Theory, 44 (2002), 525–536.
    [27] Q. Y. Zhu, A. K. Qin, P. N. Suganthan, et al., Evolutionary extreme learning machine, Pattern Recogn., 38 (2005), 1759–1763.
    [28] D. Dua, and E. K. Taniskidou, UCI Machine Learning Repository Irvine, CA: University of California, School of Information and Computer Science, 2017. Available from: http://archive.ics.uci.edu/ml.
    [29] Y. Wang, A. Wang, Q. Ai, et al., A novel artificial bee colony optimization strategy-based extreme learning machine algorithm, Prog. Artif. Intell., 6 (2016), 1–12.
  • This article has been cited by:

    1. Amirreza Naderipour, Zulkurnain Abdul-Malek, Mohammad Hajivand, Zahra Mirzaei Seifabad, Mohammad Ali Farsi, Saber Arabi Nowdeh, Iraj Faraji Davoudkhani, Spotted hyena optimizer algorithm for capacitor allocation in radial distribution system with distributed generation and microgrid operation considering different load types, 2021, 11, 2045-2322, 10.1038/s41598-021-82440-9
    2. Nibedan Panda, Santosh Kumar Majhi, Rosy Pradhan, A Hybrid Approach of Spotted Hyena Optimization Integrated with Quadratic Approximation for Training Wavelet Neural Network, 2022, 47, 2193-567X, 10347, 10.1007/s13369-022-06564-4
    3. Davut Izci, Serdar Ekinci, Hatice Lale Zeynelgil, Controlling an automatic voltage regulator using a novel Harris hawks and simulated annealing optimization technique, 2023, 2578-0727, 10.1002/adc2.121
    4. Nikhil Paliwal, Laxmi Srivastava, Manjaree Pandit, Rao algorithm based optimal Multi‐term FOPID controller for automatic voltage regulator system , 2022, 43, 0143-2087, 1707, 10.1002/oca.2926
    5. Serdar Ekinci, Davut Izci, Erdal Eker, Laith Abualigah, An effective control design approach based on novel enhanced aquila optimizer for automatic voltage regulator, 2023, 56, 0269-2821, 1731, 10.1007/s10462-022-10216-2
    6. Abdelhakim Idir, Laurent Canale, Yassine Bensafia, Khatir Khettab, Design and Robust Performance Analysis of Low-Order Approximation of Fractional PID Controller Based on an IABC Algorithm for an Automatic Voltage Regulator System, 2022, 15, 1996-1073, 8973, 10.3390/en15238973
    7. Shafih Ghafori, Farhad Soleimanian Gharehchopogh, Advances in Spotted Hyena Optimizer: A Comprehensive Survey, 2022, 29, 1134-3060, 1569, 10.1007/s11831-021-09624-4
    8. Yi Zhang, XianBo Sun, Li Zhu, ShengXin Yang, YueFei Sun, Qingling Wang, Research on Three-Phase Unbalanced Commutation Strategy Based on the Spotted Hyena Optimizer Algorithm, 2022, 2022, 1099-0526, 1, 10.1155/2022/2092421
    9. Chunhui Mo, Xiaofeng Wang, Lin Zhang, 2022, Chapter 10, 978-981-19-8151-7, 142, 10.1007/978-981-19-8152-4_10
    10. Davut Izci, Serdar Ekinci, H. Lale Zeynelgil, John Hedley, Performance evaluation of a novel improved slime mould algorithm for direct current motor and automatic voltage regulator systems, 2022, 44, 0142-3312, 435, 10.1177/01423312211037967
    11. Davut Izci, Serdar Ekinci, Seyedali Mirjalili, Optimal PID plus second-order derivative controller design for AVR system using a modified Runge Kutta optimizer and Bode’s ideal reference model, 2022, 2195-268X, 10.1007/s40435-022-01046-9
    12. Nikhil Paliwal, Laxmi Srivastava, Manjaree Pandit, Equilibrium optimizer tuned novel FOPID‐DN controller for automatic voltage regulator system , 2021, 31, 2050-7038, 10.1002/2050-7038.12930
    13. Muhammad Imran Nadeem, Kanwal Ahmed, Dun Li, Zhiyun Zheng, Hafsa Naheed, Abdullah Y. Muaad, Abdulrahman Alqarafi, Hala Abdel Hameed, SHO-CNN: A Metaheuristic Optimization of a Convolutional Neural Network for Multi-Label News Classification, 2022, 12, 2079-9292, 113, 10.3390/electronics12010113
    14. Özay Can, Cenk Andiç, Serdar Ekinci, Davut Izci, Enhancing transient response performance of automatic voltage regulator system by using a novel control design strategy, 2023, 0948-7921, 10.1007/s00202-023-01777-8
    15. Sudhakar Babu Thanikanti, T. Yuvaraj, R. Hemalatha, Belqasem Aljafari, Nnamdi I. Nwulu, Optimizing Radial Distribution System With Distributed Generation and EV Charging: A Spotted Hyena Approach, 2024, 12, 2169-3536, 113422, 10.1109/ACCESS.2024.3438456
    16. Bora Çavdar, Erdinç Şahin, Erhan Sesli, On the assessment of meta-heuristic algorithms for automatic voltage regulator system controller design: a standardization process, 2024, 106, 0948-7921, 5801, 10.1007/s00202-024-02314-x
    17. Bora Çavdar, Erdinç Şahin, Ömür Akyazı, Fatih Mehmet Nuroğlu, A novel optimal PI^{λ1}I^{λ2}D^{μ1}D^{μ2} controller using mayfly optimization algorithm for automatic voltage regulator system, 2023, 35, 0941-0643, 19899, 10.1007/s00521-023-08834-0
    18. Rohit Salgotra, Pankaj Sharma, Saravanakumar Raju, Amir H. gandomi, A Contemporary Systematic Review on Meta-heuristic Optimization Algorithms with Their MATLAB and Python Code Reference, 2024, 31, 1134-3060, 1749, 10.1007/s11831-023-10030-1
    19. Ömer Öztürk, Bora Çavdar, Otomatik Gerilim Regülatörü Sistemi Denetleyici Tasarımı için Meta-Sezgisel Algoritmaların Performansı, 2024, 14, 2564-7377, 2258, 10.31466/kfbd.1558173
    20. Tapas Si, Péricles B. C. Miranda, Utpal Nandi, Nanda Dulal Jana, Ujjwal Maulik, Saurav Mallik, Mohd Asif Shah, QSHO: Quantum spotted hyena optimizer for global optimization, 2025, 58, 1573-7462, 10.1007/s10462-024-11072-y
    21. Fei Dai, Tianli Ma, Song Gao, Optimal Design of a Fractional Order PIDD2 Controller for an AVR System Using Hybrid Black-Winged Kite Algorithm, 2025, 14, 2079-9292, 2315, 10.3390/electronics14122315
  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)