



Solar power integration into the power grid is increasing significantly in many countries [1]. Strong government policies toward a greener power grid, together with other technological and economic factors, have accelerated the growth of renewable energy generation. The growth of solar PV in 2017 was unprecedented; solar PV accounted for 27% of overall renewables growth, followed by hydropower (22%) and bioenergy (12%) [2,3]. A large proportion of future electricity demand must be met by renewable energy because of the global-warming effects of conventional thermal generation. Costs of solar PV panels have fallen by more than 75% over the last decade [2] and are expected to fall further given the strong focus on reducing the cost of manufacturing PV modules. Cost reductions involve the use of thinner wafers, manufacturing wafers without ingot slicing, microinverters, and multijunction solar cells [2]. Moreover, conversion efficiency can be increased up to 30–50%, especially by using multijunction solar cells [2].

The integration of renewable energy depends largely on the system characteristics, and further integration would require various types of system enhancements at additional cost. With a large proportion of solar power in the power grid, system reliability can be drastically reduced. Large battery storage systems [4] or pumped-storage power plants are required to maintain system reliability when more solar power is added. Accurate and reliable solar power forecasts can be used to overcome these issues. In addition, solar power forecasting helps system control engineers to efficiently dispatch hydro and thermal power plants and to manage spinning reserves and transmission line constraints.

Solar power generation depends on seasonal changes, weather parameters, intra-hour variability, and the technology used. Weather parameters such as direct irradiance, diffuse irradiance, wind speed, temperature, humidity, and cloud cover can be used to model solar power generation [5]. Forecasted weather parameters can then be fed into the developed model to obtain future solar power generation; this is called point forecasting. Statistical methods such as regression models and Machine Learning (ML) methods can be used to model solar power generation.

Modeling complex and nonlinear systems using ML is a popular research field. The Support Vector Machine (SVM) is a supervised ML algorithm used for solving classification and regression problems. SVM regression models have been used to model the relationship between weather parameters and solar power generation [6,7,8,9,10,11,12,13,14]. Yang et al. [8] proposed a hybrid model consisting of classification, training, and forecasting stages: a self-organizing map (SOM) and learning vector quantization (LVQ) networks classify the historical solar power generation data into different weather conditions (e.g., rainy, sunny), several support vector regression (SVR) models are then trained for different diurnal variations and weather conditions, and in the forecasting stage the most suitable SVR model is selected by a fuzzy inference method. Li et al. [9] proposed SVR and Neural Network (NN) models for solar power forecasting, with time, historical power information, and meteorological forecasts as model inputs. In [6,12], the authors proposed weather classification approaches with SVR forecasting: days are classified into four categories (clear sky, cloudy, foggy, and rainy), and a separate SVR model forecasts solar power generation in each category. Fentis et al. [11] compared feed-forward NN and SVM forecasting results using Root Mean Square Error (RMSE), Mean Square Error (MSE), and Mean Absolute Error (MAE); the SVM model outperformed the NN model by a slight margin. Bouzerdoum et al. [13] proposed a hybrid model consisting of an SVM model and a seasonal Auto-Regressive Integrated Moving Average (ARIMA) method. In [14], the authors proposed a hybrid forecasting model combining the Wavelet Transform (WT), Particle Swarm Optimization (PSO), and SVM for day-ahead power generation forecasting of a solar PV system; the SVM parameters are optimized and fine-tuned by PSO to achieve higher forecasting accuracy.

In [15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35], various types of NNs are proposed for solar power forecasting. Deep learning NNs such as the Deep Belief Network (DBN), Long Short-Term Memory (LSTM), and autoencoder-based LSTM have been used to model solar power generation from weather parameters [22]. In [15,20], a modified Levenberg-Marquardt learning algorithm and a Bayesian learning technique are used to determine the initial weights of a multilayer perceptron NN instead of the contrastive divergence method [36] used in [22]. Weather classification can be incorporated into NNs to improve forecasting accuracy [19,23,24,31,33]: separate NN models are trained for different weather categories (e.g., cloudy, sunny, foggy [24]) to provide future solar power generation under each condition. A comparison of different learning rules and activation functions used in a multilayer perceptron forecasting model is given in [21]. PSO techniques are used to update the weights of a feedforward NN in [26]. Two Numerical Weather Prediction (NWP) sources are used to improve the reliability and accuracy of an NN model in [27]. In [30], a one-hour-ahead solar power forecasting model is proposed using a combination of WT and artificial intelligence (AI) techniques. A novel PV power forecasting model based on an NN is proposed in [32], considering aerosol index data as an additional input parameter. NNs consisting of many hidden layers and hidden neurons (extreme learning machines) are used to model solar power generation in [35].

Similarity-search-based models are proposed in [37,38,39]. Solar power forecasting is performed by searching for historically similar weather conditions and mapping the respective power generation values onto the given NWP data. The authors propose novel and efficient approaches for time-series database search, similarity evaluation, and the assembly of similar clusters into an overall forecast.

Ensemble models integrate two or more ML techniques to provide a better forecast [5,40,41,42,43,44,45]. In [40], several forecasting models were implemented for predicting short-term, i.e., one-hour-ahead, solar PV output; they include ARIMA, SVM, ANN, the Adaptive Neuro-Fuzzy Inference System (ANFIS), and combination models using a Genetic Algorithm (GA). In [41], an ensemble model is proposed using auto-regressive, radial basis function (RBF), and forward NN models; the forward NN and RBF are trained using PSO to improve forecasting performance. In [5], an ensemble consisting of seven ML techniques (decision tree, gradient boosting, K-nearest neighbors (KNN) with uniform weights, KNN with distance-based weights, Lasso, Random Forests (RFs), and ridge) is proposed. Haque et al. [44] proposed an ensemble algorithm that combines WT and a fuzzy ARTMAP (FA) network; WT is used for data filtering, and the ensemble model is optimized by a Firefly (FF) algorithm. In [45], an RF model is used as an ensemble learning method to combine the forecasts generated by SVMs.

Probabilistic models based on a higher-order Markov chain and Bayesian methods are proposed in [46] and [47], respectively. An intelligent solar power forecasting model based on fuzzy logic is proposed in [48]; the model is applied to different types of days such as clear, hazy, partly cloudy, and cloudy. Solar irradiance forecasting is a crucial factor in solar power forecasting. In [49], the authors propose a corrective algorithm for improving the accuracy of global horizontal irradiation; an ANN is used to improve solar irradiance forecasts obtained from numerical weather prediction. In addition, descriptive reviews of available solar power forecasting methods can be found in [50,51].

According to the present literature, ensemble models show better forecasting performance than single machine learning methods [5,6,8,12,19,23,24,40,41,42,43,44,45]. Ensemble models are implemented in two different ways. One way is to cluster the dataset and apply a separate ML technique to each cluster [6,8,12,19,23,24,42]. The second is to apply different ML techniques to the same dataset and derive more accurate forecasts by aggregating the ML results [5,40,41,43,44,45]. In this paper, the authors combine these two approaches to provide a more accurate solar power forecast: cloud cover data are used to cluster the dataset, and ensembles consisting of three ML techniques (RF, SVM, and DBN) model the solar power generation separately for each cluster. Moreover, previously proposed ensemble models lack deep learning techniques such as DBNs and LSTM networks [5,40,41,42,43,44,45]; this limitation is addressed here by integrating a DBN into the proposed ensemble model. The results of three case studies are used to design the proposed ensemble model. This research extends the authors' previous work in [52]: the RF, SVM, and DBN models proposed in [52] are used to implement stacking and averaging ensemble models, and these two ensembling approaches are then combined into a generalized ensemble model. The proposed model is applied to different clusters of the dataset to precisely capture the relationship between solar power generation and weather parameters. The accuracy of the proposed solar power forecasting model is compared with that of single machine learning techniques and several models in the literature.

Forecasted weather parameters are used as inputs to the proposed ensemble model. The weather parameters can be forecasted using NWP, which has shown good accuracy in the short term [53]. The forecasting horizon can be varied; however, for long-term forecasting, the NWP error becomes large. In this work, all error terms are presented without the NWP error.

    The paper is organized as follows. The design and implementation of the proposed ensemble model and several related case studies are described in Section 2. Section 3 presents a discussion on the outcomes of the proposed model. Section 4 concludes the paper.

This section explains the implementation procedure of the proposed ensemble model. The results of three case studies are analyzed to design a forecasting model that predicts solar power generation with minimum forecasting error. The dataset explained in Section 2.1 is used for training and validating the forecasting models implemented in the case studies. Section 2.2 explains the feature selection procedure used to identify the most relevant weather parameters affecting solar power generation. The performance measure is described in Section 2.3. Brief descriptions of the three ML algorithms used to build the ensemble model are given in Section 2.4. In Section 2.5, the case studies and results are presented. Feature selection and ensemble design are implemented in the R programming language, which is widely used for statistical computing, with RStudio, a free and open-source development environment.

A dataset of 21 German photovoltaic (PV) facilities is used in this study [22]. The nominal capacity of the facilities ranges between 100 kW and 8500 kW, i.e., from rooftop solar arrays to solar farms. Historical NWP data and the respective solar power outputs for 990 days are recorded at a three-hour resolution for each facility. The PV facilities are distributed throughout Germany as shown in Figure 1. The dataset is normalized to improve the training performance of the DBN: min-max normalization scales the weather parameters between 0 and 1, and solar power generation data are normalized by dividing the measured output power by the nominal output capacity of the corresponding PV facility. This allows forecasting accuracy to be compared independently of PV capacity. All observations are shuffled, and 75% and 25% of them are selected for training and testing, respectively. A sketch of this preprocessing step is given below.
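The preprocessing can be sketched in R, the language used throughout this work; the data frame `pv` and the column names `power` and `capacity` are hypothetical names chosen for illustration.

```r
# Sketch of the preprocessing step, assuming a data frame `pv` holding the
# NWP features plus hypothetical columns `power` (measured output) and
# `capacity` (nominal capacity of the facility).
set.seed(42)

min_max <- function(x) (x - min(x)) / (max(x) - min(x))

features <- setdiff(names(pv), c("power", "capacity"))
pv[features] <- lapply(pv[features], min_max)   # weather parameters to [0, 1]
pv$power     <- pv$power / pv$capacity          # normalize by nominal capacity

idx   <- sample(nrow(pv), size = round(0.75 * nrow(pv)))  # shuffle, 75/25 split
train <- pv[idx, ]
test  <- pv[-idx, ]
```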

    Figure 1.  The locations of PV facilities.

The dataset contains more than 30 weather parameters; to reduce the complexity of the ML algorithms, the most important ones should be identified. Feature selection algorithms such as Boruta can be used to identify strong correlations between solar power generation and weather parameters [54]. In feature selection, Boruta expands the given dataset with permuted copies of all independent features. The data in those permuted copies are shuffled to remove dependencies with the target variable; these shuffled copies are called shadow features. An RF classifier is then applied to the combined dataset, and Boruta evaluates the importance of each variable using a feature importance measure such as mean decrease in accuracy. The algorithm stops either when all features have been confirmed or rejected or when it reaches a defined limit of RF iterations. Table 1 shows the most important weather parameters according to the Boruta algorithm; all features with importance greater than or equal to 5 are selected as inputs to the ensemble model.
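A minimal sketch of this step with the Boruta R package follows; the importance threshold of 5 matches the selection rule above, while the `maxRuns` value is an arbitrary choice.

```r
# Boruta feature selection on the training data (sketch).
library(Boruta)

set.seed(42)
b <- Boruta(power ~ ., data = train, maxRuns = 100)

stats    <- attStats(b)   # per-feature importance statistics (meanImp, decision)
selected <- rownames(stats)[stats$decision == "Confirmed" & stats$meanImp >= 5]
selected                  # weather parameters used as ensemble inputs
```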

    Table 1.  Most important weather parameters for solar power generation obtained by the Boruta algorithm.
    Weather parameter Importance
    Lower wind speed 8.08
    Sun position solar height 8.86
    Sun position theta Z 8.91
    Clear sky (direct) 9.12
    Sun position extraterrestrial 9.21
    Wind component U at 0m 9.33
    Clear sky (diffuse) 9.49
    Potential vorticity at 1000m 9.59
    Clear sky global 9.70
    Wind component U at 100m 9.94
    Dewpoint temperature at 0m 10.14
    Potential vorticity at 950m 10.98
    Temperature at 0m 11.03
    Clear sky diffuse aggregated 11.06
    Clear sky global aggregated 11.77
    Clear sky direct aggregated 12.15
    Sun position solar azimuth 12.45
    Solar radiation diffuse at 0m 12.89
    Surface pressure at 0m 12.94
    Albedo 14.62
    Solar radiation global at 0m 16.86
    Solar radiation direct at 0m 17.28
    Total cloud cover at 0m 18.36
    Relative humidity at 1000m 18.38
    Relative humidity at 950m 18.51
    Relative humidity at 0m 19.03


RMSE is used to evaluate the forecasting performance of the proposed ensemble models. It is the standard deviation of the prediction errors, as shown in Eq 1. In the RMSE calculation, the errors are squared before they are averaged; therefore, the RMSE increases significantly when there are large prediction errors. Thus, RMSE is a more useful performance measure than MAE, especially when large prediction errors are undesirable.

$\mathrm{RMSE}(\hat{x},x)=\sqrt{\dfrac{1}{N}\sum_{n=1}^{N}\left(\hat{x}_n-x_n\right)^2}$ (1)

where $N$ is the total number of predictions, $\hat{x}_n$ is the $n$th predicted value, and $x_n$ is the $n$th actual value.
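Eq 1 translates directly into R:

```r
# RMSE of Eq 1: square the errors, average, then take the square root.
rmse <- function(pred, actual) sqrt(mean((pred - actual)^2))
```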

ML algorithms can be used to model ill-defined and complex systems (e.g., a solar PV facility) without explicit knowledge of the unknown nonlinear relationships within the system. Three ML algorithms, i.e., DBN, SVM, and RF, explained in [52], are used in this work to implement the proposed ensemble models; the design parameters of each algorithm are detailed in [52].

In SVR, the model inputs, i.e., the forecasted weather parameters, are mapped to a high-dimensional feature space using a kernel function such as a polynomial or Gaussian kernel. A linear regression function is then computed that deviates from the actual model outputs by at most ε for all the training data, while being kept as flat as possible. A symmetrical loss function is used to train the SVR model: a flexible tube of minimal radius is symmetrically wrapped around the regression function, ignoring errors whose absolute values are less than ε.

The kernel function of the SVR model is selected as the Laplace kernel, a general-purpose kernel used in both classification and regression applications. The cost of constraint violation and ε are obtained by trial and error and found to be 10 and 0.1, respectively.
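One possible implementation uses the kernlab package, whose `laplacedot` kernel matches the description above; the paper's own code is not given, so this is a sketch.

```r
# epsilon-SVR with a Laplace kernel, C = 10 and epsilon = 0.1 (sketch).
library(kernlab)

svr <- ksvm(power ~ ., data = train,
            type    = "eps-svr",     # epsilon-insensitive regression
            kernel  = "laplacedot",  # Laplace (general-purpose) kernel
            C       = 10,            # cost of constraint violation
            epsilon = 0.1)           # width of the insensitive tube

svr_pred <- predict(svr, test)
```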

A DBN is a type of deep neural network comprising multiple hidden layers, built by stacking several Restricted Boltzmann Machines (RBMs). First, greedy layer-wise training is performed sequentially, starting from the bottom (input) layer; each RBM layer learns a higher-level representation of the preceding layer. This pre-training, carried out with the Gibbs-sampling-based Contrastive Divergence (CD) method [36], allows better initialization of the weights of all layers. The weights are then fine-tuned to improve DBN accuracy; the optimal weight values are calculated in this fine-tuning process, generally via back-propagation. In this study, the DBN has two hidden layers. The number of neurons per layer is obtained by trial and error; the most appropriate numbers for the first and second hidden layers are found to be 7 and 4, respectively. The sigmoid activation function is used in the RBMs, and the number of training epochs is set to 1500.
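The DBN described above can be sketched with the deepnet R package (the choice of package is an assumption; other DBN implementations would serve equally well):

```r
# Two-hidden-layer DBN (7 and 4 neurons), sigmoid units, CD pre-training
# followed by back-propagation fine-tuning, 1500 epochs (sketch).
library(deepnet)

x <- as.matrix(train[, selected])
y <- train$power

dbn <- dbn.dnn.train(x, y,
                     hidden        = c(7, 4),  # first and second hidden layers
                     activationfun = "sigm",   # sigmoid units in the RBMs
                     cd            = 1,        # contrastive divergence steps
                     numepochs     = 1500)

dbn_pred <- nn.predict(dbn, as.matrix(test[, selected]))
```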

RF is an ensemble learning method that can be used for both regression and classification problems. It provides predictions by aggregating decisions from a set of base models such as decision trees or SVMs. A technique called bootstrap aggregation (bagging) is used to train each base model on a different data sample, where sampling is done with replacement. In this work, decision trees are used as the base models in the RF. Cross-validation is used as the resampling method, with the number of resampling iterations set to 10.
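With the caret package, the RF and its 10-fold cross-validation can be sketched as follows:

```r
# Random forest with decision-tree base models and 10-fold CV (sketch).
library(caret)

ctrl <- trainControl(method = "cv", number = 10)  # 10 resampling iterations
rf   <- train(power ~ ., data = train, method = "rf", trControl = ctrl)

rf_pred <- predict(rf, test)
```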

Two basic ensemble models are implemented in this work: the stacking ensemble and the averaging ensemble. Subsections 2.5.1 and 2.5.2 give brief descriptions of each.

In the stacking ensemble, ML algorithms are stacked such that the outputs of one layer become the inputs of the following layer, as schematically illustrated in Figure 2. The input training dataset X consists of m observations and n features. M ML models are trained using X; each provides a prediction ŷ, and these predictions are cast into a second-level training dataset consisting of m observations and M features. The main contribution of the second-layer ML model is to minimize the regression error of the preceding ML models. In other words, a second regression model is trained to learn the errors that the first-layer regression models have made; integrating the estimated errors with the outputs of the first-layer models can provide an improved prediction (a sketch follows the figure).

    Figure 2.  A schematic diagram of the ensemble approach.
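A sketch of the stacking step follows; the `*_pred_train` objects are hypothetical names for the layer-1 predictions on the training set, and the size of the layer-2 DBN is an assumption, as the paper does not state it.

```r
# Layer-1 forecasts become the m x M (here M = 3) layer-2 training set.
layer2_x <- cbind(dbn = dbn_pred_train,
                  svr = svr_pred_train,
                  rf  = rf_pred_train)

# Layer-2 learner: a small DBN trained on the layer-1 outputs.
stack <- dbn.dnn.train(layer2_x, train$power,
                       hidden = c(4), numepochs = 1500)

stack_pred <- nn.predict(stack, cbind(dbn_pred, svr_pred, rf_pred))
```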

In the averaging ensemble model, the layer-2 model illustrated in Figure 2 is replaced with Eq 2.

$\hat{y}_{\mathrm{final}}=\dfrac{a_1\hat{y}_1+a_2\hat{y}_2+\cdots+a_M\hat{y}_M}{a_1+a_2+\cdots+a_M}$ (2)

where $a_1,a_2,\ldots,a_M$ are the weights assigned to the outputs $\hat{y}_1,\hat{y}_2,\ldots,\hat{y}_M$ of the layer-1 models.

The contribution of each ML method in layer 1 is weighted proportionally to the trust in, or performance of, that member.
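Eq 2 reduces to a few lines of R:

```r
# Weighted average of Eq 2; `preds` is an n x M matrix of layer-1 forecasts
# and `w` the weights a1..aM (5, 3, 8 are the values found later in this
# section for DBN, SVM and RF).
weighted_avg <- function(preds, w) as.vector(preds %*% w) / sum(w)

avg_pred <- weighted_avg(cbind(dbn_pred, svr_pred, rf_pred), c(5, 3, 8))
```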

Three case studies are conducted to find the most accurate way of modeling solar power generation using ensemble forecasting techniques; the stacking- and averaging-based ensemble learning methods described above are used. In the first case study, the ensemble models forecast solar power generation without clustering the dataset. In case study 2, weather-type classification is applied to the dataset, and an ensemble model is implemented on the clustered dataset. In case study 3, the results of case studies 1 and 2 are analyzed to identify the contributions and limitations of the weather classification approach, and a new ensemble model is implemented that incorporates the benefits of weather classification while eliminating its limitations.

DBN, SVM, and RF algorithms are used to build the stacking and averaging ensemble models, illustrated in Figure 3. In the stacking model, the filtered weather parameters are given to the three ML algorithms as inputs. The three outputs, i.e., the solar power forecasts of the DBN, SVM, and RF models, are used as inputs to another DBN, which provides the final forecasted solar power generation. The average RMSE of the predictions given by this model is 0.0636. The authors also tested SVM and RF models as the second layer of the stacking model, obtaining average RMSE values of 0.0701 and 0.0687, respectively; hence, the forecasting error is lowest with the DBN as the second layer. The second ensemble is a simple model that provides the weighted average of the DBN, SVM, and RF forecasts as the solar power generation. The most suitable weight for each ML technique is determined by trial and error, selecting the weights that give the lowest RMSE; here, the weights of DBN, SVM, and RF are found to be 5, 3, and 8, respectively.

Figure 3.  (a) Stacking ensemble model. (b) Averaging ensemble model.

For a generalized ensemble model applicable to any PV facility, the ensemble (stacking or averaging) providing the minimum training error (t_err) is assigned to the respective PV facility. Figure 4 illustrates this generalized ensemble model; the selection rule is sketched after the figure.

    Figure 4.  Generalized Ensemble model.
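The selection rule of Figure 4 amounts to the following sketch:

```r
# Generalized ensemble: per facility, keep whichever ensemble achieved the
# smaller training error t_err.
generalized <- function(stack_t_err, avg_t_err, stack_pred, avg_pred) {
  if (stack_t_err <= avg_t_err) stack_pred else avg_pred
}
```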

Solar PV output mainly depends on both direct and diffuse solar irradiance, i.e., the total solar irradiance received by the tilted solar panel. If the tilted irradiance is zero (e.g., at nighttime), the panel output is zero. Therefore, if the total irradiance is zero, the solar PV output is set to zero without invoking the forecasting model. This nighttime filter decreases both the forecasting error and the computational time in the testing stage, increasing the forecasting accuracy and computational efficiency of the ensemble model.
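The filter itself is a one-liner; `irr_direct` and `irr_diffuse` are hypothetical column names for the forecasted direct and diffuse irradiance, and `ens_pred` stands for the ensemble forecast.

```r
# Nighttime filter: output zero whenever total tilted irradiance is zero.
total_irr  <- test$irr_direct + test$irr_diffuse
final_pred <- ifelse(total_irr == 0, 0, ens_pred)
```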

The ensemble models shown in Figures 3 and 4 are used to predict the solar power output of the 21 facilities located in Germany. The RMSE values of the solar power predictions for the different PV facilities are tabulated in Table 2, together with the RMSEs of the DBN, SVM, and RF predictions, allowing comparison between single and ensemble ML techniques.

    Table 2.  RMSE values of single ML methods and ensemble models.
PV facility   DBN   SVM   RF   Stacking   Weighted average   Generalized
(DBN, SVM, RF: single models; Stacking, Weighted average, Generalized: ensemble models)
    PV1 0.0589 0.0569 0.0573 0.0544 0.0561 0.0544
    PV2 0.0573 0.0551 0.0558 0.052 0.0545 0.052
    PV3 0.0419 0.0415 0.0427 0.0417 0.0412 0.0412
    PV4 0.0441 0.0405 0.0413 0.0393 0.0405 0.0393
    PV5 0.0519 0.0501 0.0513 0.0554 0.05 0.05
    PV6 0.0607 0.0587 0.0592 0.0659 0.0576 0.0576
    PV7 0.0823 0.0782 0.0805 0.0798 0.078 0.078
    PV8 0.0826 0.0779 0.0783 0.07 0.0773 0.07
    PV9 0.0604 0.0607 0.0595 0.0555 0.0586 0.0555
    PV10 0.0464 0.0467 0.0465 0.0475 0.0457 0.0457
    PV11 0.0871 0.0871 0.0881 0.0891 0.0855 0.0855
    PV12 0.097 0.095 0.0923 0.0875 0.0925 0.0875
    PV13 0.0927 0.0928 0.0911 0.1046 0.0906 0.0906
    PV14 0.0607 0.0597 0.0601 0.0543 0.0586 0.0543
    PV15 0.0602 0.0586 0.0615 0.0634 0.0586 0.0586
    PV16 0.0672 0.0659 0.0644 0.0606 0.0643 0.0606
    PV17 0.0615 0.0604 0.0606 0.0639 0.0596 0.0596
    PV18 0.0568 0.0561 0.0569 0.0598 0.0552 0.0552
    PV19 0.0643 0.0639 0.0639 0.0697 0.0626 0.0626
    PV20 0.0664 0.0662 0.0641 0.0671 0.0641 0.0641
    PV21 0.0714 0.0662 0.0649 0.0547 0.0649 0.0547
    Average RMSE 0.0653 0.0637 0.0638 0.0636 0.0627 0.0608


The stacking and averaging ensembles explained in Sections 2.5.1 and 2.5.2 are used in this case study as well, and the training dataset, feature selection, nighttime filter, and performance measure remain the same. In the weather classification approach, the dataset is clustered so that each cluster can be trained and tested separately.

The dataset is clustered into three categories based on the cloud cover data, and three generalized ensemble models are trained for the three categories separately. In other words, a separate generalized ensemble model (Figure 4) is assigned to each weather condition. The clustering is done using the rules below, sketched in code after the list.

    ● Clear (Sunny) - cloud cover between 0 and 0.25

    ● Overcast (Cloudy) - cloud cover between 0.75 and 1

    ● Partly cloudy - cloud cover between 0.25 and 0.75
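A sketch of this clustering step follows; the paper does not state which cluster the boundary values 0.25 and 0.75 fall into, so `cut()`'s right-closed intervals are used here, and `total_cloud_cover` is a hypothetical column name.

```r
# Assign each observation to a cloud-cover cluster.
cluster <- cut(pv$total_cloud_cover,
               breaks = c(-Inf, 0.25, 0.75, Inf),
               labels = c("clear", "partly_cloudy", "overcast"))
```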

On average, the dataset contains 1400 clear, 3260 overcast, and 1356 partly cloudy observations per facility, i.e., 23.27%, 54.19%, and 22.54%, respectively. The ensemble model with weather classification is illustrated in Figure 5, in which ensembles 1, 2, and 3 are three generalized ensemble models. The RMSE values of the solar power predictions of the stacking and averaging ensembles with weather classification are shown in Table 3. However, the weather classification approach may require further modification depending on the forecasting errors of each cluster.

    Figure 5.  Ensemble model with weather classification.
    Table 3.  RMSE values of the stacking and averaging based models with weather classification.
PV facility   Clear   Partly cloudy   Overcast
   Stacking   Weighted average   Stacking   Weighted average   Stacking   Weighted average
    PV1 0.0501 0.0474 0.0724 0.0727 0.0532 0.056
    PV2 0.0312 0.0454 0.0679 0.0689 0.0621 0.0578
    PV3 0.0254 0.0283 0.0707 0.0596 0.0507 0.0415
    PV4 0.0361 0.0386 0.04 0.0563 0.0393 0.0426
    PV5 0.0307 0.0421 0.0614 0.0573 0.0538 0.0498
    PV6 0.0875 0.0773 0.0716 0.0794 0.0361 0.0511
    PV7 0.052 0.0585 0.0873 0.0852 0.0776 0.0844
    PV8 0.0777 0.075 0.1191 0.1031 0.0772 0.0703
    PV9 0.0561 0.051 0.0796 0.0694 0.066 0.0582
    PV10 0.0336 0.0389 0.0487 0.0502 0.0505 0.0477
    PV11 0.0964 0.0928 0.1106 0.1081 0.0764 0.0759
    PV12 0.1035 0.1042 0.1186 0.1135 0.0762 0.0848
    PV13 0.0898 0.0854 0.1006 0.1051 0.0849 0.0709
    PV14 0.0533 0.0546 0.0689 0.0625 0.0491 0.0564
    PV15 0.0418 0.0651 0.0741 0.07 0.0621 0.0608
    PV16 0.0831 0.0681 0.0955 0.0726 0.0587 0.0622
    PV17 0.0623 0.06 0.0592 0.0631 0.0581 0.0561
    PV18 0.0455 0.0447 0.0477 0.0561 0.0635 0.0538
    PV19 0.055 0.0566 0.0793 0.075 0.0636 0.0566
    PV20 0.0704 0.0638 0.0859 0.0773 0.0658 0.0631
    PV21 0.0625 0.0717 0.0829 0.0911 0.0487 0.0535
    Average RMSE 0.0592 0.0604 0.0782 0.076 0.0606 0.0597


The results of case studies 1 and 2 can be used to evaluate the contributions and limitations of the weather classification approach. However, the results of case study 1 are not categorized with respect to the different weather conditions; hence, the RMSE values shown in the fifth and sixth columns of Table 2 are categorized according to the respective weather conditions and tabulated in Table 4.

    Table 4.  RMSE values of the stacking and averaging based ensembles for different weather conditions (without weather classification).
PV facility   Clear   Partly cloudy   Overcast
   Stacking   Weighted average   Stacking   Weighted average   Stacking   Weighted average
    PV1 0.0678 0.0473 0.0771 0.0707 0.0583 0.0517
    PV2 0.0336 0.0454 0.0706 0.064 0.0468 0.0535
    PV3 0.0435 0.0332 0.0446 0.0522 0.043 0.0386
    PV4 0.0253 0.0309 0.0594 0.0515 0.0401 0.0395
    PV5 0.0436 0.0404 0.046 0.0554 0.0553 0.0511
    PV6 0.0846 0.0711 0.0813 0.0666 0.0533 0.0488
    PV7 0.0768 0.0709 0.0933 0.0872 0.0832 0.0768
    PV8 0.1229 0.076 0.0967 0.0823 0.067 0.0756
    PV9 0.049 0.0538 0.0994 0.0706 0.0514 0.0555
    PV10 0.0335 0.0397 0.0511 0.0537 0.0384 0.0445
    PV11 0.1053 0.095 0.0736 0.0927 0.0685 0.0774
    PV12 0.0997 0.1053 0.1073 0.1073 0.081 0.0784
    PV13 0.1027 0.0923 0.1185 0.1065 0.0796 0.0827
    PV14 0.0429 0.0512 0.0739 0.074 0.0493 0.055
    PV15 0.0571 0.0545 0.076 0.0643 0.0593 0.0577
    PV16 0.0684 0.0656 0.0638 0.0679 0.051 0.0625
    PV17 0.0517 0.0617 0.0639 0.0633 0.0597 0.0567
    PV18 0.047 0.047 0.084 0.0655 0.0599 0.0537
    PV19 0.0496 0.057 0.0722 0.0735 0.0522 0.0601
    PV20 0.0636 0.0641 0.0728 0.0696 0.0606 0.0622
    PV21 0.0737 0.0794 0.0754 0.0809 0.0539 0.0527
    Average RMSE 0.0639 0.061 0.0762 0.0724 0.0577 0.0588


Tables 3 and 4 show that the RMSE of the predictions under clear weather is lower when cloud classification is applied; in other words, the forecasting error in the clear condition is reduced when the clear ensemble (ensemble 1) is trained only on clear data. However, the errors are higher when the partly cloudy and overcast models are trained separately on the data of each weather category. Therefore, a modification is required that retains the performance improvement in the clear condition while reducing the forecasting errors in the partly cloudy and overcast conditions. Comparing Tables 3 and 4, it can be observed that when the stacking and weighted-average ensemble models are trained with the whole dataset, the forecasting performance is better in the overcast and partly cloudy conditions. A new ensemble is developed considering these facts; Figure 6 illustrates the modified model. Ensembles 1 and 2 in Figure 6 are two generalized ensemble models: ensemble 1 is trained using only clear weather data and forecasts solar power generation in clear conditions, while ensemble 2 is trained on the whole dataset and forecasts solar power generation in partly cloudy and overcast conditions. Table 5 shows the RMSE values of the predictions of the modified ensemble model illustrated in Figure 6.

    Figure 6.  Modified ensemble model with weather classification.
    Table 5.  RMSE values of modified ensemble model with weather classification.
    PV facility RMSE
    PV1 0.0557
    PV2 0.0486
    PV3 0.0379
    PV4 0.0398
    PV5 0.0464
    PV6 0.0572
    PV7 0.0743
    PV8 0.0725
    PV9 0.0556
    PV10 0.0408
    PV11 0.0760
    PV12 0.0910
    PV13 0.0878
    PV14 0.0541
    PV15 0.0562
    PV16 0.0571
    PV17 0.0567
    PV18 0.0545
    PV19 0.0568
    PV20 0.0631
    PV21 0.0597
    Average RMSE 0.0591


In this section, the results obtained from the case studies are discussed in detail. Section 3.1 analyzes the results of case study 1, conducted without clustering the dataset. The results of the weather classification approach are then discussed in Section 3.2. The benefits of the weather classification approach and the percentage error reductions of the proposed ensemble with respect to single ML techniques are presented in Section 3.3.

The results of case study 1 show that the SVM model has better forecasting ability than the RF and DBN models. With the lowest average RMSE of 0.0637, the SVM model is the most suitable forecasting model in the single-ML category, although the average RMSE of the RF is very close to that of the SVM. Even though the DBN predictions have the largest average RMSE, the DBN provides the lowest RMSE values for the PV10 and PV11 plants. These results show that it is difficult to specify a single forecasting technique that yields the highest accuracy for all PV facilities. The stacking ensemble provides better results for PV plants 1, 2, 4, 8, 9, 12, 14, 16, and 21, and the weighted average method provides better results for the remaining plants; both methods give approximately equal average RMSE values over all 21 facilities. The generalized model RMSE is the minimum of the stacking and weighted-average RMSEs. The generalized ensemble model without weather classification reduces the average RMSE by 7.4%, 4.77%, and 4.93% with respect to the DBN, SVM, and RF models, respectively.

The results of case study 2 show that the average RMSEs of the stacking and averaging ensembles are 0.0647 and 0.0639, respectively. These error values are clearly larger than those of case study 1; therefore, the weather-type clustering approach does not improve the forecasting accuracy but instead increases the forecasting error. To identify the most effective way of applying weather-type classification, the results of case study 1 must be categorized into the different weather conditions; hence, case study 3 is conducted to find the most effective method of clustering the dataset.

As can be seen in Table 4, the forecasting performance is higher in overcast weather than in partly cloudy and clear conditions. This can happen for two reasons: the number of overcast observations in the dataset is more than twice that of partly cloudy or clear observations, and the error naturally decreases as the dataset grows; moreover, the cloud cover does not vary rapidly in overcast weather. Although there are fewer clear weather data, the forecasting performance is high for clear conditions, as there are no rapid changes in the sky when it is clear. Forecast error depends on the variability of solar irradiance [55], which is largest in partly cloudy conditions; this is the main reason for the differences in forecast error. The forecasting error is highest for partly cloudy weather both because the cloud cover can change rapidly, unlike in clear or overcast conditions, which greatly affects the forecasting accuracy, and because fewer observations are available for this category than for the overcast condition.

In case study 3, a new modified ensemble model with partial weather classification is implemented according to the results presented in Tables 3 and 4. The average RMSE of this ensemble is 0.0591, which is less than that of the generalized ensemble without weather classification (0.0608), i.e., approximately a 3% reduction in the forecasting error.

Compared to the single DBN, SVM, and RF models without cloud classification, the new ensemble model with cloud classification reduces the RMSE by 10.49%, 7.78%, and 7.95%, respectively. In addition, the results of the proposed model can be compared with the literature that uses the same dataset for training the forecasting models [22,38]: the RMSE reductions compared to the work presented in [22] and [38] are 20.64% and 27.58%, respectively.

This paper proposes a novel ensemble model for solar power forecasting. The results show that the proposed ensemble model performs better than single machine learning techniques. The training dataset used in this study consists mostly of cloudy weather data, i.e., more than half of the observations fall in the overcast range; therefore, the performance gain from weather classification is somewhat small (approximately a 3% error reduction). The forecasting accuracy may increase further for tropical countries, where clear data form a larger proportion of the dataset: if clear data contribute a higher proportion, the RMSE reduction from clear-sky predictions increases, and hence the forecasting ability of the ensemble improves. The ensemble could be improved by adding more machine learning algorithms such as convolutional neural networks, LSTMs, etc. It should be noted that the reported error terms do not incorporate NWP errors, so in real-world applications the errors may be larger; the longer the forecasting horizon, the larger the forecasting error. Although the model is trained on data at a three-hour resolution, solar power forecasts can be generated at any resolution, since this depends only on the resolution of the forecasted weather parameters. It can be concluded that the combination of the weather classification approach and ensemble learning provides more accurate solar power generation forecasts than conventional single ML techniques.

    All authors declare no conflicts of interest in this paper.

    [110] [ X. Yang, J. Cao and J. Lu, Stochastic synchronization of complex networks with nonidentical nodes via hybrid adaptive and impulsive control, IEEE Transactions On Circuits And Systems-Ⅰ: Regular Papers, 59 (2012), 371-384.
    [111] [ X. Yang, X. Li and J. Cao, Robust finite-time stability of singular nonlinear systems with interval time-varying delay, Journal of Franklin Institute, 355 (2018), 1241-1258.
    [112] [ X. Yang and J. Lu, Finite-time synchronization of coupled networks with Markovian topology and impulsive effects, IEEE Transaction on Automatic Control, 61 (2016), 2256-2261.
    [113] [ X. Zeng, C. Li, T. Huang and X. He, Stability analysis of complex-valued impulsive systems with time delay, Applied Mathematics and Computation, 256 (2015), 75-82.
    [114] [ W. Zhang, Y. Tang, J. Fang and X. Wu, Stability of delayed neural networks with timevarying impulses, Neural Networks, 36 (2012), 59-63.
    [115] [ W. Zhang, Y. Tang, Q. Miao and W. Du, Exponential synchronization of coupled switched neural networks with mode-dependent impulsive effects, IEEE Transactions on Neural Networks and Learning Systems, 24 (2013), 1316-1326.
    [116] [ X. Zhang and X. Li, Input-to-state stability of non-linear systems with distributed delayed impulses, IET Control Theory & Applications, 11 (2017), 81-89.
    [117] [ X. Zhang, X. Li, J. Cao and F. Miaadi, Design of delay-dependent controllers for finite-time stabilization of delayed neural networks with uncertainty, Journal of Franklin Institute, 355 (2018), 5394-5413.
    [118] [ X. Zhang, X. Lv and X. Li, Sampled-data based lag synchronization of chaotic delayed neural networks with impulsive control, Nonlinear Dynamics, 90 (2017), 2199-2207.
    [119] [ Y. Zhang and J. Sun, Stability of impulsive functional differential equations, Nonlinear Analysis: Theory, Methods and Applications, 68 (2008), 3665-3678.
    [120] [ Y. Zhang and J. Sun, Stability of impulsive infinite delay differential equations, Applied Mathematics Letters, 19 (2006), 1100-1106.
    [121] [ Y. Zheng, H. Li, X. Ding and Y. Liu, Stabilization and set stabilization of delayed Boolean control networks based on trajectory stabilization, Journal of the Franklin Institute, 354 (2017), 7812-7827.
    [122] [ Q. Zhu and X. Li, Exponential and almost sure exponential stability of stochastic fuzzy delayed Cohen-Grossberg neural networks, Fuzzy Sets and Systems, 203 (2012), 74-94.
    [123] [ Q. Zhu, F. Xi and X. Li, Robust exponential stability of stochastically nonlinear jump systems with mixed time delays, Journal of Optimization Theory and Applications, 154 (2012), 154-174.
    [124] [ M. Zou, A. Liu and Z. Zhang, Oscillation theorems for impulsive parabolic differential system of neutral type, Discrete & Continuous Dynamical Systems-B, 22 (2017), 2351-2363.