
The accuracy of its unknown parameters determines the accuracy of a photovoltaic (PV) model, and PV models occupy an important position in PV power generation systems. Owing to the complexity of the equivalent-circuit equations of PV models, estimating their parameters remains an arduous task. To estimate the unknown parameters of PV models accurately and reliably, this paper proposes an enhanced Rao-1 (ERao-1) algorithm. The enhancements are threefold: i) a repaired evolution operator is presented; ii) a new evolution operator is developed to prevent the Rao-1 algorithm from falling into local optima; iii) a linear population size reduction strategy is employed so that the population size adapts to the evolutionary process. To verify the validity of the ERao-1 algorithm, we study parameter estimation for three different PV models. Experimental results show that the proposed ERao-1 algorithm outperforms existing parameter estimation algorithms in accuracy and reliability, especially for the double diode model (RMSE 9.8248E-04) and the three diode model (RMSE 9.8257E-04) of the R.T.C. France silicon cell, and the three diode model of the Photowatt-PWP201 cell (RMSE 2.4251E-03). In addition, the agreement between the fitted curves of the simulated data and the measured data confirms the accuracy of the estimated parameters.
Citation: Junhua Ku, Shuijia Li, Wenyin Gong. Photovoltaic models parameter estimation via an enhanced Rao-1 algorithm[J]. Mathematical Biosciences and Engineering, 2022, 19(2): 1128-1153. doi: 10.3934/mbe.2022052
Fire is a common natural disaster that seriously endangers human life and property [1,2]. Traditional fire detection uses smoke and temperature sensors to monitor changes in fire-related parameters in the environment [3,4,5]. However, because of the limited detection range of such sensors, these monitoring systems cannot cover wide areas, and traditional detection methods cannot provide valuable information about a detected fire, such as its scale and location [6,7,8,9,10]. In recent years, with the spread of intelligent monitoring equipment and the development of image processing technology, deep learning and intelligent optimization algorithms, fire monitoring based on video analysis has attracted increasing attention from researchers [11,12,13,14,15,16,17,18,19]. Video-based fire detection built on existing security monitoring is a low-cost, high-efficiency scheme that can greatly reduce the casualties and property losses caused by fire.
Image-based fire detection technology is based on the characteristics of flame. Chen et al. [20] studied flame irregularity detection in the RGB and HSI color spaces. Fernandez et al. [21] proposed a method based on picture histograms to recognize fire images. Xu et al. [22] applied deep convolutional neural networks to fire image recognition and achieved promising results. Celik and Demirel [23] designed classification rules based on separating the chroma components from the brightness in YCbCr space, but the rules are accurate only for larger flame sizes. Foggia et al. [24] combined flame color and dynamic characteristics into a multi-dimensional flame recognition framework for fire detection; this line of work is the mainstream among fire detection methods, but its recognition accuracy is still insufficient. Mueller et al. [2] studied the motion of rigid objects and the shape of flames, and proposed extracting flame feature vectors from optical flow information and flame shape to distinguish flames from other objects. With the continuous development of deep learning, Frizzi et al. [25] designed a convolutional neural network that classifies fire and smoke. Fu et al. [26] used a 12-layer convolutional neural network to detect forest fires and achieved good classification results, but its high computational complexity makes it unsuitable for fire detection in real-time video.
In daily life, most of the environmental information collected by security monitoring equipment shows non-fire scenes, so most frames in the transmitted video streams are non-fire frames. If non-fire frames and fire frames are not distinguished before detection, the time complexity of the algorithm increases greatly. To solve this problem, the DeepFireNet algorithm proposed in this paper filters out non-fire frames during image preprocessing at low time complexity, and passes only images that may contain fire to a convolutional network that is computationally more expensive but highly accurate. Concretely, OpenCV pulls the video stream and grabs the current frame. The frame is Gaussian smoothed, and a dual color criterion based on RGB and HSI, built from the static color characteristics of fire, extracts the suspected fire regions of the frame. These regions are then checked against the dynamic characteristic of fire, the rapid growth of the burning area, to decide whether they are fire regions. If a suspected fire region is detected, it is fed into the trained convolutional neural network for fire identification; otherwise the next frame is examined and the convolutional network is not called at all, which greatly reduces the computational cost while maintaining high recognition accuracy. This method performs well for fire detection in real-time video streams.
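The per-frame flow just described can be sketched as a simple loop. This is a minimal illustration, not the authors' code: `gaussian_smooth`, `extract_color_regions`, `is_growing` and `cnn_predict` are hypothetical stand-ins for the stages described in this paper.

```python
# Sketch of the DeepFireNet per-frame detection loop described above.
# The four callables are hypothetical placeholders for the paper's stages.

def detect_fire_in_stream(frames, cnn_predict, extract_color_regions,
                          is_growing, gaussian_smooth):
    """Yield (frame_index, is_fire) for each frame of a video stream."""
    prev_area = 0
    for i, frame in enumerate(frames):
        smoothed = gaussian_smooth(frame)          # noise reduction
        regions = extract_color_regions(smoothed)  # RGB/HSI color criterion
        # Cheap filter: keep only regions whose area is growing; if nothing
        # survives, the expensive CNN is never called for this frame.
        suspected = [r for r in regions if is_growing(r, prev_area)]
        prev_area = sum(r["area"] for r in regions)
        is_fire = bool(suspected) and any(cnn_predict(r) for r in suspected)
        yield i, is_fire
```

The key design point is that the CNN runs only on the (rare) frames that pass the cheap color-and-motion filter.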
In the process of image formation, transmission, reception and processing, the actual performance of the equipment inevitably introduces external and internal interference, producing various kinds of noise [27,28]. A fire scene is further affected by environmental noise such as weather and illumination, so the fire image should be smoothed and filtered before fire detection [29,30,31]. Commonly used methods include mean filtering [32,33], median filtering [34,35] and Gaussian filtering [36].
Mean filtering is computationally simple and effective at suppressing Gaussian noise, but it destroys edge details while removing noise. Median filtering performs well at eliminating random noise while preserving the correlation of image texture, but its time complexity is too high for image processing in real-time video. Gaussian filtering smooths an image by neighborhood averaging in which pixels at different positions receive different weights; it is a classic smoothing method that treats the image more gently. Since a large number of images must be smoothed, this paper uses Gaussian filtering for image noise reduction. The effects of the three filters are shown in Figure 1.
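To illustrate what Gaussian smoothing computes, here is a minimal separable implementation in NumPy. In practice one would call OpenCV's `cv2.GaussianBlur`; this sketch only shows the position-dependent weighting scheme.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 1-D Gaussian kernel (weights sum to 1)."""
    ax = np.arange(size) - (size - 1) / 2.0
    k = np.exp(-0.5 * (ax / sigma) ** 2)
    return k / k.sum()

def gaussian_smooth(img, size=5, sigma=1.0):
    """Separable Gaussian smoothing of a 2-D grayscale image."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    # Filter rows, then columns; separability keeps the cost linear in size.
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
```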
In fire image recognition, the suspected fire regions must be extracted to reduce the influence of complex backgrounds and improve recognition accuracy. This paper extracts the fire region accurately by judging both the static and the dynamic characteristics of fire [37,38].
(1) Fire static feature extraction
Color is the main static feature of fire. In this paper, the fire feature extraction based on color is realized by establishing RGB and HSI criterion models [39,40,41].
The RGB model corresponds to the three primary colors red, green and blue. According to the trichromatic principle, the amount of light is expressed in units of the primary lights, and in RGB color space any color F can be expressed by additively mixing different amounts of the three primary components R, G and B.
The HSI color model describes color with three parameters: H, S and I. H (hue) denotes the color family and reflects how human senses perceive different colors. S (saturation) denotes the purity of a color; a color becomes more vivid as saturation increases. I (intensity) corresponds to imaging brightness and image gray scale. The HSI model rests on two important facts: the I component carries no color information of the image, and the H and S components are closely related to how people perceive color. These properties make the HSI model well suited to color-based detection and analysis. The RGB and HSI criteria are as follows.
R > R_T
G > G_T
R > G > B
S > 0.2
S ≥ (255 − R) · S_T / R_T        (1)
In which R, G and B are the color components in the RGB color model, S is the color saturation in the HSI model, and R_T, G_T and S_T are the corresponding threshold values.
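Criterion (1) can be expressed as a per-pixel test. A minimal sketch: the threshold defaults `r_t`, `g_t` and `s_t` below are illustrative assumptions, not values from this paper, and `s` is assumed to lie in [0, 1].

```python
def is_fire_pixel(r, g, b, s, r_t=135, g_t=100, s_t=0.25):
    """Per-pixel fire-color test combining the RGB and HSI criteria of Eq. (1).

    r, g, b are RGB components in [0, 255]; s is HSI saturation in [0, 1].
    The threshold defaults r_t, g_t and s_t are illustrative only.
    """
    return (r > r_t and g > g_t and r > g > b
            and s > 0.2 and s >= (255 - r) * s_t / r_t)
```

Applying this test to every pixel yields a binary mask of fire-colored regions, which the dynamic-feature check below then filters further.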
Detecting fire by its color characteristics alone is inaccurate: indoor interference sources such as candles, lamps and lighters have colors similar to fire and would be mistaken for it, disturbing the identification process, as shown in Figure 3. To solve this problem, this paper extracts the fire region using the static and dynamic features of fire together: the suspected regions close to fire color are first determined from the color features, and the dynamic fire features of those regions are then checked, completing the extraction of the fire region.
(2) Fire dynamic feature extraction
The change of the burning area is one of the main manifestations of the dynamic characteristics of fire. In the initial stage of burning, the fire area grows rapidly, whereas interference sources such as lamps do not show such rapid area changes. Therefore, this paper extracts the dynamic features of fire with moving-target detection technology [42,43].
Commonly used moving-target detection methods are the optical flow method, the inter-frame difference method and the background difference method. The optical flow method [44] works with both moving and static cameras, but its heavy computation makes it unsuitable for real-time video processing. The inter-frame difference method [45] is simple to implement, but the extracted objects tend to contain holes. The background difference method is slightly more complex than inter-frame differencing, but repeated experiments show that it meets the requirements of real-time video stream processing and yields a more complete target image, which helps determine the fire region. Therefore, the background difference method is used to extract the dynamic characteristics of fire, as shown in Figure 4.
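The background difference step can be sketched as follows. This is a minimal illustration with a running-average background model; the threshold and update rate are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def background_difference(frame, background, threshold=25, alpha=0.05):
    """Foreground mask via background subtraction, plus an updated background.

    frame, background: 2-D grayscale arrays. Pixels whose absolute difference
    from the background exceeds `threshold` are marked as moving foreground;
    the background model adapts slowly via a running average with rate `alpha`.
    """
    diff = np.abs(frame.astype(float) - background.astype(float))
    mask = diff > threshold
    new_bg = (1 - alpha) * background + alpha * frame  # slow adaptation
    return mask, new_bg
```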
Relying only on the dynamic characteristics of fire, recognition would be disturbed by other moving objects in the monitored scene. Therefore, this paper judges the static and dynamic characteristics of fire jointly to separate the fire region from the background image, as shown in Figure 5.
Convolutional neural networks (CNNs) perform well in image recognition [46,47,48]. A convolutional network can extract deep image features and achieve high-precision image recognition.
This paper uses Keras, a widely used deep learning framework. The CNN is initialized with the weights of the Inceptionv1-OnFire network proposed by Dunnings et al. [49]; these weights give the network a strong starting point, speeding up convergence and avoiding over-fitting on relatively small data sets. Compared with the nine linearly stacked inception modules of the InceptionV1 network [50], this network uses only three consecutive inception modules, which greatly simplifies the architecture. Each inception module uses the same convolution structure as InceptionV1, consisting of 1 × 1, 3 × 3 and 5 × 5 convolution kernels and a 3 × 3 pooling layer. The layers before and after the three inception modules adopt the same architecture as InceptionV1.
The network in this paper improves the inception module by decomposing each 5 × 5 convolution kernel into two 3 × 3 kernels. The receptive field is unchanged, and two 3 × 3 convolutions in series have stronger representation ability than one 5 × 5 convolution. The parameter ratio of two 3 × 3 convolutions to one 5 × 5 convolution is (9 + 9)/25, which reduces the network's parameters and computation by 28% [51]. The network input is a 3-channel fire image with a width and height of 224 × 224. The inception layer structure is shown in Figure 6 and the network structure in Figure 7.
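The 28% figure follows from a simple parameter count. A quick check (the channel count of 64 is an arbitrary illustrative choice; the ratio is independent of it):

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k x k convolution layer (bias terms ignored)."""
    return k * k * c_in * c_out

# One 5x5 convolution vs. two stacked 3x3 convolutions, same channel width c.
c = 64                              # illustrative channel count
p5 = conv_params(5, c, c)           # 25 * c * c weights
p33 = 2 * conv_params(3, c, c)      # (9 + 9) * c * c weights
ratio = p33 / p5                    # (9 + 9) / 25 = 0.72, i.e., 28% fewer
```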
The VGG16 [52] network is used as the comparison network for the algorithm in this paper. Its input is also a 3-channel fire image with a width and height of 224 × 224. The convolution and max-pooling layers of VGG16 are retained for feature extraction, and two fully connected layers are added to receive the extracted features and perform classification and prediction. A Dropout layer between the last two fully connected layers limits the number of participating neurons and reduces over-fitting. For the binary classification problem of fire identification, the optimizer is RMSProp, the activation function is sigmoid, and the loss function is sigmoid_cross_entropy_with_logits, which solves the logistic regression problem.
The loss function is

loss = max(x, 0) − x·z + log(1 + exp(−|x|))        (2)

in which x is the logit output of the network and z is the corresponding binary label.
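Equation (2) is the numerically stable form of sigmoid cross-entropy used by common deep learning frameworks, with x the logit and z the label; a minimal sketch:

```python
import math

def sigmoid_cross_entropy_with_logits(x, z):
    """Numerically stable sigmoid cross-entropy, Eq. (2).

    Equivalent to -z*log(sigmoid(x)) - (1-z)*log(1-sigmoid(x)), but safe for
    large |x|, where exp(x) in the naive form would overflow.
    """
    return max(x, 0.0) - x * z + math.log1p(math.exp(-abs(x)))
```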
The algorithm runs on a personal computer with an Intel(R) Core(TM) i5-7300HQ CPU and a GTX 1060 GPU.
The training data set comes from public network fire picture data sets and public network video databases, such as the furg-fire-dataset (https://github.com/steffensbola/furg-fire-dataset) used in [49]. About 10 500 fire pictures and 10 500 non-fire pictures are used, mainly covering fire and non-fire scenes in indoor and outdoor spaces such as offices, laboratories, kitchens, forests, streets, buildings and vehicles, so as to improve the generalization ability of the convolutional network. Of the 21 000 pictures, 15 300 are used for training, 1 700 for validation, and 10 videos (about 4 000 pictures) for testing. After training, the model is validated on a user-defined dataset and on the furg-fire-dataset; the validation results are shown in Tables 1 and 2. Before the convolutional network reads a picture, the fire region is extracted and mirrored to expand the data set, and the pictures are cut, classified and normalized to a width and height of 224 × 224 to build the fire data set.
Table 1. Test results on each video.

| Video name | Total video frames | Flame frames | Non-flame frames | TPR/% | FPR/% |
| --- | --- | --- | --- | --- | --- |
| video1 | 358 | 304 | 54 | 97.3 | 4.3 |
| video2 | 423 | 385 | 38 | 97.7 | 4.2 |
| video3 | 285 | 274 | 11 | 96.8 | 4.6 |
| video4 | 347 | 347 | 0 | 98.6 | 3.7 |
| video5 | 355 | 312 | 43 | 97.6 | 4.2 |
| video6 | 508 | 223 | 285 | 95.4 | 4.9 |
| video7 | 278 | 52 | 226 | 95.7 | 4.7 |
| video8 | 456 | 37 | 419 | 96.2 | 5.7 |
| video9 | 362 | 8 | 354 | 95.4 | 5.6 |
| video10 | 532 | 258 | 274 | 96.5 | 4.3 |
| video11 | 630 | 630 | 0 | 97.8 | 4.3 |
| video12 | 900 | 900 | 0 | 98.4 | 3.8 |
| video13 | 900 | 690 | 210 | 97.6 | 4.1 |
| video14 | 900 | 900 | 0 | 97.8 | 3.7 |
| video15 | 900 | 855 | 45 | 97.4 | 4.6 |
| video16 | 3600 | 0 | 3600 | 95.6 | 100.0 |
| video17 | 600 | 600 | 0 | 97.4 | 3.6 |
| video18 | 900 | 900 | 0 | 97.5 | 3.4 |
| video19 | 900 | 900 | 0 | 97.4 | 3.3 |
| video20 | 900 | 900 | 0 | 97.6 | 4.3 |
Table 2. Performance comparison of the detection algorithms on the test set.

| Algorithm | ACC/% | TPR/% | FPR/% | fps |
| --- | --- | --- | --- | --- |
| VGG16 | 90.28 | 96.52 | 11.63 | 2.0 |
| AlexNet | 91.8 | 91.5 | 8.0 | 4.6 |
| InceptionV1 | 93.58 | 95.23 | 9.4 | 2.6 |
| InceptionV1-OnFire | 93.85 | 96.35 | 9.85 | 9.4 |
| DeepFireNet (ours) | 96.86 | 97.42 | 4.36 | 40.0 |
Fire identification is a binary classification problem, so this paper uses the ROC curve [53] and the total time needed to process the test video set as the performance indices of the model.
True positive rate:

TPR = TP / (TP + FN)        (3)

False positive rate:

FPR = FP / (FP + TN)        (4)

Accuracy:

ACC = (TP + TN) / (TP + FP + TN + FN)        (5)

in which TP, FP, TN and FN denote the numbers of true positives, false positives, true negatives and false negatives, respectively.
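The three metrics can be computed directly from confusion-matrix counts; a minimal sketch:

```python
def classification_metrics(tp, fp, tn, fn):
    """TPR, FPR and ACC from confusion-matrix counts (Eqs. 3-5)."""
    tpr = tp / (tp + fn)                   # true positive rate, Eq. (3)
    fpr = fp / (fp + tn)                   # false positive rate, Eq. (4)
    acc = (tp + tn) / (tp + fp + tn + fn)  # accuracy, Eq. (5)
    return tpr, fpr, acc
```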
During training, 10-fold cross validation is adopted: the training samples are divided into 10 subsets, 9 of which are randomly selected for model training and 1 for validation, and the experiment is repeated 10 times. The 9 training subsets are randomly divided into 3 400 batches of 20 images each, giving 17 000 training iterations. The loss value during training is recorded; as the number of iterations increases, the loss decreases steadily and the accuracy stabilizes at 0.967, which meets the training requirements and achieves the learning goal. The trained model is saved in h5 format, and OpenCV, a widely used open-source image processing library, loads the video test set to simulate the real-time video stream collected by a camera. The test results are shown in Table 1 below: video1–10 are samples from the user-defined dataset and video11–20 are sample videos from the furg-fire-dataset. The algorithm shows high accuracy on the test set. The time spent by the five methods on each test video set is compared in Figure 8.
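The 10-fold protocol can be sketched as follows. This is a minimal illustration: the paper selects folds randomly, whereas this sketch uses a deterministic striped split for clarity.

```python
def k_fold_splits(items, k=10):
    """Split items into k folds; yield (train, validation) pairs."""
    folds = [items[i::k] for i in range(k)]   # striped, deterministic split
    for i in range(k):
        val = folds[i]
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        yield train, val
```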
The test data set in Table 2 consists of the user-defined dataset and the furg-fire-dataset; the resulting frames per second (fps) are also shown in Table 2. When an input image is not tested by the convolutional network and only the dynamic and static characteristics of fire are judged, the algorithm runs at 55 fps; with convolutional network detection alone, it runs at 25 fps. Since most images collected by monitoring equipment in daily environments are non-fire images, the algorithm can rely on the dynamic and static characteristics alone for the vast majority of the time, processing 55 frames per second, and calls the convolutional network only when a suspected fire image is detected. The fps then drops, but remains much higher than that of the compared algorithms. From the results in Table 2, we observe significant run-time performance gains for the reduced-complexity DeepFireNet and InceptionV1-OnFire architectures over their parent architectures. The experiments show that the VGG16 network is unsuitable for real-time video detection, and that the algorithm implemented in this paper surpasses the Inceptionv1-OnFire network in both fire detection accuracy and time complexity. Although false detections still occur, fire identification accuracy exceeds 96%. In particular, when a video contains a large number of non-fire frames, the time complexity of the proposed algorithm is significantly lower than that of the VGG16 and InceptionV1-OnFire networks.
For real-time video, the static and dynamic characteristics of fire are used for an initial judgment, filtering out the large number of frames without interference sources. When a suspected fire is detected, the system calls the convolutional network to examine the suspected fire region of the frame a second time. This approach improves the accuracy of fire detection while reducing computational complexity, and performs well in real-time video processing.
With the development of intelligent monitoring, realizing fire warning through monitoring equipment is of great significance for reducing the casualties and property losses caused by fire. Compared with traditional algorithms, this paper proposes a fire recognition algorithm that combines high recognition accuracy with low time complexity. The algorithm is fairly general and achieves a high recognition rate for fires in different scenes.
To meet the real-time requirement of video stream processing and cope with interference from complex environments such as light sources and fast-moving objects, this paper first applies a fire static and dynamic feature detection algorithm of extremely low time complexity to extract suspected fire regions and filter out the large number of non-fire images, and then feeds the detected suspected regions into the convolutional network to complete fire identification.
The algorithm greatly reduces time complexity by filtering out the large number of non-fire images and by improving the inception-layer convolutional network. By extracting the fire regions from the images, it also greatly reduces the interference of complex environments in the identification process, so that the convolutional network need only focus on fire features, which effectively improves recognition accuracy.
Because a large amount of smoke often appears when a fire occurs [54,55,56], future work will study the accurate detection of fire smoke, so as to better ensure timely and accurate fire warning in more complex environments.
This work is supported by the CERNET Innovation Project (No. NGII20190605), High Education Science and Technology Planning Program of Shandong Provincial Education Department (Grants No. J18KA340, J18KA385), Yantai Key Research and Development Program (Grants No. 2020YT06000970, 2019XDHZ081).
We have no conflict of interest in this paper.
![]() |
[31] |
H. Hasanien, Shuffled frog leaping algorithm for photovoltaic model identification, IEEE Trans. Sustain. Energy, 6 (2015), 509–515. doi: 10.1109/TSTE.2015.2389858. doi: 10.1109/TSTE.2015.2389858
![]() |
[32] |
J. Ram, T. Babu, T. Dragicevic, N. Rajasekar, A new hybrid bee pollinator flower pollination algorithm for solar pv parameter estimation, Energy Convers. Manage., 135 (2017), 463–476. doi: 10.1016/j.enconman.2016.12.082. doi: 10.1016/j.enconman.2016.12.082
![]() |
[33] |
K. Yu, X. Chen, X. Wang, Z. Wang, Parameters identification of photovoltaic models using self-adaptive teaching-learning-based optimization, Energy Convers. Manage., 145 (2017), 233–246. doi: 10.1016/j.enconman.2017.04.054. doi: 10.1016/j.enconman.2017.04.054
![]() |
[34] |
F. Zeng, H. Shu, J. Wang, Y. Chen, B. Yang, Parameter identification of pv cell via adaptive compass search algorithm, Energy Rep., 7 (2021), 275–282. doi: 10.1016/j.egyr.2021.01.069. doi: 10.1016/j.egyr.2021.01.069
![]() |
[35] |
G. Xiong, L. Li, A. Mohamed, X. Yuan, J. Zhang, A new method for parameter extraction of solar photovoltaic models using gaining sharing knowledge based algorithm, Energy Rep., 7 (2021), 3286–3301. doi: 10.1016/j.egyr.2021.05.030. doi: 10.1016/j.egyr.2021.05.030
![]() |
[36] |
W. Li, W. Gong, Differential evolution with quasi-reflection-based mutation, Math. Biosci. Eng., 18 (2021), 2425–2441. doi: 10.3934/MBE.2021123. doi: 10.3934/MBE.2021123
![]() |
[37] |
Q. Pang, X. Mi, J. Sun, H. Qin, Solving nonlinear equation systems via clustering-based adaptive speciation differential evolution, Math. Biosci. Eng., 18 (2021), 6034–6065. doi: 10.3934/MBE.2021302. doi: 10.3934/MBE.2021302
![]() |
[38] |
S. García, D. Molina, M. Lozano, F. Herrera, A study on the use of non-parametric tests for analyzing the evolutionary algorithms behaviour: a case study on the cec 2005 special session on real parameter optimization, J. Heurist., 15 (2009), 617–644. doi: 10.1007/s10732-008-9080-4. doi: 10.1007/s10732-008-9080-4
![]() |
[39] |
L. Deotti, J. Pereira, I. J ˊe nior, Parameter extraction of photovoltaic models using an enhanced l ˊe vy flight bat algorithm, Energy Convers. Manage., 221 (2020), 113114. doi: 10.1016/j.enconman.2020.113114. doi: 10.1016/j.enconman.2020.113114
![]() |
[40] |
J. Liang, S. Ge, B. Qu, K. Yu, F. Liu, H. Yang, et al., Classified perturbation mutation based particle swarm optimization algorithm for parameters extraction of photovoltaic models, Energy Convers. Manage., 203 (2020), 112138. doi: 10.1016/j.enconman.2019.112138. doi: 10.1016/j.enconman.2019.112138
![]() |
[41] |
X. Lin, Y. Wu, Parameters identification of photovoltaic models using niche-based particle swarm optimization in parallel computing architecture, Energy, 196 (2020), 117054. doi: 10.1016/j.energy.2020.117054. doi: 10.1016/j.energy.2020.117054
![]() |
[42] |
M. Basset, R. Mohamed, S. Mirjalili, R. Chakrabortty, M. Ryan, Solar photovoltaic parameter estimation using an improved equilibrium optimizer, Sol. Energy, 209 (2020), 694–708. doi: 10.1016/j.solener.2020.09.032. doi: 10.1016/j.solener.2020.09.032
![]() |
[43] |
X. Yang, W. Gong, Opposition-based jaya with population reduction for parameter estimation of photovoltaic solar cells and modules, Appl. Soft Comput., 104 (2021), 107218. doi: 10.1016/j.asoc.2021.107218. doi: 10.1016/j.asoc.2021.107218
![]() |
[44] |
W. Long, T. Wu, M. Xu, M. Tang, S. Cai, Parameters identification of photovoltaic models by using an enhanced adaptive butterfly optimization algorithm, Energy, 229 (2021), 120750. doi: 10.1016/j.energy.2021.120750. doi: 10.1016/j.energy.2021.120750
![]() |
[45] |
Y. Liu, A. Heidari, X. Ye, C. Chi, X. Zhao, C. Ma, et al., Evolutionary shuffled frog leaping with memory pool for parameter optimization, Energy Rep., 7 (2021), 584–606. doi: 10.1016/j.egyr.2021.01.001. doi: 10.1016/j.egyr.2021.01.001
![]() |
[46] |
M. Basset, R. Mohamed, R. Chakrabortty, K. Sallam, M. Ryan, An efficient teaching-learning-based optimization algorithm for parameters identification of photovoltaic models: Analysis and validations, Energy Convers. Manage., 227 (2021), 113614. doi: 10.1016/j.enconman.2020.113614. doi: 10.1016/j.enconman.2020.113614
![]() |
[47] |
O. Hachana, B. Aoufi, G. Tina, M. Sid, Photovoltaic mono and bifacial module/string electrical model parameters identification and validation based on a new differential evolution bee colony optimizer, Energy Convers. Manage., 248 (2021), 114667. doi: 10.1016/j.enconman.2021.114667. doi: 10.1016/j.enconman.2021.114667
![]() |
[48] |
Y. Zhang, M. Ma, Z. Jin, Comprehensive learning jaya algorithm for parameter extraction of photovoltaic models, Energy, 211 (2020), 118644. doi: 10.1016/j.energy.2020.118644. doi: 10.1016/j.energy.2020.118644
![]() |
[49] |
Y. Zhang, M. Ma, Z. Jin, Backtracking search algorithm with competitive learning for identification of unknown parameters of photovoltaic systems, Expert Syst. Appl., 160 (2020), 113750. doi: 10.1016/j.eswa.2020.113750. doi: 10.1016/j.eswa.2020.113750
![]() |
[50] |
L. Tang, X. Wang, W. Xu, C. Mu, B. Zhao, Maximum power point tracking strategy for photovoltaic system based on fuzzy information diffusion under partial shading conditions, Sol. Energy, 220 (2021), 523–534. doi: 10.1016/j.solener.2021.03.047. doi: 10.1016/j.solener.2021.03.047
![]() |
[51] |
S. Li, W. Gong, L. Wang, X. Yan, C. Hu, Optimal power flow by means of improved adaptive differential evolution, Energy, 198 (2020), 117314. doi: 10.1016/j.energy.2020.117314. doi: 10.1016/j.energy.2020.117314
![]() |
[52] |
S. Li, W. Gong, C. Hu, X. Yan, L. Wang, Q. Gu, Adaptive constraint differential evolution for optimal power flow, Energy, 235 (2021), 121362. doi: 10.1016/j.energy.2021.121362. doi: 10.1016/j.energy.2021.121362
![]() |
[53] |
W. Gong, Z. Liao, X. Mi, L. Wang, Y. Guo, Nonlinear equations solving with intelligent optimization algorithms: a survey, Complex Syst. Model. Simul., 1 (2021), 15–32. doi: 10.23919/CSMS.2021.0002. doi: 10.23919/CSMS.2021.0002
![]() |