Research article

DeepFireNet: A real-time video fire detection method based on multi-feature fusion

  • Received: 04 August 2020 Accepted: 25 October 2020 Published: 09 November 2020
  • This paper proposes DeepFireNet, a real-time fire detection framework that combines hand-crafted fire features with a convolutional neural network and can be applied to real-time video collected by surveillance equipment. DeepFireNet takes the video stream of a monitoring device as input. First, based on the static and dynamic characteristics of fire, a large number of non-fire frames in the video stream are filtered out. For the remaining frames, the suspected fire regions are extracted, eliminating the influence of interference sources such as lamps and candles and thereby reducing the effect of complex environments on fire detection. The extracted regions are then encoded and fed into the DeepFireNet convolutional network, which extracts deep image features and finally judges whether the frame contains a fire. The network replaces the 5×5 convolution kernels of the inception layer with two stacked 3×3 kernels and uses only three improved inception layers as its core architecture, which effectively reduces the number of network parameters and significantly lowers the computational cost. Experimental results show that the method applies to many different indoor and outdoor scenes and meets the accuracy and real-time requirements of real-time video detection. The method is therefore of good practical value.

    Citation: Bin Zhang, Linkun Sun, Yingjie Song, Weiping Shao, Yan Guo, Fang Yuan. DeepFireNet: A real-time video fire detection method based on multi-feature fusion[J]. Mathematical Biosciences and Engineering, 2020, 17(6): 7804-7818. doi: 10.3934/mbe.2020397



    Fire is a common disaster that seriously endangers human life and property [1,2]. Traditional fire detection uses sensors such as smoke and temperature detectors to monitor changes in fire-related parameters in the environment [3,4,5]. However, because of the limited detection range of such sensors, these systems cannot cover large monitoring areas, and traditional methods cannot provide valuable information about a detected fire, such as its scale and location [6,7,8,9,10]. In recent years, with the spread of intelligent monitoring equipment and the development of image processing, deep learning and intelligent optimization algorithms, fire monitoring based on video analysis has attracted increasing attention from researchers [11,12,13,14,15,16,17,18,19]. Video-based fire detection built on existing security monitoring is a low-cost, high-efficiency scheme that can greatly reduce the casualties and property losses caused by fire.

    Image-based fire detection technology builds on the characteristics of flames. Chen et al. [20] studied flame irregularity detection in the RGB and HSI color spaces. Fernandez et al. [21] proposed an image-histogram-based method for fire image recognition. Xu et al. [22] applied deep convolutional neural networks to fire image recognition and achieved certain results. Celik and Demirel [23] designed classification rules based on separating the chroma and brightness components in the YCbCr space, but the rules are accurate mainly for larger flames. Foggia et al. [24] combined flame color and dynamic characteristics into a multi-dimensional flame recognition framework; this approach holds a mainstream position among fire detection methods, but its recognition accuracy is still insufficient. Mueller et al. [2] studied the motion of rigid objects and the shape of flames, and proposed extracting flame feature vectors from optical flow information and flame shape to distinguish flames from other objects. With the continuous development of deep learning, Frizzi et al. [25] designed a convolutional-neural-network-based identification algorithm that can classify fire and smoke. Fu et al. [26] used a 12-layer convolutional neural network to detect forest fires and achieved good classification results, but its high computational complexity makes it unsuitable for fire detection in real-time video.

    In daily life, most of the environmental information collected by security monitoring equipment shows non-fire scenes, so most frames in the transmitted video streams are non-fire frames. If non-fire and fire frames are not distinguished before detection, the time complexity of the algorithm increases greatly. To solve this problem, the DeepFireNet framework proposed in this paper filters out non-fire frames during image preprocessing at low time complexity and passes only frames that may contain fire to a more computationally expensive but highly accurate convolutional network. Specifically, the video stream is pulled with OpenCV and a frame of the current stream is obtained. The frame is Gaussian smoothed, and a dual color criterion based on RGB and HSI, built from the static color characteristics of fire, is used to extract the suspected fire regions in the frame. Whether an extracted region is a fire region is then further judged from the dynamic characteristic that a burning area grows rapidly. If a suspected fire region is detected, it is fed into the trained convolutional neural network for fire identification; otherwise the next frame is examined and the convolutional network is not called at all, which greatly reduces the computational cost while maintaining high recognition accuracy. The method performs well for fire detection in real-time video streams.
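    A minimal sketch of this detection loop in Python with OpenCV and a trained Keras model; this is a sketch under stated assumptions, not the authors' released code, and extract_region and is_growing are hypothetical placeholders for the static and dynamic checks detailed in the following sections:

```python
import cv2

def detect_fire(video_source, model, extract_region, is_growing):
    """Hypothetical sketch of the DeepFireNet detection loop described above."""
    cap = cv2.VideoCapture(video_source)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.GaussianBlur(frame, (5, 5), 0)       # noise reduction
        region = extract_region(frame)                   # RGB/HSI color criterion
        if region is None or not is_growing(region):
            continue                                     # non-fire frame: skip the CNN
        patch = cv2.resize(region, (224, 224)) / 255.0   # encode for the network
        if model.predict(patch[None, ...])[0, 0] > 0.5:  # CNN confirmation
            print("fire detected in current frame")
    cap.release()
```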

    During image formation, transmission, reception and processing, external and internal interference caused by the actual performance of the equipment is unavoidable, so various kinds of noise are produced [27,28]. A fire image is also affected by environmental noise such as weather and illumination, so it should be smoothed and filtered before fire detection [29,30,31]. Commonly used methods include mean filtering [32,33], median filtering [34,35] and Gaussian filtering [36].

    Mean filtering is computationally simple and effective at suppressing Gaussian noise, but it destroys edge details in the process. Median filtering performs well at removing random noise while preserving the correlation of image texture; however, its time complexity is high, making it unsuitable for real-time video. Gaussian filtering smooths an image by neighborhood averaging in which pixels at different positions receive different weights; it is a classic smoothing method that treats images more gently. To smooth large numbers of images efficiently, this paper uses Gaussian filtering for noise reduction. The effects of the three filters are shown in Figure 1.

    Figure 1.  Comparison of filtering effects.
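    For reference, the three filters compared above map onto standard OpenCV calls; a minimal sketch (the 5 × 5 kernel size is an assumption, since the paper does not state one):

```python
import cv2

img = cv2.imread("frame.jpg")                 # hypothetical input frame
mean_f   = cv2.blur(img, (5, 5))              # mean filter: fast, but blurs edges
median_f = cv2.medianBlur(img, 5)             # median filter: strong on impulse noise, slower
gauss_f  = cv2.GaussianBlur(img, (5, 5), 0)   # Gaussian: distance-weighted neighborhood average
```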

    In fire image recognition, the suspected fire regions must be extracted to reduce the influence of complex backgrounds and improve recognition accuracy. By judging the static and dynamic characteristics of fire, this paper achieves accurate extraction of the fire region [37,38].

    (1) Fire static feature extraction

    Color is the main static feature of fire. In this paper, the fire feature extraction based on color is realized by establishing RGB and HSI criterion models [39,40,41].

    The RGB model corresponds to the three colors red, green and blue. According to the trichromatic principle, the amount of light is expressed in units of the primary lights, and any color F in RGB color space can be expressed by additively mixing different amounts of the three primary components R, G and B.

    The HSI color model describes color with three parameters. H (hue) indicates a certain range of colors, or the human perception of different colors. S (saturation) indicates the purity of a color; a color becomes more vivid as saturation increases. I (intensity) corresponds to imaging brightness and image gray scale. The HSI model is built on two important facts: the I component is independent of the color information of the image, and the H and S components closely match the way humans perceive color. These characteristics make the HSI model well suited to color feature detection and analysis. The RGB and HSI criteria are as follows.

    $$\begin{cases} R > R_T \\ G > G_T \\ R > G > B \\ S > 0.2 \\ S \geq (255 - R)\, S_T / R_T \end{cases} \quad (1)$$

    where R, G and B are the color components of the RGB model, S is the color saturation of the HSI model, and R_T, G_T and S_T are the thresholds of the R component, the G component and the saturation, respectively. Extensive experiments indicate that R_T and G_T both range from 115 to 135 and S_T from 55 to 65. The effect is shown in Figure 2.

    Figure 2.  No interference source.
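    A pixel-wise sketch of criterion (1) with OpenCV; the concrete threshold values are assumptions taken from the mid-points of the ranges above, and S is kept on OpenCV's 0-255 scale, so the paper's S > 0.2 is interpreted as S > 0.2 × 255:

```python
import cv2
import numpy as np

def fire_color_mask(bgr, r_t=125, g_t=125, s_t=60):
    """Hypothetical implementation of the RGB/HSI criterion of Eq (1)."""
    b, g, r = cv2.split(bgr.astype(np.float32))
    s = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 1].astype(np.float32)
    mask = ((r > r_t) & (g > g_t) & (r > g) & (g > b)   # R > R_T, G > G_T, R > G > B
            & (s > 0.2 * 255)                           # minimum saturation
            & (s >= (255.0 - r) * s_t / r_t))           # S >= (255 - R) * S_T / R_T
    return mask.astype(np.uint8) * 255                  # binary mask of suspected pixels
```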

    Detecting fire from its color characteristics alone is inaccurate: interference sources such as candles, lamps and lighters have colors similar to fire and are easily mistaken for it, as shown in Figure 3. To solve this problem, this paper extracts the fire region by combining the static and dynamic features of fire. The suspected regions close to fire color are first located by the color criterion, and the dynamic fire features of these regions are then checked, completing the extraction of the fire region.

    Figure 3.  There are interference sources.

    (2) Fire dynamic feature extraction

    The change of burning area is one of the main manifestations of the dynamic characteristics of fire. In the initial stage of burning, the fire area grows rapidly, whereas interference sources such as lamps do not exhibit such rapid area change. Therefore, this paper extracts the dynamic features of fire with moving-target detection techniques [42,43].

    Commonly used moving-target detection methods are the optical flow method, the inter-frame difference method and the background difference method. The optical flow method [44] works for both moving and static cameras, but its computation is too complicated for real-time video processing. The inter-frame difference method [45] is simple to implement, but the extracted objects tend to contain holes. The background difference method is slightly more complex than inter-frame differencing, but repeated experiments show that it meets the requirements of real-time video stream processing and yields a more complete target image, which helps determine the fire region. Therefore, the background difference method is used to extract the dynamic characteristics of fire, as shown in Figure 4.

    Figure 4.  Fire extraction based on dynamic features.
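    A minimal sketch of this dynamic check, assuming OpenCV's MOG2 subtractor in place of the paper's unspecified background model, and a hypothetical growth_ratio threshold for "rapid area growth":

```python
import cv2
import numpy as np

def growing_regions(frames, growth_ratio=1.2):
    """Yield frames whose foreground (moving) area grows rapidly, as fire does."""
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    kernel = np.ones((3, 3), np.uint8)
    prev_area = None
    for frame in frames:
        fg = subtractor.apply(frame)                        # moving-pixel mask
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)   # remove isolated noise
        area = cv2.countNonZero(fg)
        if prev_area and area > prev_area * growth_ratio:   # burning area grows rapidly
            yield frame, fg                                 # candidate fire frame
        prev_area = area
```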

    Relying on the dynamic characteristics of fire alone, recognition would be disturbed by the movement of other objects in the monitored scene. Therefore, the static and dynamic characteristics of fire are judged together to separate the fire region from the background image, as shown in Figure 5.

    Figure 5.  Fire extraction based on multiple features.

    Convolutional neural networks (CNNs) perform well in image recognition [46,47,48]. A convolutional network can extract deep image features and complete high-precision image recognition.

    This paper uses Keras, a widely used deep learning framework. The initial weights of the CNN are based on the InceptionV1-OnFire convolutional network proposed by Dunnings and Breckon [49]; they provide a strong initialization that speeds up convergence and avoids over-fitting on relatively small data sets. Compared with the nine linearly stacked inception modules of the InceptionV1 network [50], this network uses only three consecutive inception modules, which greatly simplifies the architecture. Each inception module uses the same convolutional structure as InceptionV1, consisting of 1 × 1, 3 × 3 and 5 × 5 convolution kernels and a 3 × 3 pooling layer. The layers before and after the three inception modules adopt the same architecture as InceptionV1.

    The main network of this paper improves the inception module by decomposing each 5 × 5 convolution kernel into two stacked 3 × 3 kernels. The receptive field is unchanged, and two 3 × 3 convolutions in series have a stronger representation ability than a single 5 × 5 convolution. The parameter ratio of two 3 × 3 convolutions to one 5 × 5 convolution is (9 + 9)/25, so the decomposition reduces the parameters and the computation of these branches by 28% [51]. The network input is a 3-channel fire image of width and height 224 × 224. The layer-inception relationship is shown in Figure 6 and the network structure in Figure 7.

    Figure 6.  Layer inception correlation.
    Figure 7.  DeepFireNet model diagram.
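    The decomposed branch can be expressed directly as a Keras block; a minimal sketch with hypothetical filter counts (the paper does not list them here):

```python
from tensorflow.keras import layers

def improved_inception(x, f1=64, f3_in=96, f3=128, f5_in=16, f5=32, fpool=32):
    """Modified Inception block: the 5x5 branch is replaced by two stacked 3x3
    convolutions with the same receptive field. Filter counts are placeholders."""
    b1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    b2 = layers.Conv2D(f3_in, 1, padding="same", activation="relu")(x)
    b2 = layers.Conv2D(f3, 3, padding="same", activation="relu")(b2)
    b3 = layers.Conv2D(f5_in, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f5, 3, padding="same", activation="relu")(b3)   # first 3x3
    b3 = layers.Conv2D(f5, 3, padding="same", activation="relu")(b3)   # second 3x3 replaces 5x5
    b4 = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    b4 = layers.Conv2D(fpool, 1, padding="same", activation="relu")(b4)
    return layers.Concatenate()([b1, b2, b3, b4])
```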

    The VGG16 [52] network is used as the comparison network for the algorithm in this paper. Its input is likewise a 3-channel fire image of width and height 224 × 224. The convolutional and max-pooling layers of VGG16 are retained for feature extraction, and two fully connected layers are added to receive the extracted features and perform classification and prediction. A Dropout layer is inserted between the two fully connected layers to limit the number of participating neurons and reduce over-fitting. Since fire identification is a binary classification problem, the optimizer is RMSProp, the activation function is sigmoid, and the loss function is sigmoid_cross_entropy_with_logits, which solves the underlying logistic regression problem.

    The loss function is

    $$\mathrm{loss} = \max(x, 0) - x\, z + \log\left(1 + e^{-|x|}\right) \quad (2)$$

    where x denotes the predicted value (the logit) and z the label value.
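    Equation (2) is the numerically stable form of binary cross entropy on logits, as used, for example, by TensorFlow's sigmoid_cross_entropy_with_logits; a small sketch verifying that it reduces to the usual cross entropy:

```python
import numpy as np

def sigmoid_cross_entropy_with_logits(x, z):
    # Numerically stable form of Eq (2): x is the logit, z the 0/1 label.
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

# Sanity check against the naive cross entropy -z*log(p) - (1-z)*log(1-p):
x, z = 1.5, 1.0
p = 1.0 / (1.0 + np.exp(-x))  # sigmoid(x)
assert np.isclose(sigmoid_cross_entropy_with_logits(x, z), -np.log(p))
```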

    The algorithm runs on a personal computer with an Intel(R) Core(TM) i5-7300HQ CPU and a GTX 1060 GPU.

    The training data used in this paper come from public network fire image data sets and public video databases, such as the furg-fire-dataset (https://github.com/steffensbola/furg-fire-dataset) also used in [49]. About 10 500 fire images and 10 500 non-fire images are used, covering fire and non-fire scenes in indoor and outdoor spaces such as offices, laboratories, kitchens, forests, streets, buildings and vehicles, so as to improve the generalization ability of the network. Of the 21 000 images, 15 300 are used for training, 1 700 for validation, and 10 videos (about 4 000 frames) for testing. After training, the model is verified on a user-defined data set and on the furg-fire-dataset; the validation results are shown in Tables 1 and 2. Before images are read by the network, the fire region is extracted and mirrored to expand the data set, and the images are then cropped, classified, normalized and resized to a width and height of 224 × 224, forming the final fire data set.
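    A minimal sketch of this per-image preparation step (the function name and the normalization to [0, 1] are assumptions):

```python
import cv2

def prepare_samples(region):
    """Mirror the extracted fire region to double the data, then resize to 224 x 224."""
    img = cv2.resize(region, (224, 224))
    return [img / 255.0, cv2.flip(img, 1) / 255.0]   # original + horizontal mirror
```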

    Table 1.  Test results of test set.
    Video name Total video frames Flame frames Non-flame frames TPR/% FPR/%
    video1 358 304 54 97.3 4.3
    video2 423 385 38 97.7 4.2
    video3 285 274 11 96.8 4.6
    video4 347 347 0 98.6 3.7
    video5 355 312 43 97.6 4.2
    video6 508 223 285 95.4 4.9
    video7 278 52 226 95.7 4.7
    video8 456 37 419 96.2 5.7
    video9 362 8 354 95.4 5.6
    video10 532 258 274 96.5 4.3
    video11 630 630 0 97.8 4.3
    video12 900 900 0 98.4 3.8
    video13 900 690 210 97.6 4.1
    video14 900 900 0 97.8 3.7
    video15 900 855 45 97.4 4.6
    video16 3600 0 3600 95.6 100.0
    video17 600 600 0 97.4 3.6
    video18 900 900 0 97.5 3.4
    video19 900 900 0 97.4 3.3
    video20 900 900 0 97.6 4.3

    Table 2.  Performance index values of the five algorithms.
    Algorithm ACC/% TPR/% FPR/% fps
    VGG16 90.28 96.52 11.63 2.0
    AlexNet 91.8 91.5 8.0 4.6
    InceptionV1 93.58 95.23 9.4 2.6
    InceptionV1-OnFire 93.85 96.35 9.85 9.4
    DeepFireNet(ours) 96.86 97.42 4.36 40.0


    Fire identification is a binary classification problem, so this paper uses ROC-based metrics [53] and the total time needed to process the test video set as the performance indices of the model.

    True Positive Rate:

    $$TPR = \frac{TP}{TP + FN} \quad (3)$$

    False Positive Rate:

    $$FPR = \frac{FP}{FP + TN} \quad (4)$$

    Accuracy Rate:

    $$ACC = \frac{TP + TN}{TP + FP + TN + FN} \quad (5)$$

    where TP is the number of fire images correctly identified as fire, FP the number of non-fire images incorrectly classified as fire, TN the number of non-fire images correctly identified as non-fire, and FN the number of fire images incorrectly identified as non-fire.
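    These three rates can be computed directly from the confusion counts; a minimal helper, added here for illustration:

```python
def rates(tp, fp, tn, fn):
    """TPR, FPR and ACC of Eqs (3)-(5), returned as percentages."""
    tpr = 100.0 * tp / (tp + fn)
    fpr = 100.0 * fp / (fp + tn)
    acc = 100.0 * (tp + tn) / (tp + fp + tn + fn)
    return tpr, fpr, acc
```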

    Training uses 10-fold cross-validation: the training samples are divided into 10 folds, nine randomly chosen folds are used for model training and one for validation, and the experiment is repeated 10 times. For the nine training folds, images are grouped into batches of 20 and randomly divided into 3 400 batches, for 17 000 training iterations. The loss value recorded during training decreases steadily as the number of iterations grows, and the accuracy stabilizes at 0.967, which meets the training requirements and achieves the learning purpose. The trained model is saved in h5 format; the video test set is then loaded with OpenCV, a widely used open-source image processing library, to simulate the real-time video stream collected by a camera. The test results are shown in Table 1 below, where video1-10 are samples from the user-defined data set and video11-20 are sample videos from the furg-fire-dataset. The algorithm shows high accuracy on the test set. Figure 8 compares the time spent by the five methods on each test video.

    Figure 8.  Comparison of processing time of algorithms on test sets.

    The test data set in Table 2 consists of the user-defined data set and the furg-fire-dataset; the resulting frames per second (fps) are also shown in Table 2. When an input image is not tested by the convolutional network and only the dynamic and static characteristics of fire are used, the algorithm runs at 55 fps; with convolutional network detection alone, it runs at 25 fps. Since most images collected by monitoring equipment in daily environments are non-fire images, the algorithm only checks the dynamic and static fire characteristics most of the time and can thus process 55 frames per second; the convolutional network is invoked only when a suspected fire image is detected, and although the frame rate then drops, it remains much higher than that of the compared algorithms. The results in Table 2 show significant run-time performance gains for the reduced-complexity DeepFireNet and InceptionV1-OnFire architectures over their parent architectures. The experiments show that the VGG16 network is unsuitable for real-time video detection, and that the algorithm implemented in this paper is superior to the InceptionV1-OnFire network in both fire detection accuracy and time complexity. Although false detections still occur, the fire identification accuracy exceeds 96%. Especially when the video contains a large number of non-fire frames, the time complexity of the proposed algorithm is significantly lower than that of the VGG16 and InceptionV1-OnFire networks.

    For real-time video, the static and dynamic characteristics of fire serve as the initial judgment, filtering out the large number of frames without interference sources. Only when a suspected fire is detected does the system call the convolutional network to examine the suspected fire region of the frame a second time. This design improves the accuracy of fire detection while reducing the computational complexity, and works well for real-time video processing. Overall, the method implemented in this paper performs well.

    Figure 9.  Test set video picture example.

    With the development of intelligent monitoring, realizing fire warning through monitoring equipment is of great significance for reducing the casualties and property losses caused by fire. Compared with traditional algorithms, this paper proposes a fire recognition algorithm that combines high recognition accuracy with low time complexity. The algorithm is fairly general and achieves a high recognition rate for fires in different scenes.

    To meet the real-time requirement of video stream processing and cope with interference from complex environments such as light sources and fast-moving objects, this paper first applies a static and dynamic fire feature detection algorithm of extremely low time complexity to extract suspected fire regions and filter out the large number of non-fire images, and then feeds the detected suspected regions into the convolutional network to complete fire identification.

    By filtering out a large number of non-fire images and improving the inception-layer convolutional network, the algorithm greatly reduces time complexity; by extracting the fire regions from the images, it greatly reduces the interference of complex environments on the identification process, so the convolutional network only needs to focus on fire features, which effectively improves recognition accuracy.

    Because a fire is often accompanied by large amounts of smoke [54,55,56], future work will study the accurate detection of fire smoke, so as to better ensure timely and accurate fire warning in more complex environments.

    This work is supported by the CERNET Innovation Project (No. NGII20190605), High Education Science and Technology Planning Program of Shandong Provincial Education Department (Grants No. J18KA340, J18KA385), Yantai Key Research and Development Program (Grants No. 2020YT06000970, 2019XDHZ081).

    The authors declare no conflict of interest.



    [1] B. U. Toreyin, Y. Dedeoglu, A. E. Cetin, Flame detection in video using hidden Markov models, IEEE International Conference on Image Processing, 2 (2005), II-1230.
    [2] O. Gunay, K. Taşdemir, B. U. Toreyin, A. E. Çetin, Fire detection in video using LMS based active learning, Fire Technol., 46 (2010), 551-577.
    [3] H. Zhao, S. Zuo, M. Hou, W. Liu, L. Yu, X. Yang, et al., A novel adaptive signal processing method based on enhanced empirical wavelet transform technology, Sensors, 18 (2018), 3323. doi: 10.3390/s18103323
    [4] J. Y. Sun, S. Y. Qi, Design in Fire Prevention Based on Multi-Sensor and WSN, Appl. Mech. Mater., 713 (2015), 2237-2240.
    [5] J. Sun, H. Jin, Intelligent design in fire prevention based on WSN, 2011 International Conference on Uncertainty Reasoning and Knowledge Engineering. IEEE, 2 (2011), 169-172.
    [6] W. Deng, J. Xu, Y. Song, H. Zhao, Differential evolution algorithm with wavelet basis function and optimal mutation strategy for complex optimization problem, Appl. Soft Comput., 2020 (2020), 106724.
    [7] B. Ko, K. H. Cheong, J. Y. Nam, Early fire detection algorithm based on irregular patterns of flames and hierarchical Bayesian Networks, Fire Saf. J., 45 (2010), 262-270. doi: 10.1016/j.firesaf.2010.04.001
    [8] M. Mueller, P. Karasev, I. Kolesov, A. Tannenbaum, Optical flow estimation for flame detection in videos, IEEE Trans. Image Process., 22 (2013), 2786-2797. doi: 10.1109/TIP.2013.2258353
    [9] H. Zhao, J. Zheng, W. Deng, Y. Song, Semi-supervised broad learning system based on manifold regularization and broad network, IEEE Trans. Circuits Syst., 67 (2020), 983-994. doi: 10.1109/TCSI.2019.2959886
    [10] W. Deng, H. Liu, J. Xu, H. Zhao, Y. Song, An improved quantum-inspired differential evolution algorithm for deep belief network, IEEE Trans. Instrum. Meas., 69 (2020), 7319-7327. doi: 10.1109/TIM.2020.2983233
    [11] Y. Liu, Y. Mu, K. Chen, Y. Li, J. Guo, Daily activity feature selection in smart homes based on pearson correlation coefficient, Neural Process. Lett., 51 (2020), 1771-1787. doi: 10.1007/s11063-019-10185-8
    [12] R. Chen, S. K. Guo, X. Z. Wang, T. L. Zhang, Fusion of multi-RSMOTE with fuzzy integral to classify bug reports with an imbalanced distribution, IEEE Trans. Fuzzy Syst., 27 (2019), 2406-2420. doi: 10.1109/TFUZZ.2019.2899809
    [13] W. Deng, H. Liu, J. Xu, H. Zhao, Y. Song, An improved quantum-inspired differential evolution algorithm for deep belief network, IEEE Trans. Instrum. Meas., 69 (2020), 7319-7327. doi: 10.1109/TIM.2020.2983233
    [14] Y. Xu, H. Chen, J. Luo, Q. Zhang, S. Jiao, X. Zhang, Enhanced Moth-flame optimizer with mutation strategy for global optimization, Inf. Sci., 492 (2019), 181-203. doi: 10.1016/j.ins.2019.04.022
    [15] Y. Liu, X. Wang, Z. Zhai, R. Chen, Y. Jiang, Timely daily activity recognition from headmost sensor events, ISA Trans., 94 (2019), 379-390. doi: 10.1016/j.isatra.2019.04.026
    [16] W. Deng, J. Xu, H. Zhao, Y. Song, A novel gate resource allocation method using improved PSO-based QEA, IEEE Trans. Intell. Transp. Syst., 2020 (2020), 1-9.
    [17] H. Chen, A. A. Heidari, H. Chen, M. Wang, Z. Pan, A. H. Gandomi, Multi-population differential evolution-assisted Harris hawks optimization: Framework and case studies, Future Gener. Comput. Syst., 111 (2020), 175-198. doi: 10.1016/j.future.2020.04.008
    [18] Y. Xue, B. Xue, M. Zhang, Self-adaptive particle swarm optimization for large-scale feature selection in classification, ACM Trans. Knowl. Discovery Data, 13 (2019), 1-27.
    [19] W. Deng, J. Xu, Y. Song, H. Zhao, An effective improved co-evolution ant colony optimization algorithm with multi-strategies and its application, Int. J. Bio-Inspired Comput., 2019 (2019), 1-10.
    [20] T. H. Chen, P. H. Wu, Y. C. Chiou, An early fire-detection method based on image processing, 2004 International Conference on Image Processing, 3 (2004), 1707-1710.
    [21] A. Fernandez, M. X. Alvarez, F. Bianconi, Texture description through histograms of equivalent patterns, J. Math. Imaging Vision, 45 (2013), 76-102. doi: 10.1007/s10851-012-0349-8
    [22] W. Xu, C. Tian, S. Fang, Fire automatic recognition based on image visual feature, Comput. Eng., 18 (2003), 112-113.
    [23] T. Celik, H. Demirel, Fire detection in video sequences using a generic color model, Fire Saf. J., 44 (2009), 147-158. doi: 10.1016/j.firesaf.2008.05.005
    [24] P. Foggia, A. Saggese, M. Vento, Real-time fire detection for video-surveillance applications using a combination of experts based on color, shape, and motion. IEEE Trans. Circuits Syst. Video Technol., 25 (2015), 1545-1556. doi: 10.1109/TCSVT.2015.2392531
    [25] S. Frizzi, R. Kaabi, M. Bouchouicha, J. Ginoux, E. Moreau, F. Fnaiech, Convolutional neural network for video fire and smoke detection, IECON 2016-42nd Annual Conference of the IEEE Industrial Electronics Society, 2016,877-882.
    [26] T. J. Fu, C. E. Zheng, Y. Tian, Q. M. Qiu, S. J. Lin, Forest fire recognition based on deep convolutional neural network under complex background, Comput. Modernization, 3 (2016), 52-57.
    [27] M. Kang, B. S. Wang, An image filtering method based on image enhancement, Geomatics Inf. Sci. Wuhan Univ., 34 (2009), 822-825.
    [28] Y. Gu, L. J. Qin, L. L. Jiang, Research on PCA and K-SVD joint filtering method, Electro-Optic Technol. Appl., 31 (2016), 31-36+45.
    [29] Y. Q. Zhao, J. Yang, Hyperspectral image denoising via sparse representation and low-rank constraint, IEEE Trans. Geosci. Remote Sens., 53 (2014), 296-308.
    [30] Y. Xu, Z. Wu, Z. Wei, Spectral-spatial classification of hyperspectral image based on low-rank decomposition, IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 8 (2015), 2370-2380. doi: 10.1109/JSTARS.2015.2434997
    [31] I. Turkmen, The ANN based detector to remove random-valued impulse noise in images, J. Visual Commun. Image Representation, 34 (2016), 28-36. doi: 10.1016/j.jvcir.2015.10.011
    [32] K. J. Wang, X. Y. Xiong, Z. Ren, Highly efficient mean filtering algorithm, Appl. Res. Comput., 27 (2010), 434-438.
    [33] D. Goyal, M. Singhal, Area-efficient FPGA model of LMS filtering algorithm, Proceedings of the International Conference on Recent Cognizance in Wireless Communication & Image Processing, 2016,943-952.
    [34] B. Hu, Infrared image de-noising based on wavelet transform and improved median filtering, Mod. Electro. Tech., 34 (2011), 50-52.
    [35] W. L. Jiang, G. L. Li, W. B. Luo, Application of improved median filtering algorithm to image de-noising, Adv. Mater. Res., 998 (2014), 838-841.
    [36] H. K. Xu, Y. Y. Qin, H. R. Chen, An improved edge detection algorithm based on Canny, Infrared Technol., 36 (2014), 210-214.
    [37] Q. Zhang, J. Xu, L. Xu, H. Guo, Deep convolutional neural networks for forest fire detection, 2016 International Forum on Management, Education and Information Technology Application, Atlantis Press, 2016.
    [38] Y. Zhao, Z. Zhou, M. Xu, Forest fire smoke video detection using spatiotemporal and dynamic texture features, J. Electr. Comput. Eng., 2015 (2015), 1-7.
    [39] G. F. Shidik, F. N. Adnan, C. Supriyanto, R. A. Pramunendar, P. N. Andono, Multi color feature, background subtraction and time frame selection for fire detection, 2013 International Conference on Robotics, Biomimetics, Intelligent Computational Systems. IEEE, 2013,115-120.
    [40] R. C. Gonzalez, R. E. Woods, Digital image processing (3rd Edition), Prentice-Hall, Inc., 2007.
    [41] Y. Wang, H. Wang, C. Yin, M. Dai, Biologically inspired image enhancement based on Retinex, Neurocomputing, 177 (2016), 373-384. doi: 10.1016/j.neucom.2015.10.124
    [42] J. Chen, Y. He, J. Wang, Multi-feature fusion based fast video flame detection, Build. Environ., 45 (2010), 1113-1122. doi: 10.1016/j.buildenv.2009.10.017
    [43] B. U. Toreyin, Y. Dedeoglu, U. Gudukbay, A. E. Cetin, Computer vision based method for real-time fire and flame detection, Pattern Recognit. Lett., 27 (2006), 49-58. doi: 10.1016/j.patrec.2005.06.015
    [44] Y. J. Hu, Z. F. Li, Y. M. Hu, Theory and application of motion analysis based on optical flow, Comput. Meas. Control, 15 (2007), 219-221.
    [45] H. Zhu, D. Y. Luo, Q. X. Cao, Moving objects detection algorithm based on two consecutive frames subtraction and background subtraction, Comput. Meas. Control, 13 (2005), 215-217.
    [46] T. Zhang, H. Zhang, R. Wang, Y. Wu, A new JPEG image steganalysis technique combining rich model features and convolutional neural networks, Math. BioSci. Eng., 16 (2019), 4069-4081. doi: 10.3934/mbe.2019201
    [47] K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising, IEEE Trans. Image Process., 26 (2016), 3142-3155.
    [48] E. K. Wang, F. Wang, R. Sun, X. Liu, A new privacy attack network for remote sensing images classification with small training samples, Math. BioSci. Eng., 16 (2019), 4456-4476. doi: 10.3934/mbe.2019222
    [49] J. Dunnings, T. P. Breckon, Experimentally defined convolutional neural network architecture variants for non-temporal real-time fire detection, 2018 25th IEEE International Conference on Image Processing (ICIP), 2018, 1558-1562.
    [50] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, et al., Going deeper with convolutions, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, 2015, 1-9.
    [51] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, Z. Wojna, Rethinking the inception architecture for computer vision, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, 2818-2826.
    [52] J. Zhang, C. Lu, X. Li, H. J. Kim, J. Wang, A full convolutional network based on DenseNet for remote sensing scene classification, Math. Biosci. Eng., 16 (2019), 3345-3367. doi: 10.3934/mbe.2019167
    [53] J. O'Malley, K. H. Zou, Bayesian multivariate hierarchical transformation models for ROC analysis, Stat. Med., 25 (2010), 459-479.
    [54] J. Sharma, O. C. Granmo, M. Goodwin, J. T. Fidje, Deep convolutional neural networks for fire detection in images, International Conference on Engineering Applications of Neural Networks, 2017,183-193.
    [55] A. Fernandez, M. X. Alvarez, F. Bianconi, Texture description through histograms of equivalent patterns, J. Math. Imaging Vision, 45 (2013), 76-102. doi: 10.1007/s10851-012-0349-8
    [56] J. T. Shi, F. N. Yuan, X. Xia, Research progress of video smoke detection, J. Image Graphics, 23 (2018), 303-322.
  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
