Research article

Tiny bird detection and location guided by heterogeneous binocular images in transformer substation scene

  • Published: 23 January 2026
  • Bird activities such as nesting and perching in transformer substations threaten power grid stability by causing short circuits and insulation failures. Existing bird-repelling devices are inefficient because they lack accurate detection and positioning, so continuous operation wastes energy and creates safety hazards. To address this, this paper develops a tiny bird detection and location system guided by heterogeneous binocular images for precise, targeted repulsion. For well-lit scenes, a two-stage contextual information enhancement network is proposed: it mines multiscale context to highlight tiny bird regions, fuses that context with second-stage features through channel-dimension enhancement, and applies spatial attention for accurate localization. For low-light or occluded scenes, a multiscale contextual feature enhancement network processes infrared images, adopting multibranch cross-level feature fusion and combining a transformer with multisize convolutions to suppress background and thermal-radiation interference. In addition, the heterogeneous binocular cameras are calibrated to compute each bird's spatial distance, and the detection results are integrated with this spatial information to drive a laser repelling device. Experiments in real substation environments show that the system meets engineering requirements for robustness and accuracy: detection in visible images achieves an overall average precision of 59.8%, infrared detection outperforms advanced algorithms on key metrics, and the spatial localization error stays within 4.9%, significantly improving the bird-expulsion success rate while reducing energy consumption. This work provides a reliable technical solution for safeguarding power grid operation and a useful reference for tiny object detection in complex industrial scenarios.
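    The abstract states that the calibrated binocular cameras are used to compute each bird's spatial distance, and that the localization error stays within 4.9%. As a minimal sketch of the underlying geometry (not the paper's actual pipeline), the classical rectified-stereo relation Z = f·B/d recovers depth from the horizontal disparity between the two views; the focal length, baseline, and disparity values below are illustrative assumptions, not the authors' calibration results.

    ```python
    # Hedged sketch: depth from a rectified binocular pair via triangulation.
    # Assumed quantities (illustrative only):
    #   focal_px     - focal length in pixels, from camera calibration
    #   baseline_m   - distance between the two camera centers, in meters
    #   disparity_px - horizontal pixel offset of the matched detection

    def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
        """Depth (meters) of a point observed with the given disparity: Z = f*B/d."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_px * baseline_m / disparity_px

    def relative_error(estimate: float, truth: float) -> float:
        """Relative localization error, the kind of metric the paper bounds at 4.9%."""
        return abs(estimate - truth) / truth

    # Example with made-up numbers: f = 800 px, B = 0.12 m, d = 16 px -> Z = 6.0 m
    z = stereo_depth(800.0, 0.12, 16.0)
    err = relative_error(z, 6.1)  # hypothetical ground-truth distance of 6.1 m
    ```

    In a heterogeneous (visible + infrared) rig the two sensors must first be jointly calibrated so their image planes can be rectified into a common geometry; only then does a per-detection disparity translate into a metric distance that can steer the laser repelling device.
    
    
    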

    Citation: Qiyun Yin, Xiao Li, Chang Xu, Yanjie An, Qingwu Li. Tiny bird detection and location guided by heterogeneous binocular images in transformer substation scene[J]. Electronic Research Archive, 2026, 34(2): 777-812. doi: 10.3934/era.2026035



  • © 2026 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)

Figures and Tables

Figures(14)  /  Tables(11)
