The increasing popularity of wind energy has led to extensive growth in wind turbine installations, which poses challenges when the turbine blades reach the end of their operational lifespan. Suitable recycling techniques are therefore required to extract the different materials from the blades. We explored the use of deep learning-based object detection for sorting shredded end-of-life wind turbine blades into individual material classes. To this end, a custom dataset was employed that combines images of shredded wind turbine blades with a synthetic data generation approach. Three popular object detection architectures (SSD, YOLO, and Faster R-CNN) were implemented and tested, and the impact of different popular backbone networks and Feature Pyramid Networks on accuracy and speed was analyzed. Our results showed that the SSD model with a ResNet18 backbone and a Feature Pyramid Network variant performed well in terms of both accuracy and speed. Moreover, synthetic data generation proved useful but showed a performance decline when transitioning to real-world pictures, suggesting the need to combine synthetic and real-world data. This study highlights the potential of deep learning in recycling and sorting wind turbine blades, encouraging further research in this field.
Citation: Christian Linder, Fabian Rechsteiner, Samuel Eiler. Exploring deep learning for sorting shredded end-of-life wind turbine blades[J]. Clean Technologies and Recycling, 2026, 6(1): 106-201. doi: 10.3934/ctr.2026005
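The abstract mentions a synthetic data generation approach for building the training set. A common way to realize this for object detection is copy-paste compositing: crops of material fragments are pasted onto background images, and the paste location directly yields the bounding-box annotation. The sketch below illustrates this idea in a minimal form; the function names, image sizes, and the plain grey "conveyor belt" background are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def composite(background, fragment, top, left):
    """Paste a fragment crop onto a copy of the background and
    return the composited image plus an (x, y, w, h) bounding box."""
    img = background.copy()
    h, w = fragment.shape[:2]
    img[top:top + h, left:left + w] = fragment
    return img, (left, top, w, h)

def make_sample(bg_size=128, frag_size=32):
    """Generate one synthetic training sample: a random fragment
    pasted at a random location on a uniform background."""
    # Plain grey background standing in for a conveyor-belt image.
    background = np.full((bg_size, bg_size, 3), 90, dtype=np.uint8)
    # Random noise standing in for a cropped shredded-blade fragment.
    fragment = rng.integers(0, 256, (frag_size, frag_size, 3), dtype=np.uint8)
    top = int(rng.integers(0, bg_size - frag_size + 1))
    left = int(rng.integers(0, bg_size - frag_size + 1))
    return composite(background, fragment, top, left)

image, box = make_sample()
```

Because the box coordinates come from the paste operation itself, the labels are exact for free, which is the main appeal of synthetic generation; the performance drop on real-world pictures reported in the abstract reflects the domain gap such composites leave behind.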