Research article

Innovative sign language accessibility technique for hearing and speech impaired: deep learning-based hand gesture recognition for communication

  • Published: 03 November 2025
  • MSC : 37M10

  • Sign language (SL) plays a significant role in communication between deaf and hearing people. Individuals who cannot speak struggle to convey their message to others, and because most hearing people have no formal sign language education, transferring messages in an emergency is especially difficult. One solution to this problem is to convert SL into spoken language. Gesture-to-speech systems usually rely on either vision-based or non-vision-based technologies, such as cameras or wearable sensors. However, many existing solutions lack cost-effectiveness and flexibility; for example, some depend on specific hardware or only function in controlled environments. This paper proposes the Advancing Sign Language Accessibility using Deep Learning-Based Hand Gesture Recognition (ASLA-DLHGR) technique for hearing- and speech-impaired individuals. The goal of the ASLA-DLHGR technique is to recognize hand gestures for communication by people with disabilities. Initially, data pre-processing is performed using a bilateral filtering (BF) model. The ASLA-DLHGR technique then employs the SqueezeNet model to learn composite features from the pre-processed data, and a tunicate swarm algorithm (TSA)-based hyperparameter tuning process is performed to enhance the SqueezeNet model's performance. For gesture recognition, a hybrid convolutional neural network and bidirectional long short-term memory (CNN-BiLSTM) model is implemented. To demonstrate the gesture recognition proficiency of the ASLA-DLHGR method, a comprehensive comparative study is carried out on an American SL dataset. In this comparison, the ASLA-DLHGR method achieved a superior accuracy of 99.98% over existing models.
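The bilateral-filtering pre-processing step mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is an illustrative implementation, not the authors' code; the window radius and the two σ parameters below are assumed values chosen for the example:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing: each output pixel is a weighted mean of its
    neighbours, weighted by both spatial distance and intensity difference."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros((h, w), dtype=float)
    # Precompute the spatial (Gaussian) kernel once; it never changes.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: pixels with very different intensity get ~zero weight,
            # which is what preserves gesture contours while smoothing noise.
            rangek = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))
            wgt = spatial * rangek
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return out
```

Unlike a plain Gaussian blur, the range kernel keeps a sharp step edge sharp: with `sigma_r=0.1`, a pixel on the dark side of a 0-to-1 edge gives its bright neighbours a weight of about exp(-50), so hand boundaries survive the smoothing.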

    Citation: Najm Alotaibi, Alanoud Subahi, Nouf Atiahallah Alghanmi, Mohammed Rizwanullah. Innovative sign language accessibility technique for hearing and speech impaired: deep learning-based hand gesture recognition for communication[J]. AIMS Mathematics, 2025, 10(11): 25154-25174. doi: 10.3934/math.20251113
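The TSA-based hyperparameter search described in the abstract can be sketched, in simplified form, as a population of agents contracting around the best solution found so far. This is a toy sketch, not the paper's exact formulation: the jet-propulsion/swarm update below is a simplified reading of the tunicate swarm algorithm, and the sphere objective stands in for a real validation-loss function of the hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def tsa_minimize(f, dim, bounds, n_agents=20, iters=150):
    """Simplified tunicate swarm optimisation: each agent combines jet
    propulsion (a random move relative to the best-known position) with
    swarm behaviour (averaging with the previous agent's position)."""
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_agents, dim))
    best = min(pos, key=f).copy()
    p_min, p_max = 1.0, 4.0
    for _ in range(iters):
        for i in range(n_agents):
            c1, c2, c3 = rng.random(3)
            M = p_min + c1 * (p_max - p_min)   # social-force magnitude
            A = (c2 + c3 - 2 * c1) / M         # combined gravity/advection term
            dist = np.abs(best - c2 * pos[i])  # distance to the food source
            new = best + A * dist if rng.random() >= 0.5 else best - A * dist
            if i > 0:                          # swarm: drift toward neighbour
                new = (new + pos[i - 1]) / 2.0
            pos[i] = np.clip(new, lo, hi)
        cand = min(pos, key=f)
        if f(cand) < f(best):                  # best-so-far is kept monotone
            best = cand.copy()
    return best

# Minimise a sphere function as a stand-in for a validation-loss objective
# over three hypothetical hyperparameters.
best = tsa_minimize(lambda x: float(np.sum(x**2)), dim=3, bounds=(-5, 5))
```

In the paper's setting, `f` would train or evaluate SqueezeNet for a candidate hyperparameter vector and return the validation error, with `bounds` covering ranges such as learning rate and batch size.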


  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)