Research article

Multi-Stroke handwriting character recognition based on sEMG using convolutional-recurrent neural networks

  • Received: 14 May 2020 · Accepted: 20 July 2020 · Published: 12 August 2020
  • Despite the increasing use of technology, handwriting has remained an efficient means of communication to date. Indeed, handwriting is a critical motor skill for children's cognitive development and academic success. This article presents a new methodology based on surface electromyographic (sEMG) signals to recognize multi-user, free-style, multi-stroke handwriting characters. The approach uses Deep Learning (DL) architectures, namely convolutional and recurrent neural networks, for feature extraction and sequence recognition. The framework was thoroughly evaluated, obtaining an accuracy of 94.85%. Such handwriting-recognition devices can potentially be applied to artificial intelligence applications that enhance communication and assist people with disabilities. A minimal, illustrative sketch of this kind of convolutional-recurrent pipeline is given below.
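
For context, the following is a minimal sketch (in PyTorch) of the kind of convolutional-recurrent pipeline the abstract describes: a 1-D CNN extracts features from multichannel sEMG windows and a recurrent layer classifies the resulting sequence. The 8-channel input, 512-sample window, layer sizes, and 26-character output are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal, assumed sketch of a convolutional-recurrent classifier for sEMG
# character recognition. Channel count, window length, layer sizes, and the
# 26-class alphabet are illustrative assumptions, not the authors' setup.
import torch
import torch.nn as nn

class EMGCharNet(nn.Module):
    def __init__(self, n_channels: int = 8, n_classes: int = 26, hidden: int = 128):
        super().__init__()
        # 1-D CNN extracts local features along the time axis of the sEMG signal
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.BatchNorm1d(128),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # GRU models the temporal (multi-stroke) structure of the feature sequence
        self.rnn = nn.GRU(input_size=128, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        feats = self.cnn(x)            # (batch, 128, time // 4)
        feats = feats.transpose(1, 2)  # (batch, time // 4, 128)
        _, h = self.rnn(feats)         # final hidden state: (1, batch, hidden)
        return self.fc(h.squeeze(0))   # character logits: (batch, n_classes)

# Example: a batch of four 8-channel sEMG windows, 512 samples each
logits = EMGCharNet()(torch.randn(4, 8, 512))
print(logits.shape)  # torch.Size([4, 26])
```

The pooled CNN features shorten the sequence the recurrent layer has to model, which is the usual motivation for pairing the two stages.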

    Citation: Jose Guadalupe Beltran-Hernandez, Jose Ruiz-Pinales, Pedro Lopez-Rodriguez, Jose Luis Lopez-Ramirez, Juan Gabriel Avina-Cervantes. Multi-Stroke handwriting character recognition based on sEMG using convolutional-recurrent neural networks[J]. Mathematical Biosciences and Engineering, 2020, 17(5): 5432-5448. doi: 10.3934/mbe.2020293

  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)

Metrics

Article views (4868) · PDF downloads (329) · Cited by (2)

Figures and Tables

Figures(11)  /  Tables(3)
