Sign Language Recognition and Hand Gestures Review
Kerbala Journal for Engineering Science,
2022, Volume 2, Issue 4, Pages 192-316
Abstract

Deaf people use movements and physical expressions to convey their ideas and feelings to the world. These expressions are called 'sign language', and, like natural languages, many forms of signing exist worldwide. Deaf signers use one or two hands and sometimes other body parts such as the head, lips, or eyes. Their gestures may be static or dynamic, and together they form a rather complex language. Other people therefore need to understand the meaning of these signs and gestures in order to communicate successfully with the Deaf community. Human-computer interaction is an effective tool and a promising direction for facilitating communication and comprehension across the different sign languages used worldwide. The research community has reviewed the most important techniques and models used in deciphering and understanding sign languages, and every new research effort is directed towards improving these means of communication. Some proposed models deal with isolated signs, while others focus on continuous signing. This article summarizes multiple comprehensive reviews of the literature on sign language recognition. The discussion focuses on systems and approaches that deal only with static hand gesture recognition. This work aims to provide a guide for researchers and practitioners to relate their work to existing research and to gain insight into what their work can contribute to the field.
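To make the static-gesture pipeline discussed above concrete, the sketch below shows a minimal template-matching classifier over hand-landmark feature vectors. All names, templates, and coordinate values are hypothetical illustrations; real systems surveyed in this review extract such landmarks from images with a pose estimator and use far richer features and classifiers.

```python
import math

# Hypothetical templates: each static sign is represented by a flat vector of
# normalized 2-D fingertip coordinates (x, y for 5 fingertips = 10 values).
# In practice these vectors would come from a hand-landmark detector.
TEMPLATES = {
    "A": [0.50, 0.40, 0.52, 0.38, 0.54, 0.40, 0.56, 0.42, 0.58, 0.44],
    "B": [0.50, 0.10, 0.52, 0.08, 0.54, 0.08, 0.56, 0.10, 0.42, 0.45],
    "V": [0.45, 0.12, 0.55, 0.12, 0.54, 0.40, 0.56, 0.42, 0.42, 0.45],
}

def classify(features):
    """Return the label of the template nearest (Euclidean) to the input."""
    return min(TEMPLATES, key=lambda label: math.dist(TEMPLATES[label], features))

# A noisy observation near the "B" template should still be labelled "B".
sample = [0.51, 0.11, 0.52, 0.09, 0.55, 0.07, 0.55, 0.11, 0.43, 0.44]
print(classify(sample))  # prints "B"
```

Nearest-neighbour matching on normalized landmarks is only the simplest instance of the family of approaches this review covers; the surveyed systems replace it with SVMs, HOG-feature classifiers, or convolutional networks.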