Communication is the means of conveying a message to others and a basic need of every individual. Sign language is a major mode of communication used by mute people because of their inability to speak. Normal people are generally unable to understand sign language, as they communicate through voice. To overcome this communication gap between mute and normal people, this project develops a system that converts the hand gestures of mute people into voice. Various methods have been developed to recognize sign language; the majority are based on computer-vision techniques using a camera, or on sensors mounted on the hand to track and recognize hand movements.
In this project, a glove is designed to capture the hand and finger movements of mute people and recognize Pakistan Sign Language (PSL) using wearable technology. PSL signs are performed by mute people, and the glove converts them into voice by processing the sensor outputs with machine learning techniques. Thirteen different PSL signs are recognized, along with one gesture to switch the output voice between Urdu and English. The glove consists of five flex sensors mounted on the fingers to record finger motion and one Inertial Measurement Unit (IMU) sensor to record the three-dimensional (3-D) coordinates of the hand in free space. These two sensor types collect data for each gesture/sign, which is then processed on a Raspberry Pi for gesture recognition.
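As a rough illustration of how one sensor sample might be acquired on the Raspberry Pi, the sketch below reads the five flex sensors and the IMU and assembles a single feature vector. The specific parts and interfaces (an MCP3008 ADC over SPI for the flex sensors, an MPU-6050 IMU on I2C) and the function names are assumptions for illustration only; the actual hardware interface used by the glove is not specified here.

```python
# Minimal acquisition sketch for a Raspberry Pi based sensor glove.
# ASSUMPTIONS (not stated in the text): flex sensors are read through an
# MCP3008 ADC on SPI, and the IMU is an MPU-6050 on I2C bus 1.
import spidev
import smbus

spi = spidev.SpiDev()
spi.open(0, 0)                    # SPI bus 0, chip-select 0
spi.max_speed_hz = 1_350_000

bus = smbus.SMBus(1)              # I2C bus 1
MPU_ADDR = 0x68
bus.write_byte_data(MPU_ADDR, 0x6B, 0)   # wake the MPU-6050 from sleep

def read_flex(channel):
    """Read one MCP3008 channel (0-7) and return its 10-bit value."""
    reply = spi.xfer2([1, (8 + channel) << 4, 0])
    return ((reply[1] & 3) << 8) | reply[2]

def read_word(reg):
    """Read a signed 16-bit register pair from the MPU-6050."""
    high = bus.read_byte_data(MPU_ADDR, reg)
    low = bus.read_byte_data(MPU_ADDR, reg + 1)
    value = (high << 8) | low
    return value - 65536 if value > 32767 else value

def sample_gesture():
    """Return one feature vector: 5 flex readings + 3-axis acceleration."""
    flex = [read_flex(ch) for ch in range(5)]
    accel = [read_word(reg) for reg in (0x3B, 0x3D, 0x3F)]   # ax, ay, az
    return flex + accel
```

Each call to `sample_gesture()` yields an eight-element vector; in practice several such readings per sign would be concatenated or averaged before classification.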
Three hundred samples per PSL category are collected to train the machine learning techniques: K-Nearest Neighbor (KNN), Support Vector Machine (SVM) and Random Forest (RF). After training, testing is performed in real time. The accuracy achieved is 95.7% with KNN, 95.2% with Random Forest and 97.6% with SVM. The experimental results show that the glove is portable, user-friendly and achieves good real-time performance.
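A minimal sketch of the classifier comparison is given below, assuming the collected samples are stored as a feature matrix with one gesture label per row and that scikit-learn is used for training (the text does not name a library). The file names, train/test split, and hyperparameters are illustrative choices, not the project's actual settings, and the reported accuracies above come from the project's own real-time evaluation.

```python
# Sketch of training and comparing KNN, SVM and Random Forest classifiers.
# ASSUMPTIONS: features/labels are saved as NumPy arrays with hypothetical
# file names; hyperparameters are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

X = np.load("psl_features.npy")   # shape: (n_samples, n_features)
y = np.load("psl_labels.npy")     # one PSL sign label per sample

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "RF":  RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "accuracy:", model.score(X_test, y_test))
```

For real-time use, the trained model would simply be applied to each new feature vector produced by the glove, with the predicted sign passed to a text-to-speech stage in Urdu or English.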