Computer Science

Real-Time Sign Language Recognition Based on Motion Gesture Identification Using Wireless Body Area Sensors


Authors: Osman Salem, Philippe Ea, Ahmed Mehaoua, Raouf Boutaba

To improve communication between deaf and hearing individuals using handheld devices, we propose a lightweight approach for the quick identification of words in American Sign Language (ASL). The main idea is to convert ASL hand gestures into words displayed on a smartphone. We use two Myo armbands, one on each hand, to acquire inertial data and muscle activity during movement. The armbands are paired with a smartphone or tablet that receives and aggregates the data, classifies it, and translates the Myo signals into the corresponding ASL words, which are displayed as readable text (or rendered as speech) for hearing people who do not understand ASL. Deaf people use sign language for everyday communication, so translating such gestures into text or speech in mobile applications is of clear interest: it allows these gestures to be mapped to general-purpose input for interactive applications and enables a simultaneous interpretation system between spoken and sign languages. To reduce processing complexity and memory usage on portable devices, we propose an aggregation technique that reduces the dimensionality of the acquired data. Features extracted from the aggregated data are then fed into several classifiers, namely Support Vector Machine (SVM), Random Forest (RF), and Decision Tree (DT), to identify the associated word. Our experimental results demonstrate that data aggregation significantly reduces processing time while achieving recognition accuracy similar to that of the SVM, RF, and DT classifiers without aggregation. The conducted experiments and performance analysis show that the proposed approach is faster than existing work and achieves a recognition accuracy of 98%.
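The abstract does not spell out the aggregation scheme. As a rough illustration only, a common way to reduce the dimensionality of streaming armband data is to collapse fixed-size windows of raw samples into a few summary statistics per channel; the minimal sketch below assumes 8 EMG channels and non-overlapping 50-sample windows, both of which are assumptions rather than the authors' actual parameters.

```python
import numpy as np


def aggregate_window(window: np.ndarray) -> np.ndarray:
    """Collapse a (samples x channels) window of raw Myo data into a
    small per-channel feature vector (mean, std, min, max).

    For a hypothetical 50x8 EMG window this turns 400 raw values into
    32 features, sharply reducing the input size seen by the classifier.
    """
    return np.concatenate([
        window.mean(axis=0),
        window.std(axis=0),
        window.min(axis=0),
        window.max(axis=0),
    ])


def aggregate_stream(samples: np.ndarray, win_len: int = 50) -> np.ndarray:
    """Split a recording into non-overlapping windows and aggregate each.

    Returns one aggregated feature vector per window; win_len = 50 is an
    illustrative choice, not a value taken from the paper.
    """
    n_windows = len(samples) // win_len
    return np.stack([
        aggregate_window(samples[i * win_len:(i + 1) * win_len])
        for i in range(n_windows)
    ])
```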
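Likewise, the SVM/RF/DT comparison described above can be sketched with standard scikit-learn classifiers. The snippet below assumes a feature matrix `X` (one aggregated feature vector per gesture) and integer word labels `y`; the train/test split and default hyperparameters are assumptions, not the paper's experimental setup.

```python
from time import perf_counter

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier


def compare_classifiers(X, y):
    """Train SVM, RF, and DT on aggregated features and report the
    held-out accuracy and training time of each classifier."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    for name, clf in [("SVM", SVC()),
                      ("RF", RandomForestClassifier()),
                      ("DT", DecisionTreeClassifier())]:
        start = perf_counter()
        clf.fit(X_tr, y_tr)
        elapsed = perf_counter() - start
        acc = clf.score(X_te, y_te)
        print(f"{name}: accuracy={acc:.3f}, train_time={elapsed:.3f}s")
```

Timing `fit` on both aggregated and raw feature matrices is one simple way to reproduce the kind of processing-time comparison the abstract reports.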