HAND GESTURE RECOGNITION USING CONVOLUTIONAL NEURAL NETWORKS

Authors

  • Mr. K. Obulesh
  • P. Shraddha Sree
  • U. Anusha
  • V. Rishitha

Keywords:

convolutional neural network (CNN), spatial-temporal characteristics, Sign Language Recognition (SLR)

Abstract

The goal of Sign Language Recognition (SLR) is to enable deaf-mute individuals to communicate with the general public by translating sign language into text or voice. Despite its broad societal impact, the task is very difficult because hand motions are intricate and highly varied. Current state-of-the-art SLR approaches build classification models on manually crafted features that characterize motion in sign language; however, reliable features that adapt to the wide variety of hand movements are hard to design. To tackle this issue, we present a new convolutional neural network (CNN) that automatically, without human intervention, extracts discriminative spatial-temporal features from unprocessed video streams. To improve performance, the CNN is trained on multi-channel video inputs that include color information, depth cues, and the locations of the body's joints. On a real-world dataset acquired with Microsoft Kinect, the proposed model outperforms conventional methods that rely on manually created features.
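The core operation the abstract describes, extracting spatial-temporal features from a multi-channel clip, amounts to a 3D convolution that slides a filter over time as well as height and width. The sketch below is a minimal, naive illustration of that idea, not the authors' implementation; the channel layout (3 color + 1 depth + 1 joint-position map = 5 channels) and all sizes are hypothetical.

```python
import numpy as np

def conv3d_valid(video, kernel):
    """Naive 'valid' 3D convolution over a multi-channel clip.

    video:  array of shape (C, T, H, W) -- channels, frames, height, width
    kernel: array of shape (C, kT, kH, kW) -- one spatio-temporal filter
    returns: feature map of shape (T-kT+1, H-kH+1, W-kW+1)
    """
    C, T, H, W = video.shape
    Ck, kT, kH, kW = kernel.shape
    assert C == Ck, "kernel must cover all input channels"
    out = np.zeros((T - kT + 1, H - kH + 1, W - kW + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # Dot product of the filter with a (C, kT, kH, kW) patch:
                patch = video[:, t:t + kT, i:i + kH, j:j + kW]
                out[t, i, j] = np.sum(patch * kernel)
    return out

# Hypothetical input: 5 channels (color + depth + joint map),
# 8 frames of 16x16 pixels, and one 3x3x3 spatio-temporal filter.
clip = np.random.rand(5, 8, 16, 16)
kernel = np.random.rand(5, 3, 3, 3)
features = conv3d_valid(clip, kernel)
print(features.shape)  # (6, 14, 14)
```

In a real network many such filters are learned jointly by backpropagation, and the resulting feature maps are pooled and stacked into deeper layers; the point here is only that the same filter responds to both motion (across frames) and appearance (within a frame).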


Published

04-07-2024

How to Cite

HAND GESTURE USING CONVOLUTIONAL NEURAL NETWORKS. (2024). International Journal of Mechanical Engineering Research and Technology, 16(9), 157-164. https://ijmert.com/index.php/ijmert/article/view/252