Have you ever struggled to communicate with someone who uses American Sign Language (ASL)?
Well, struggle no more! A third-year engineering student from India, Priyanjali Gupta, has created an AI model that can detect ASL signs and translate them into English in real time.
In Action: The AI-Based ASL Detector
In a demo video shared on her LinkedIn profile, Gupta showcased the AI-based ASL Detector in action. The model can currently detect and translate a few words and phrases such as Hello, Please, Thanks, I Love You, Yes, and No.
The Science Behind It
Gupta developed the model by utilising the TensorFlow Object Detection API and transfer learning via the pre-trained ssd_mobilenet model. This means she was able to repurpose an existing, already-trained network rather than build one from scratch. However, it is worth noting that the AI model does not actually translate ASL into English. Instead, it detects a sign in the camera feed and classifies it against the fixed set of signs it was trained to recognise, picking the closest match.
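To make the transfer-learning idea concrete, here is a toy sketch of the general technique: a "pre-trained" feature extractor is kept frozen, and only a small new classifier head is trained on the target classes (stand-ins for the six signs). This is not Gupta's actual code, which uses the TensorFlow Object Detection API with an ssd_mobilenet checkpoint; the frozen extractor here is just a fixed random projection over synthetic data, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

SIGNS = ["Hello", "Please", "Thanks", "I Love You", "Yes", "No"]
N_CLASSES = len(SIGNS)
INPUT_DIM = 64   # size of a fake "image" vector
FEAT_DIM = 32    # size of the frozen extractor's output

# Frozen "pre-trained" feature extractor: a fixed projection that is
# never updated. In the real model this role is played by the SSD
# MobileNet backbone, trained in advance on a large dataset.
W_frozen = rng.normal(size=(INPUT_DIM, FEAT_DIM))

def extract_features(x):
    return np.tanh(x @ W_frozen)

# Synthetic training data: one cluster of inputs per sign class.
centers = rng.normal(size=(N_CLASSES, INPUT_DIM))
X = np.concatenate([centers[c] + 0.1 * rng.normal(size=(50, INPUT_DIM))
                    for c in range(N_CLASSES)])
y = np.repeat(np.arange(N_CLASSES), 50)

# New classifier head: the ONLY part that gets trained.
W_head = np.zeros((FEAT_DIM, N_CLASSES))
feats = extract_features(X)
onehot = np.eye(N_CLASSES)[y]

for _ in range(200):  # plain gradient descent on softmax cross-entropy
    logits = feats @ W_head
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    W_head -= 0.1 * feats.T @ (probs - onehot) / len(X)

pred = (extract_features(X) @ W_head).argmax(axis=1)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The key point the sketch captures is why transfer learning is attractive for a student project: only the small head (`W_head`) needs training data and compute, while the hard feature-learning work was done by whoever trained the backbone, which is also the substance of the criticism discussed below.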
Gupta’s inspiration for creating such an AI model came from her mother nagging her “to do something” after she joined her engineering course at VIT. She also credited a 2020 video by YouTuber and data scientist Nicholas Renotte, which details the development of an AI-based ASL detector.
A Work in Progress
Although Gupta’s post on LinkedIn garnered numerous positive responses and appreciation from the community, an AI-vision engineer pointed out that the transfer learning method used in her model is “trained by other experts” and that it is the “easiest thing to do in AI.” Gupta acknowledged this and wrote that building “a deep learning model solely for sign detection is a really hard problem but not impossible.”
“Currently I’m just an amateur student but I am learning and I believe sooner or later our open-source community, which is much more experienced and learned than me, will find a solution and maybe we can have deep learning models solely for sign languages,” she further added.
Check out Priyanjali’s GitHub page to know more about the AI model and access the relevant resources of the project. Let us know your thoughts about Gupta’s ASL Detector in the comments below!
Q: What is the AI-based ASL Detector?
A: The AI-based ASL Detector is an AI model created by Indian student Priyanjali Gupta that can detect American Sign Language signs and translate them into English in real time.
Q: Who created the AI-based ASL Detector?
A: Priyanjali Gupta, a third-year engineering student from India, created the AI-based ASL Detector.
Q: How does the AI-based ASL Detector work?
A: The model was built with the TensorFlow Object Detection API, using transfer learning from the pre-trained ssd_mobilenet model. It detects a sign in the camera feed and classifies it against the fixed set of signs it was trained on, selecting the closest match.
Q: What are the limitations of the AI-based ASL Detector?
A: The AI-based ASL Detector currently supports only a few words and phrases, and it does not truly translate ASL into English; it classifies individual signs against a fixed vocabulary.
Q: What inspired Priyanjali Gupta to create the AI-based ASL Detector?
A: Gupta’s mother inspired her to “do something” after she joined her engineering course at VIT. She also credited a 2020 video by YouTuber and data scientist Nicholas Renotte, which details the development of an AI-based ASL detector.
Q: Can I access the resources of the AI-based ASL Detector project?
A: Yes, you can check out Priyanjali’s GitHub page to know more about the AI model and access the relevant resources of the project.