
American Sign Language Digital Interpreter for Mobile Deployment


The goal of this project is to help bring American Sign Language (ASL) users and non-ASL speakers together through a mobile translation application. Unlike traditional NLP, ASL communication requires visual observation of the signer. The first objective of this project is to use state-of-the-art methods to develop an appropriate architecture and technique for mapping recorded ASL video to English sentences, with attention to the constraints of mobile deployment. The second objective is to work with the ASL community on dataset aggregation, employing efficient learning methods such as active learning.
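As one concrete illustration of the video-to-sentence mapping, the sketch below shows a possible lightweight approach: per-frame hand and body keypoints (assumed to be produced by an upstream pose-estimation step) are fed to a small GRU encoder-decoder that emits English token IDs. The architecture, dimensions, and vocabulary size are illustrative assumptions rather than the project's confirmed design; a compact recurrent model is used here only because small parameter counts ease mobile deployment.

# Minimal sketch (not the project's actual model): map per-frame keypoint vectors
# from an ASL video to a sequence of English token IDs. All dimensions, the
# vocabulary size, and the keypoint features are illustrative assumptions.
import torch
import torch.nn as nn

class KeypointToTextModel(nn.Module):
    def __init__(self, keypoint_dim=126, hidden_dim=256, vocab_size=8000):
        super().__init__()
        # A small GRU encoder keeps the parameter count modest for mobile use.
        self.encoder = nn.GRU(keypoint_dim, hidden_dim, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, keypoints, target_tokens):
        # keypoints: (batch, frames, keypoint_dim); target_tokens: (batch, tokens)
        _, h = self.encoder(keypoints)      # summarize the whole clip
        dec_in = self.embed(target_tokens)  # teacher forcing at training time
        dec_out, _ = self.decoder(dec_in, h)
        return self.out(dec_out)            # logits: (batch, tokens, vocab_size)

# Example forward pass on dummy data: 2 clips, 60 frames, 126 keypoint values per frame.
model = KeypointToTextModel()
logits = model(torch.randn(2, 60, 126), torch.randint(0, 8000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 8000])

A model of this size could then be exported to a mobile-friendly format (for example via TorchScript with quantization), though the project's actual deployment path is not specified here.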

[Figure: ASL project concept, illustrating the ASL translation problem]

Research Objectives

  • Develop a modular architecture that uses refined feature selection for interpretation, and scale the solution for mobile deployment
  • Develop a community-informed dataset and leverage active learning for NLP (a minimal sampling sketch follows this list)
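To illustrate the active learning component, the sketch below implements pool-based uncertainty sampling: the current model's predictions over unlabeled clips are scored by entropy, and the least confident clips are routed to community annotators first. The scoring function, clip identifiers, and annotation budget are illustrative assumptions rather than the project's documented pipeline.

# Hedged sketch of pool-based active learning with entropy-based uncertainty sampling.
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Entropy of a predicted class distribution; higher means less confident."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum())

def select_clips_for_annotation(clip_ids, predicted_probs, budget=50):
    """Rank unlabeled clips by model uncertainty and return the top `budget` IDs."""
    scores = [predictive_entropy(p) for p in predicted_probs]
    ranked = sorted(zip(clip_ids, scores), key=lambda x: x[1], reverse=True)
    return [clip_id for clip_id, _ in ranked[:budget]]

# Example: 3 clips with hypothetical sign-class probabilities from the current model.
probs = [np.array([0.9, 0.05, 0.05]),   # confident -> low annotation priority
         np.array([0.4, 0.35, 0.25]),   # uncertain -> high annotation priority
         np.array([0.6, 0.3, 0.1])]
print(select_clips_for_annotation(["clip_a", "clip_b", "clip_c"], probs, budget=2))
# ['clip_b', 'clip_c']

In practice, the selected clips would be labeled by community annotators and added to the training set, after which the model is retrained and the loop repeats.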

Current Students/Researchers