Current research projects in the ARM lab.
Improving Robotic Assistant Dexterity
As robotic assistants operate in complex, unstructured, human-centered environments, it becomes essential that robots:
- Have access to as much useful data as possible as they interact with the environment
- Be able to leverage that data using intelligent models of the environment
Challenges for long-term deployable robotic systems include:
- Being able to handle uncertainty in a known environment
- Being able to reason about the properties of a new object/element in their environment
- Recovering from failure to complete a task/objective
By equipping robots to handle these challenges, we can develop robotic platforms that are capable of sustained, reliable, long-term service deployment.
Human-Robot Cooperative Transport
As robots become more capable assistants, it is important that they be able to collaborate using implicit communication and situational awareness. Human-robot cooperative transport exemplifies a scenario where the robot serves as a valuable teammate but it is untenable for the human to issue constant explicit commands. Instead, the robot must observe both the human and the environment and predict where the human is trying to go. In a centralized, all-robot system, a single governing controller would specify how every agent should move to transport the object to the goal location while 'wasting' as little energy as possible compressing or stretching the object during transport. This compressing or stretching can be characterized as interaction forces (forces that do not contribute to motion), and minimizing them is often used as a metric for efficient transport. The goal, therefore, is to have the robot leverage its knowledge of the human and environment to transport efficiently.
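One common way to quantify these interaction forces in the two-agent, rigid-object case is to split the applied forces into a net component (which moves the object) and an internal component (which only squeezes or stretches it). The sketch below illustrates that decomposition; the function name, sign convention, and 2D setup are illustrative assumptions, not this project's implementation.

```python
import numpy as np

def interaction_force(f1, f2, axis):
    """Signed magnitude of the internal (squeeze/stretch) force between
    two agents applying forces f1 and f2 to a shared rigid object.

    Illustrative decomposition: the net force (f1 + f2) drives motion,
    while the internal part ((f1 - f2) / 2), projected onto the axis
    joining the grasp points, does no work moving the object.
    """
    u = np.asarray(axis, dtype=float)
    u = u / np.linalg.norm(u)  # unit vector between grasp points
    f_int = (np.asarray(f1, dtype=float) - np.asarray(f2, dtype=float)) / 2.0
    return float(np.dot(f_int, u))  # positive = squeeze, negative = stretch

# Both agents push toward each other with 5 N along x:
# net force is zero (no motion), yet 5 N is 'wasted' squeezing the object.
print(interaction_force([5.0, 0.0], [-5.0, 0.0], axis=[1.0, 0.0]))  # 5.0
```

An efficient transport controller would drive this internal term toward zero while still producing the net force needed to reach the goal.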
Collaborative Robotic Assistant
As robots become more autonomous, they may functionally have the ability to become good collaborators. However, when given a new, complex task, it may still take the robot a while to understand what to do. And when a task requires teaming, how does the robot determine its role with considerations for team performance? This project begins to answer both of these questions through considerations for efficient learning from demonstration and efficient collaboration through teammate prediction.
Intelligent Prosthetic Arm (IPArm)
Prosthetic arm users often struggle to control their prostheses, and accurate control is a skill developed over a long period, if it is ever achieved. Can a robotic prosthetic leverage situational awareness and prediction to decrease the learning curve and improve accuracy?
Smart Belt: Human Motion Prediction from Wearable Sensor
Dynamic bipedal walking is challenging to model and to replicate on robotic platforms. Most exoskeletons use body sensors (surface EMGs, subdermal electrodes, etc.) to predict leg motion, but observation of the environment can reduce the need for invasive sensors and provide a longer prediction horizon.
ASL Digital Interpreter for Mobile Deployment
The goal of this project is to help bring American Sign Language (ASL) speakers and non-ASL speakers together through a mobile translation application. Unlike traditional NLP, ASL communication requires visual observation of the signer. The first objective is to use state-of-the-art methods to develop an appropriate architecture and technique for mapping recorded ASL video to English sentences, with considerations for what is required for mobile deployment. The second objective is to work with the ASL community on dataset aggregation, along with employing methods of efficient learning (through techniques like active learning for ML).
Leveraging Human Intent for Shared Autonomy
As vehicles become more autonomous, it is imperative that mutual transitions of control between the vehicle and driver be both safe and smooth. The question is: "When is a control transition allowable?" If control can be transitioned at a non-zero speed, then the handover must be shown to be safe.