Robotic Intent Estimation, Wearables, Teleoperation and Learning from Demonstration

ARMLab performs research in robots' estimation of teammate intent, wearable robotic platforms, and efficient robot teleoperation. Projects include improving upper-limb prostheses, improving teleoperation and learning from demonstration by leveraging advances in augmented reality, and developing a wearable sensor capable of predicting the wearer's path for fall prevention and exoskeleton enhancement.

Research Topics


Intelligent Prosthetic Arm (IPArm)

Description:

One key challenge in upper-limb prosthetics is controlling the many degrees of freedom required for dexterous tasks, especially as functionality approaches that of a natural arm. Limited user input, combined with complex control demands, calls for a solution that interprets human intent and enables the prosthesis to perform tasks autonomously. The IPArm project aims to provide prosthetic and assistive arms with autonomy that seamlessly integrates with the user's intentions, allowing for precise, dexterous movements.

Our research explores whether robotic prosthetics and assistive arms can use situational awareness and predictive modeling to reduce the learning curve and improve accuracy. We investigate how inputs like EMG signals and gaze tracking can determine desired actions and how natural arm motions and task affordances can confirm user intent.
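As an illustration of the intent-estimation idea described above, the sketch below fuses per-action probabilities from two cues (an EMG classifier and a gaze-based cue) with naive Bayes fusion under a uniform prior. The action names and probability values are hypothetical, and this is a minimal illustrative sketch, not the project's actual model.

```python
import numpy as np

def fuse_intent(emg_probs, gaze_probs):
    """Combine per-action probabilities from EMG and gaze cues.

    Naive Bayes fusion under an assumed uniform prior: the two cues
    are treated as conditionally independent given the intended action,
    so their probabilities are multiplied and renormalized.
    """
    combined = np.asarray(emg_probs) * np.asarray(gaze_probs)
    return combined / combined.sum()

# Hypothetical action set and cue outputs (illustrative numbers only).
actions = ["power grasp", "pinch grasp", "release"]
emg = [0.5, 0.3, 0.2]    # EMG classifier output
gaze = [0.2, 0.7, 0.1]   # gaze/object-affordance cue

posterior = fuse_intent(emg, gaze)
best = actions[int(np.argmax(posterior))]   # most likely intended action
```

Here the gaze cue overrides the weaker EMG preference, selecting the pinch grasp; in practice either cue could be replaced by any probabilistic intent estimator.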

Research Objectives:

  1. Reduce the time required for users to achieve full dexterity through intent estimation.
  2. Enhance overall dexterity by integrating situational awareness for complex tasks.

This project seeks to make upper-limb prosthetics and assistive arms more intuitive and effective, empowering users to perform complex tasks with ease.


Papers and Videos:


Efficient Teleoperation and Learning from Demonstration

Description:

Human demonstrators often have expert knowledge of how to perform a manipulation task, but transferring this knowledge to a robot can be tedious, especially for demonstrators who are not accustomed to working with robots. In this work, we leverage Augmented Reality (AR) to provide efficient methods for teaching a robot a new task and for performing teleoperation.


Papers and Videos:


Wearable Motion Prediction: Path Prediction, Fall Prevention and Exoskeleton Enhancement

Description:

Humans who experience challenges with balance, or who wear exoskeletons, may have their locomotion disrupted by obstacles and by variability in their path. A wearable sensor that observes past motion and surrounding obstacles can use data-driven approaches to estimate the path(s) the wearer is likely to take and to anticipate challenges in terrain or balance the wearer may experience.

In this project we developed a wearable robotic system, consisting of multiple cameras and onboard computation, that observes the wearer's past motion and the surrounding obstacles and terrain, and uses this input to predict possible paths the wearer may take and to anticipate challenges to their locomotion. The system can be interfaced with an exoskeleton to provide anticipatory control conditioned on the surrounding terrain and objects that may influence expected motion.
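To make the path-prediction idea concrete, the sketch below extrapolates a short future path from recent foot or body positions using a constant-velocity baseline. This is an illustrative stand-in: the project's data-driven predictor would additionally condition on terrain and obstacles, which this toy function does not.

```python
import numpy as np

def predict_path(past_xy, n_future):
    """Extrapolate a walking path from recent 2D positions.

    Constant-velocity baseline: compute the mean displacement per
    observation step and roll it forward n_future steps from the
    last observed position.
    """
    past = np.asarray(past_xy, dtype=float)
    v = (past[-1] - past[0]) / (len(past) - 1)   # mean step displacement
    steps = np.arange(1, n_future + 1)[:, None]  # column of step counts
    return past[-1] + steps * v                  # (n_future, 2) predicted points

# Illustrative input: steady walking along the x-axis, 0.1 m per step.
past = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (0.3, 0.0)]
future = predict_path(past, n_future=3)
```

A learned sequence model would replace `predict_path` while keeping the same interface: past positions in, a distribution over future paths out.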


Papers and Videos: