Research

Current research projects in the ARMLab. 

The broad research objective of the Assistive Robotics and Manipulation Lab is to develop technology that improves everyday life by anticipating and acting on the needs of human counterparts. We specialize in developing intelligent robotic systems that can perceive and model environments, humans, and tasks, and that leverage these models to predict how tasks will unfold and to understand their own assistive role. The research divides into three sub-categories: robotic assistants, connected devices, and intelligent wearables.

We use a combination of tools from dynamical systems analysis, control theory (classical, nonlinear, and robust control), state estimation and prediction, motion planning, vision for robotic autonomy, and machine learning, and the lab focuses heavily on both the analytical and experimental components of assistive technology design. While our application domain is autonomous assistive technology, our primary focus is robotic assistants (mobile manipulators and humanoids), with the goal of deployment for service tasks that may be highly dynamic and require dexterity, situational awareness, and human-robot collaboration.

Improving Robotic Assistant Dexterity

As robotic assistants operate in complex, unstructured, human-centered environments, it becomes essential that robots:

1. Have access to as much useful data as possible as they interact with the environment

2. Be able to leverage that data using intelligent models of the environment 

Challenges for long-term deployable robotic systems include:

- Being able to handle uncertainty in a known environment

- Being able to reason about the properties of a new object/element in their environment 

- Recovering from failure to complete a task/objective

By equipping robots to handle these challenges, we can develop robotic platforms that are capable of sustained, reliable, long-term service deployment.

Human-Robot Cooperative Transport

As robots become more capable assistants, it is important that they be able to collaborate by leveraging implicit communication and situational awareness. This project on human-robot cooperative transport exemplifies a scenario in which the robot serves as a valuable teammate, but it is untenable for the human to issue constant explicit commands. Instead, the robot must be able to observe the human as well as the environment and predict where the human is trying to go. In a centralized system (all robots), a single governing controller would specify how every agent should move to transport the object to the goal location while 'wasting' as little energy as possible compressing or stretching the object during transport. This compressing or stretching can be characterized as interaction forces (forces that do not contribute to motion), and minimizing them is often considered a metric for efficient transport. The goal, therefore, is to have the robot leverage its knowledge of the human and the environment to transport efficiently.
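
As a rough illustration of this decomposition, the sketch below splits two planar grasp forces into a wrench-producing part and an interaction (null-space) part. It is a minimal sketch, assuming a planar rigid object and two point grasps; the function and setup are illustrative, not the lab's implementation.

```python
import numpy as np

def interaction_forces(f1, f2, r1, r2):
    """Split two planar grasp forces into motion-inducing and interaction parts.

    f1, f2 : (2,) forces applied by each agent [N]
    r1, r2 : (2,) grasp positions relative to the object's center of mass [m]

    The grasp matrix G maps stacked contact forces to the object's net
    wrench (Fx, Fy, tau). Forces in the null space of G produce no net
    wrench: they only squeeze or stretch the object, i.e. they are the
    interaction forces that efficient transport should minimize.
    """
    def torque_row(r):
        # planar cross product: r x f = rx*fy - ry*fx
        return np.array([[-r[1], r[0]]])

    G = np.block([
        [np.eye(2), np.eye(2)],            # net-force rows
        [torque_row(r1), torque_row(r2)],  # net-torque row
    ])
    f = np.concatenate([f1, f2])

    motion_part = np.linalg.pinv(G) @ (G @ f)  # minimum-norm wrench-producing part
    interaction_part = f - motion_part         # null-space (squeeze/stretch) part
    return motion_part, interaction_part

# Two agents pushing head-on: the net wrench is zero, so the
# entire input is classified as interaction force.
motion, interaction = interaction_forces(
    f1=np.array([1.0, 0.0]), f2=np.array([-1.0, 0.0]),
    r1=np.array([-0.5, 0.0]), r2=np.array([0.5, 0.0]),
)
```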

Intelligent Prosthetic Arm (IPArm)

Prosthetic arm users often struggle to control their prosthesis, and accuracy is a skill developed over a long period of time, if it is ever achieved. Can a robotic prosthesis leverage situational awareness and prediction to shorten that learning curve and improve accuracy?
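
One hedged sketch of what such prediction could look like: a recursive Bayesian update over candidate reach targets, scored by how well the hand's velocity aligns with the direction to each target. The likelihood model and parameters here are assumptions for illustration, not the project's method.

```python
import numpy as np

def update_target_belief(belief, hand_pos, hand_vel, targets, beta=5.0):
    """Recursive Bayesian update of which target the user is reaching for.

    belief   : (K,) prior probability over candidate targets
    hand_pos : (3,) current hand/end-effector position
    hand_vel : (3,) current hand velocity
    targets  : (K, 3) candidate target positions
    beta     : rationality parameter (assumed) -- higher means motion is
               expected to point more directly at the intended target

    Likelihood model (an assumption): movement toward a target is more
    likely under that target, scored by the cosine similarity between
    the velocity and the direction to each target.
    """
    to_targets = targets - hand_pos
    dirs = to_targets / (np.linalg.norm(to_targets, axis=1, keepdims=True) + 1e-9)
    v = hand_vel / (np.linalg.norm(hand_vel) + 1e-9)
    alignment = dirs @ v                  # cosine similarity per target
    posterior = belief * np.exp(beta * alignment)
    return posterior / posterior.sum()
```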

Collaborative Robotic Assistant

As robots become more autonomous, they may functionally have the ability to become good collaborators. However, when a task is new and complex, it may still take the robot a while to understand what to do. And when a task requires teaming, how does the robot determine its role with team performance in mind? This project begins to answer both questions through efficient learning from demonstration and efficient collaboration via teammate prediction.
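
A minimal sketch of the role-selection question, assuming the robot can predict a distribution over its teammate's next role: marginalize that prediction and pick the role with the lowest expected team cost. All names and the cost model below are hypothetical.

```python
def choose_role(roles, predict_teammate_role, team_cost):
    """Pick the robot role that minimizes expected team cost.

    roles                 : iterable of candidate robot roles
    predict_teammate_role : () -> dict mapping teammate role -> probability
    team_cost             : (robot_role, teammate_role) -> scalar cost
    """
    teammate_dist = predict_teammate_role()

    def expected_cost(role):
        return sum(p * team_cost(role, t) for t, p in teammate_dist.items())

    return min(roles, key=expected_cost)

# Toy example: duplicating the teammate's role is wasteful,
# so the robot picks the complementary role.
dist = lambda: {"carry_front": 0.7, "carry_back": 0.3}
cost = lambda r, t: 1.0 if r == t else 0.2
role = choose_role(["carry_front", "carry_back"], dist, cost)  # -> "carry_back"
```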

Smart Belt: Human Motion Prediction and Fall Prevention from Wearable Sensor

Dynamic bipedal walking is challenging to model as well as to replicate on robotic platforms. Most exoskeletons use body sensors (surface EMGs, subdermal sensors, etc.) to predict leg motion, but observing the environment can reduce the need for invasive sensors and provide a longer prediction horizon.
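
To make the idea of a longer prediction horizon concrete, here is a minimal kinematic sketch: a pelvis-motion forecast that blends the current velocity (what body sensors alone would extrapolate) with attraction toward an environment-derived goal such as a detected doorway. The model and gains are illustrative assumptions, not the project's predictor.

```python
import numpy as np

def predict_pelvis_path(pos, vel, goal, dt=0.1, horizon_s=2.0, k_goal=0.8):
    """Forecast planar pelvis positions over a horizon.

    pos, vel : (2,) pelvis position [m] and velocity [m/s], e.g. estimated
               from a belt-mounted IMU (an assumption)
    goal     : (2,) goal inferred from environment perception (an assumption)
    Returns (N, 2) predicted positions.
    """
    path = []
    p, v = np.array(pos, float), np.array(vel, float)
    for _ in range(int(horizon_s / dt)):
        to_goal = goal - p
        # desired velocity: current speed, redirected toward the goal
        desired = to_goal / (np.linalg.norm(to_goal) + 1e-9) * np.linalg.norm(v)
        v = (1 - k_goal * dt) * v + (k_goal * dt) * desired  # steer toward goal
        p = p + v * dt
        path.append(p.copy())
    return np.array(path)
```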

Leveraging Human Intent for Shared Autonomy

As vehicles become more autonomous, it is imperative that the mutual transition of control between vehicle and driver be both safe and smooth. The question is: "When is a control transition allowable?" If control can be transitioned at non-zero speed, that handover must still be guaranteed safe.
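
One way to phrase that question in code, purely as a hedged sketch: a gate that permits handover, even at non-zero speed, only while the vehicle state sits inside a conservative safe envelope and the driver is attentive. The state fields and thresholds are assumptions, not the project's criteria.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    speed: float              # m/s
    lane_offset: float        # m from lane center
    time_to_collision: float  # s, from perception (assumption)

def transition_allowable(state, driver_ready,
                         max_speed=25.0, max_offset=0.5, min_ttc=4.0):
    """Gate for handing control from the vehicle to the driver.

    A hypothetical rule set: handover is permitted only when the driver
    is attentive and the state lies inside a conservative safe envelope.
    """
    return (driver_ready
            and state.speed <= max_speed
            and abs(state.lane_offset) <= max_offset
            and state.time_to_collision >= min_ttc)
```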