Human-Robot Cooperative Transport

Project Motivation
As robots become more capable assistants, they must be able to collaborate with people by leveraging implicit communication and situational awareness. Human-robot cooperative transport exemplifies a scenario in which the robot serves as a valuable teammate, but it is untenable for the human to issue constant explicit commands. Instead, the robot must observe both the human and the environment and predict where the human is trying to go. In a centralized, all-robot system, a single governing controller would specify how every agent should move to transport the object to the goal location while 'wasting' as little energy as possible compressing or stretching the object during transport. This compressing or stretching can be characterized as interaction forces (forces that do not contribute to motion), and minimizing them is often considered a metric for efficient transport. The goal, therefore, is to have the robot leverage its knowledge of the human and the environment to transport efficiently.
Research Objectives
Research objectives include:
- Develop a robotic platform capable of modeling a human's intended motion from measured quantities
- Utilize observations of the human and the surrounding environment to predict the desired motion
- Understand how to grasp/re-grasp objects with the human leader for successful placement
Current Students
Related Publications
- Diffusion Co-Policy for Synergistic Human-Robot Collaborative Tasks
- It Takes Two: Learning to Plan for Human-Robot Cooperative Carrying
  - Paper Website: https://sites.google.com/view/cooperative-carrying
- Modeling and Control for Robotic Assistants: Single and Multi-robot Manipulation
- Replay Overshooting: Learning Stochastic Latent Dynamics with the Extended Kalman Filter
Active User Studies
Cooperative table-carrying is a complex task due to the continuous nature of the action and state spaces, the multimodality of strategies, and the need for instantaneous adaptation to other agents. In this work, we present a method for predicting realistic motion plans for cooperative human-robot teams on this task. Using a Variational Recurrent Neural Network (VRNN) to model the variation in the trajectory of a human-robot team across time, we are able to capture the distribution over the team's future states while leveraging information from interaction history. The key to our approach is leveraging human demonstration data to generate trajectories that synergize well with humans during test time in a receding horizon fashion. Comparison between a baseline sampling-based planner, RRT (Rapidly-exploring Random Trees), and the VRNN planner in centralized planning shows that the VRNN generates motion more similar to the distribution of human-human demonstrations than the RRT. Results from a human-in-the-loop user study show that the VRNN planner outperforms decentralized RRT on task-related metrics and is significantly more likely to be perceived as human than the RRT planner. Finally, we demonstrate the VRNN planner on a real robot paired with a human teleoperating another robot.
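The receding-horizon idea above can be sketched as a replanning loop: at each step, sample candidate future trajectories from a generative model, pick the best one under a cost, execute only its first step, then replan. The sketch below is illustrative only; `sample_trajectories` is a hypothetical random-walk stand-in for the learned VRNN sampler, and the goal-distance cost is a placeholder for the paper's actual planning objective.

```python
import numpy as np

def sample_trajectories(history, horizon, n_samples, rng):
    """Hypothetical stand-in for the learned VRNN: draw candidate future
    state trajectories conditioned on interaction history. Here we just
    roll random walks forward from the most recent state."""
    last = history[-1]
    steps = rng.normal(scale=0.1, size=(n_samples, horizon, last.shape[0]))
    return last + np.cumsum(steps, axis=1)

def receding_horizon_plan(state, goal, steps=20, horizon=10, n_samples=32, seed=0):
    """Replan at every step: sample futures, keep the trajectory whose
    endpoint lands closest to the goal, execute only its first state."""
    rng = np.random.default_rng(seed)
    history = [np.asarray(state, dtype=float)]
    for _ in range(steps):
        candidates = sample_trajectories(np.array(history), horizon, n_samples, rng)
        costs = np.linalg.norm(candidates[:, -1] - goal, axis=1)  # placeholder cost
        best = candidates[np.argmin(costs)]
        history.append(best[0])  # commit to the first step only, then replan
    return np.array(history)

path = receding_horizon_plan(np.zeros(2), np.array([1.0, 1.0]))
```

Executing only the first step of each sampled plan is what lets the planner adapt to the human partner: the next replanning round conditions on the newly observed interaction history rather than a stale open-loop plan.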
Preliminary Work: The intended pose of the human leader is estimated implicitly by observing the human's head direction, the wrench applied through the rigid carried object, the human's steps, the twist of the carried object, and the surrounding obstacles. Each of these quantities can be intuitively related to the expected pose through an integrator (e.g., twist integrates to pose; wrench integrated twice predicts pose), with surrounding obstacles acting as constraints. However, combining all of these terms is non-trivial and is the main contribution of this work.
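The two integrator relationships mentioned above can be made concrete with a minimal sketch, under simplifying assumptions not taken from the project itself: a planar pose (x, y, theta) advanced by a constant twist in the world frame, and a point-mass double integration of a constant force via semi-implicit Euler steps. Both function names and the constant-input assumption are illustrative.

```python
import numpy as np

def predict_pose_from_twist(pose, twist, dt, horizon):
    """Single integration: a constant planar twist (vx, vy, omega)
    advances the pose (x, y, theta) over `horizon` steps of length dt."""
    p = np.array(pose, dtype=float)
    for _ in range(horizon):
        p += np.asarray(twist, dtype=float) * dt
    return p

def predict_pose_from_wrench(pos, vel, force, mass, dt, horizon):
    """Double integration: force -> acceleration -> velocity -> position,
    using semi-implicit Euler (update velocity first, then position)."""
    p = np.array(pos, dtype=float)
    v = np.array(vel, dtype=float)
    a = np.asarray(force, dtype=float) / mass
    for _ in range(horizon):
        v += a * dt
        p += v * dt
    return p
```

Each observed quantity thus yields its own rough pose prediction; the hard part, as noted above, is fusing these heterogeneous predictions with obstacle constraints into a single estimate of where the human intends to go.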