Collaborative Robotic Assistant
As robots become more autonomous, they gain the ability to become good collaborators. However, when faced with a new, complex task, it may still take the robot a while to understand what to do. And when a task requires teaming, how does the robot determine its role with team performance in mind? This project begins to answer both questions through efficient learning from demonstration and efficient collaboration through teammate prediction.
Applications of this work range from first responders who may be at risk when performing their duties in unstable environments, to employees in a factory with robotic teammates, to a home-assisting robot. In the case of first responders, robotic collaborators can reduce the number of at-risk personnel while remaining capable of performing complex collaborative tasks.
The questions we plan to answer in this work are:
- Can we improve the efficiency of learning from demonstration with mixed reality to improve feature identification/selection and active learning (querying the demonstrator)?
- Given a collaborative task, can the robot leverage behavioral cloning to predict actions of teammates in order to take action that will improve overall team efficiency?
Efficient Learning from Demonstration
Over the past few decades the capabilities of robots have increased dramatically: there has been steady progress from automation (performing well-conditioned, specified tasks with precision and accuracy) to autonomy (performing tasks in unstructured environments). Key attributes of an autonomous system are the ability to adequately perceive the environment and to strategically plan actions that move the system toward the desired state. As a task becomes less structured, many traditional assumptions become invalid or limit the robot's ability to exhibit the target behavior. Humans are quite capable of performing dexterous tasks such as picking and placing objects in cluttered environments; they rely heavily on tactile and visual perception in these tasks and plan for both quasi-static and dynamic systems. Learning from demonstration (LfD) is the process of observing a skill a human has mastered and transferring it to a robot. Traditionally these approaches require many training examples before the robot can ascertain the underlying principles of task performance. One open question is how LfD can be made more efficient while ensuring the learned skills generalize to a wider set of tasks than those demonstrated. Augmented reality (AR) provides a unique opportunity both to observe the human's actions from a first-person perspective and to query and communicate with the demonstrator, ensuring that the robotic observer captures both the relevant state and efficient planning approaches.
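The querying idea above can be sketched in a few lines. This is a toy illustration, not the project's method: the robot imitates the nearest stored demonstration and queries the human (e.g. through the AR interface) whenever the current state is far from anything it has seen, using distance as a crude uncertainty proxy. The function names, state representation, and threshold are all hypothetical.

```python
import math

QUERY_THRESHOLD = 0.5  # assumed distance beyond which the robot asks for help

def should_query(state, demo_states, threshold=QUERY_THRESHOLD):
    """Return True when no stored demonstration is near the current state."""
    if not demo_states:
        return True
    return min(math.dist(state, s) for s in demo_states) > threshold

def act_or_query(state, demo_states, demo_actions, demonstrator):
    """Imitate the nearest demonstration, or query the human demonstrator."""
    if should_query(state, demo_states):
        action = demonstrator(state)          # label supplied by the human
        demo_states.append(tuple(state))      # grow the demonstration set
        demo_actions.append(action)
        return action, True
    nearest = min(range(len(demo_states)),
                  key=lambda i: math.dist(state, demo_states[i]))
    return demo_actions[nearest], False
```

A real system would replace the distance heuristic with a learned model's uncertainty, but the control flow (act when confident, query when not) is the core of active learning from demonstration.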
Efficient Collaboration through Teammate Prediction
When the desired task has been communicated to the robot, the robot can serve as a collaborator working alongside the human. For long, sequential tasks the human may take sub-optimal actions; if the robot can learn the human's behavior, it can take strategic actions that improve the overall efficiency of the human-robot team.
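A minimal sketch of this prediction-then-planning loop, assuming a toy frequency-based clone of the teammate and a hypothetical team-reward function (all names here are illustrative, not the project's implementation):

```python
from collections import Counter, defaultdict

class TeammateModel:
    """Behavioral cloning in miniature: count observed state -> action pairs."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, state, action):
        self.counts[state][action] += 1

    def predict(self, state, default="wait"):
        seen = self.counts.get(state)
        return seen.most_common(1)[0][0] if seen else default

def best_robot_action(state, model, robot_actions, team_reward):
    """Pick the robot action maximizing reward under the predicted human action."""
    predicted = model.predict(state)
    return max(robot_actions, key=lambda a: team_reward(state, a, predicted))
```

In practice the frequency model would be a learned policy (e.g. a neural network trained on observed trajectories), but the structure is the same: clone each teammate, then choose the robot's action against the predicted joint behavior.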
Research objectives include:
- Increasing the efficiency of Learning from Demonstration by
  - making feature selection intuitive (easy for a novice to use)
  - making active learning intuitive
- Making robotic collaborators more efficient by modeling all other teammates (behavioral cloning) and using these models to determine which actions will increase the efficiency of the entire team.