Autonomous and anthropomorphic robots are poised to play a critical role in manufacturing, healthcare and the service industry in the near future. However, for this vision to become a reality, robots need to efficiently communicate and physically interact with their human partners. Rather than traditional remote controls and programming languages, adaptive and transparent techniques for human-robot collaboration are needed. In particular, robots may need to interpret implicit behavioral cues or explicit instructions and, in turn, generate appropriate responses. In this talk, I will present ongoing work that leverages machine learning (ML), natural language processing and virtual reality to create different modalities through which humans and machines can engage in effortless and natural interactions. To this end, I will describe Bayesian Interaction Primitives, an approach for motor skill learning and spatio-temporal modeling in physical human-robot collaboration tasks. Further, I will discuss our recent work on language-conditioned imitation learning and self-supervised learning in interactive tasks. The talk will also cover techniques that enable robots to communicate information back to the human partner via mixed reality projections. To demonstrate these techniques, I will present applications in prosthetics, social robotics, and collaborative assembly.
This lecture satisfies requirements for CSCI 591: Research Colloquium.