People tend to overtrust sophisticated computing devices, including robotic systems. As these systems become more fully interactive with humans during the performance of day-to-day activities, the role of bias in these human-robot interaction scenarios must be more carefully investigated. Bias is a feature of human life that is intertwined with, or used interchangeably with, many different names and labels: stereotypes, prejudice, and implicit or subconsciously held beliefs. In the digital age, this bias is often encoded in, and can manifest itself through, AI algorithms from which humans then take guidance, resulting in the phenomenon of excessive trust. Trust here conveys the idea that when interacting with intelligent systems, humans tend to exhibit behaviors similar to those they exhibit when interacting with other humans; the concern is therefore that people may under-appreciate or misunderstand the risk of handing over decisions to an intelligent agent. Bias compounds this risk of trust, or overtrust, because these systems learn by mimicking our own thinking processes and thereby inherit our own implicit biases. Consequently, the propensity for trust and the potential for bias may directly affect the overall quality of the interaction between humans and machines, whether that interaction occurs in healthcare, job placement, or other high-impact life scenarios. In this talk, we will discuss this phenomenon of intertwined trust and bias through the lens of intelligent systems that interact with people in scenarios realizable in the near term.
This lecture satisfies requirements for CSCI 591: Research Colloquium.