Morality and Social Trust in Autonomous Robots

Organizers: Morteza Lahijanian, Mária Svoreňová, Nisar Ahmed, Patrick Lin, Marta Kwiatkowska

Website: http://qav.cs.ox.ac.uk/autonomy_morality_trust/

Robots are becoming members of our society. Complex algorithms have made robots increasingly sophisticated machines with rising levels of autonomy, enabling them to leave behind their traditional workplaces in factories and enter a society with intricate social rules, relationships, and expectations. Driverless cars, home assistive robots, and unmanned aerial vehicles are just a few examples.

As such systems become more involved in our daily lives, their decisions affect us more directly, and we instinctively expect robots to behave morally and make ethical decisions. For instance, we expect a firefighter robot to follow ethical principles when faced with a choice of saving one life over another in a rescue mission, and we expect an eldercare robot to take a moral stance when its owner's instructions conflict with the interests of others. Such expectations give rise to the notion of trust in human-robot relationships and to questions such as "How can I trust a driverless car to take my child to school?"

To design algorithms that generate morally aware, ethical decisions, and hence to create trustworthy robots, we need to understand the conceptual theory of morality in machine autonomy, in addition to understanding, formalizing, and expressing trust itself. This is a tremendously challenging (yet necessary) task spanning philosophy, sociology, psychology, cognitive reasoning, logic, and computation. In this workshop, we aim to continue the discussions initiated in our RSS 2016 workshop on "Social Trust in Autonomous Robots," with the additional theme of ethics and morality, shedding light on these multifaceted notions from various perspectives through a series of talks and panel discussions.