Robots are already being used in warehouses, factories, supermarkets, homes, hazardous sites, and other settings. While many issues of stereotyping, disparate impact, and other harms of AI have recently been brought to the surface, it is not yet clear how these will manifest in robots.
Specifically, there is still a lack of understanding of the values embedded in robot models, algorithms, and benchmarks. These could be biasing research and development directions, shaping the impact of robotics on different social groups, and determining whose goals the technology privileges.
The goal of this project is thus to: 1) critically analyse robot models, model-based algorithms, benchmarks, and tasks to identify the social values, categories, and preferences they assume; 2) map out potential issues of disparate safety and disparate impact across social groups; and 3) design methods for promoting safety and ensuring that robot systems benefit all social groups, with a special focus on marginalised and vulnerable groups.
For objectives 1 and 2, the project will use a mix of critical literature review and qualitative studies, together with safety/ergonomics simulation and quantitative analysis of existing methods. For objective 3, it will explore the use of Responsible Research and Innovation methodologies (e.g. participatory studies) alongside technical tools such as design optimisation and robust optimisation. The project falls within the scope of both robotics and AI ethics.