Would you trust a robot surgeon? What about a robot pilot, shop assistant or emergency responder? Would you trust them if they had the ability to adapt and change how they functioned? What would it take to make them trustworthy? These are some of the questions driving a team of social scientists, ethicists, computer scientists and engineers at the University of Bristol.
Funded by a £3m grant from UK Research and Innovation (UKRI), the team are pooling their expertise to explore how autonomous systems - decision-making machines that act independently of a human controller - could function in a safe, secure and resilient manner. November sees them start a three-and-a-half-year project that will focus on the processes used to design and develop the evolving functionality of autonomous systems.
Ultimately, the findings could influence the development of technologies designed to assist modern life, from boosting industrial productivity, through emergency response systems, to robotic surgery.
Professor Kerstin Eder, Head of the Trustworthy Systems Laboratory and lead of the research theme on Verification and Validation for Safety in Robots at the Bristol Robotics Lab, is one of the six researchers involved. She said: “This kind of technology is already all around us, from traction control systems in cars to the AI assistants in mobile phones and computers. Some of these systems are very predictable and will only do things they are programmed to do, but this means they cannot adapt to a changing environment. Others are able to adapt their function, responding in real time to changes in the environment or the needs of the user and going beyond their initial setup. This can make them more useful, but also less predictable.”
This raises important questions about safety, responsibility and trust, as Dr Jonathan Ives, Deputy Director of Bristol’s Centre for Ethics in Medicine, another member of the team, explained: “We learn to trust technology when it is predictable. But it may be difficult to learn to trust technology whose function is changing. How can we come to trust a system that is unpredictable by design?”
The UKRI funding is part of a nationwide project involving six separate groups connected to a central hub. Each of the six groups, or ‘nodes’, will explore a different facet of autonomous systems: trust, resilience, security, functionality, verifiability, and governance and regulation.
The Bristol team will focus on functionality: creating processes to develop technologies with the ability to adapt their functionality to real-world conditions. The project will examine three technologies that adapt in fundamentally different ways: swarm systems (multiple robots working collectively to solve a problem, rather like a swarm of ants or bees), soft robotics (flexible, compliant components akin to biological organisms) and unmanned air vehicles (UAVs, otherwise known as drones). Looking at such a diverse range of technological challenges will give the findings wide-ranging implications and applications.
Dr Shane Windsor, who leads the project and is an expert in bio-inspired flight dynamics and control from the Department of Aerospace Engineering, said: “With conventional systems, once they leave the factory, we know what they are going to do because they have a set specification and we have set standards they have to meet. For autonomous systems, with the ability to adapt their functionality in response to changes in their environment, we need an adaptive approach that responds to changes in the system’s performance or in the environment in which it operates.
“Our primary focus is on investigating how we can create processes that will build trust in these systems, rather than just building the technologies themselves. We need to be confident that we have considered all aspects of what is needed to make them work effectively in the real world, and doing this requires starting at the very beginning of the development process.”
One of the major strengths of the project is its application of an “action research” model - an integrated approach in which experts from multiple fields take part in a cyclical feedback loop, creating opportunities for the project, the people and the processes to improve at every step.
This approach will further enhance the team’s scope to test the technical, social and ethical implications of the work, both through planned public engagement activities and through building a network of stakeholders and partners who will ultimately be the developers and users of future trustworthy autonomous systems.