A 'no-touch touchscreen' developed for use in cars could also have widespread applications in a post-COVID-19 world, by reducing the risk of transmission of pathogens on surfaces.
Touchscreens and other interactive displays are something most people use multiple times per day, but they can be difficult to use while in motion, whether that's driving a car or changing the music on your phone while running.
The patented technology, known as 'predictive touch', was developed by engineers at the University of Cambridge as part of a research collaboration with Jaguar Land Rover. It uses a combination of artificial intelligence and sensor technology to predict a user's intended target on touchscreens and other interactive displays or control panels, selecting the correct item before the user's hand reaches the display.
The technology uses machine intelligence to determine the item the user intends to select on the screen early in the pointing task, speeding up the interaction. It uses a gesture tracker, including vision-based or RF-based sensors, which are increasingly common in consumer electronics; contextual information such as user profile, interface design, and environmental conditions; and data available from other sensors, such as an eye-gaze tracker, to infer the user's intent in real time.
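The article does not disclose the patented algorithm, but the idea of inferring a pointing target from a partial hand trajectory can be illustrated with a simple Bayesian sketch. The function name, coordinates, and the Gaussian angular-error model below are all illustrative assumptions, not the actual Jaguar Land Rover / Cambridge implementation:

```python
import math

def target_posteriors(trajectory, targets, sigma_deg=40.0):
    """Hypothetical sketch of intent inference: score each candidate
    on-screen target by how well the finger's most recent direction of
    motion points toward it, then normalise into a probability
    distribution (uniform prior over targets assumed)."""
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    heading = math.atan2(y1 - y0, x1 - x0)  # current motion direction
    sigma = math.radians(sigma_deg)         # angular-error spread
    likelihoods = []
    for (tx, ty) in targets:
        bearing = math.atan2(ty - y1, tx - x1)
        # Wrap the angular error into [-pi, pi] before scoring.
        err = math.atan2(math.sin(bearing - heading),
                         math.cos(bearing - heading))
        likelihoods.append(math.exp(-(err ** 2) / (2 * sigma ** 2)))
    total = sum(likelihoods)
    return [l / total for l in likelihoods]

# Finger moving roughly rightward, toward the second of three buttons:
path = [(100, 200), (140, 198)]
buttons = [(150, 400), (400, 195), (150, 50)]
probs = target_posteriors(path, buttons)
print(max(range(len(probs)), key=probs.__getitem__))  # → 1
```

A real system would fuse many more cues, as the article notes: gaze direction, user profile, and interface layout would each contribute their own likelihood terms, and the selection would fire once the posterior for one target crosses a confidence threshold.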
"This technology also offers us the chance to make vehicles safer by reducing the cognitive load on drivers and increasing the amount of time they can spend focused on the road ahead. This is a key part of our Destination Zero journey," said Lee Skrypchuk, Human Machine Interface Technical Specialist at Jaguar Land Rover.
It could also be used for displays that do not have a physical surface, such as 2D or 3D projections or holograms. It also promotes inclusive design practices and offers additional design flexibility, since the interface functionality can be seamlessly personalised for a given user, and the display size or location is no longer constrained by the user's ability to reach and touch it.
"Our technology has numerous advantages over more basic mid-air interaction techniques or conventional gesture recognition, because it supports intuitive interactions with legacy interface designs and doesn’t require any learning on the part of the user," said Dr Bashar Ahmad, who led the development of the technology and the underlying algorithms with Professor Godsill. "It fundamentally relies on the system to predict what the user intends and can be incorporated into both new and existing touchscreens and other interactive display technologies."
This software-based solution for contactless interactions has reached high technology readiness levels and can be seamlessly integrated into existing touchscreens and interactive displays, so long as the correct sensor data is available to support the machine learning algorithm.