
Details of Grant 

EPSRC Reference: EP/P024327/1
Title: Supporting Feature Engineering for End-User Design of Gestural Interactions
Principal Investigator: Fiebrink, Dr R
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Department: Computing Department
Organisation: Goldsmiths College
Scheme: First Grant - Revised 2009
Starts: 26 June 2017
Ends: 25 December 2018
Value (£): 99,992
EPSRC Research Topic Classifications:
Artificial Intelligence
Human-Computer Interactions
EPSRC Industrial Sector Classifications:
No relevance to Underpinning Sectors
Related Grants:
Panel History:
Panel Date: 12 Jan 2017
Panel Name: EPSRC ICT Prioritisation Panel Jan 2017
Outcome: Announced
Summary on Grant Application Form
Sensors for analysing human gesture and activity (such as accelerometers and gyroscopes) are becoming increasingly affordable and easy to connect to existing software and hardware. There is great, unexplored potential for these sensors to support custom gestural control and activity recognition systems. Applications include the creation of bespoke gestural control interfaces for disabled people, new digital musical instruments, personalised performance analysis systems for athletes, and new embodied interactions for gaming and interactive art. The ability to easily create novel interactions with motion sensors also benefits schoolchildren and university students who are learning about computing through the use of sensors with platforms such as BBC micro:bit and Arduino.

We have previously established methods, based on interactive machine learning, that enable people without programming expertise to build custom gesturally controlled systems. These methods allow people to create new systems simply by demonstrating examples of human actions, along with the desired label or computer response for each action.
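As a minimal illustration of this demonstration-driven workflow (a sketch only, not the project's actual software): a "model" can be nothing more than the user's labelled demonstrations, with new sensor readings classified by their nearest demonstrated example. The feature vectors, labels, and the 1-nearest-neighbour choice here are all illustrative assumptions.

```python
import math

def train(examples):
    """Store (feature_vector, label) demonstrations; a 1-nearest-neighbour
    'model' is just the list of demonstrations itself."""
    return list(examples)

def classify(model, x):
    """Label a new sensor reading by its closest demonstrated example."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(model, key=lambda ex: dist(ex[0], x))[1]

# Demonstrations: (accelerometer reading [x, y, z], desired label)
demos = [
    ([0.0, 0.0, 1.0], "rest"),       # device flat and still
    ([0.9, 0.1, 0.2], "tilt-left"),  # strong x-axis reading
]
model = train(demos)
print(classify(model, [0.8, 0.0, 0.3]))  # nearest to the tilt-left demo
```

The appeal for non-experts is that adding a new gesture requires only another demonstration, not any change to code.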

Unfortunately, many compelling applications of custom gesture and activity recognition require substantial pre-processing of raw sensor data (i.e., "feature engineering") before machine learning can be applied successfully. Experts first apply a variety of signal processing techniques to sensor data in order to make machine learning feasible. Many people who would benefit from the ability to create custom gestural interactions lack the signal processing and programming expertise to apply those methods effectively or efficiently. It is not known how to successfully expose control over feature engineering to non-experts, nor what the trade-offs among different strategies for exposing control might be.
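To make concrete what such pre-processing involves, the sketch below computes simple sliding-window statistics (mean, standard deviation, RMS) over a raw one-axis sensor stream before any learning takes place. The window size and the particular features are illustrative assumptions, not choices mandated by the project.

```python
import math

def window_features(samples, window=4):
    """Turn a raw 1-D sensor stream into per-window features
    (mean, standard deviation, RMS) -- a typical first step
    before a classifier ever sees the data."""
    feats = []
    for i in range(0, len(samples) - window + 1, window):
        w = samples[i:i + window]
        mean = sum(w) / window
        var = sum((s - mean) ** 2 for s in w) / window
        rms = math.sqrt(sum(s * s for s in w) / window)
        feats.append((mean, math.sqrt(var), rms))
    return feats

raw = [0.0, 0.1, -0.1, 0.0,  1.0, 1.2, 0.8, 1.0]  # still, then moving
for mean, sd, rms in window_features(raw):
    print(mean, sd, rms)  # per-window summary statistics
```

Choosing which statistics to compute, and over which windows, is precisely the expertise that this project aims to make unnecessary for end users.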

Our hypothesis is that it is possible to support the creation of more complex gestural control and analysis systems by non-experts. We propose to develop and compare three methods for exposing control over feature engineering to non-experts, each requiring a different degree of user involvement.

The first method uses a fully automated approach, which attempts to computationally identify good features. The second method first elicits high-level information about the problem from the user, and then employs this information to better inform the automated approach. The third method directly involves the user in the feature engineering process. By leveraging users' ability to demonstrate new gestures, identify patterns in visualisations, and reason about the problem domain - as well as computers' ability to employ users' demonstrations to propose new relevant features - this interactive approach may yield more accurate recognisers. Such an approach may also help users learn about the utility of different features, enabling more efficient debugging of their systems and a better understanding of how to build other systems in the future.
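As one hypothetical illustration of the feature-proposal step in the third method, a system could score candidate features by how well they separate the user's demonstrated classes. The Fisher-style separability criterion and the data below are assumptions for the sake of the sketch, not techniques the project prescribes.

```python
def fisher_score(values_a, values_b):
    """Score a candidate feature by how well it separates two demonstrated
    gesture classes: (difference of class means)^2 / (sum of variances).
    A hypothetical stand-in for a feature-proposal criterion."""
    def mean(v):
        return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    denom = (var(values_a) + var(values_b)) or 1e-12  # guard zero variance
    return (mean(values_a) - mean(values_b)) ** 2 / denom

# Feature values computed from each class's demonstrations (assumed data):
shake_rms  = [1.1, 1.3, 1.2]
rest_rms   = [0.1, 0.2, 0.1]
shake_mean = [0.5, 0.6, 0.4]
rest_mean  = [0.5, 0.4, 0.6]

print(fisher_score(shake_rms, rest_rms))    # high: RMS separates the classes
print(fisher_score(shake_mean, rest_mean))  # near zero: mean does not
```

Surfacing such scores to the user, alongside visualisations, is one way an interactive system might both propose relevant features and help users understand why they work.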

We are interested in understanding both the accuracy and usability of these methods. We will evaluate each method with people training several types of gesture and activity recognisers. We will compare each method in terms of the accuracy of the final system, the time required for users to build the system, users' subjective experiences of the design process and quality of the final system, and improvements in users' ability to reason about building gestural interactions with sensors.

This research will enable a wider range of people to successfully use popular motion sensors to create bespoke gestural control and analysis systems, for use in a broader range of applications. Our techniques will be directly implemented in existing open-source software for interactive machine learning. The methods and study outcomes will also inform future work to support feature engineering for users creating real-time systems with other types of sensors.
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
Description: This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.gold.ac.uk