
Details of Grant 

EPSRC Reference: EP/R034303/1
Title: Learning to move as a human: one-shot learning of human motion
Principal Investigator: Álvarez López, Dr M A
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Department: Computer Science
Organisation: University of Sheffield
Scheme: New Investigator Award
Starts: 01 August 2018
Ends: 31 July 2020
Value (£): 219,757
EPSRC Research Topic Classifications:
Artificial Intelligence
EPSRC Industrial Sector Classifications:
No relevance to Underpinning Sectors
Related Grants:
Panel History:
Panel Date: 11 Jan 2018
Panel Name: EPSRC ICT Prioritisation Panel Jan 2018
Outcome: Announced
Summary on Grant Application Form
Computational models for human motion analysis and synthesis have applications in fields as diverse as healthcare, computer graphics, and robotics. In healthcare, analysis of human movements can be used, for example, for tracking motor decline in the elderly. In computer graphics, human motion analysis can be used for human pose tracking from a single camera when measurements might be noisy or missing due to occlusion. In robotics, human motion analysis and synthesis can be used for teaching robots new skills by imitating demonstrations of a human, reducing the effort required to program an industrial robot or a service robot.

One approach to understanding how humans move consists of collecting examples of a particular human activity and designing a machine learning model that extracts patterns from those examples. The more examples we collect, the more likely it is that the model will find common features in the data that can be exploited for solving predictive tasks. However, in many applications that require human motion analysis and synthesis, particularly robot programming by demonstration, collecting many examples is expensive and time-consuming. That is, we would like a robot to learn a new skill from as few demonstrations as possible, much as a human does. Indeed, humans learn efficiently by imitation from just one or a few examples, and they can go further: they can generate new examples, or create abstract motions that were not present in the examples they imitated.

In this project, our objective is to develop a data-efficient machine learning model for human motion using the cognitive science concept of one-shot learning.

In cognitive science, one-shot learning (OL) refers to the idea of building intelligent agents from one or a few examples. Successful illustrations of this concept for building data-efficient models include OL models that generate speech concepts and handwritten characters with human-like appearance. Recent research in cognitive science suggests that humans achieve OL by combining three core principles applied to primitive concepts: causality, compositionality, and "learning to learn". It further suggests that these ingredients could play an active role in producing machine learning models that replicate human intelligence.

We will achieve our objective through the two key novelties of this proposal: (i) a generic methodology that simultaneously combines causality, compositionality, and learning to learn, applied to motor primitives, and (ii) a particular instantiation that uses physics-inspired Gaussian process (GP) representations of such motor primitives.

With respect to (i), although there are machine learning models that incorporate some of the ingredients of OL, their simultaneous combination to build data-efficient models for human motion analysis and synthesis has not yet been proposed. With respect to (ii), our GP representation of a motor primitive uses a physics-inspired covariance function with two key features: efficient use of data, due to its non-parametric nature, and inclusion of the OL principle of causality, providing a generative mechanism for trajectory data. Compositionality of these GP motor primitives will be approached using ideas from formal language theory, in particular hidden Markov models with explicit state durations. Learning to learn will be accomplished through hierarchies of such hidden Markov models.
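To illustrate how a non-parametric GP representation can exploit a single demonstration, the following sketch places a GP prior over a trajectory and conditions it on one noisy example. Note that the squared-exponential covariance and the synthetic sinusoidal "demonstration" are illustrative assumptions only; the project's physics-inspired covariance would take the place of the kernel function here.

```python
import numpy as np

def rbf_kernel(t1, t2, lengthscale=0.2, variance=1.0):
    """Squared-exponential covariance between time points (illustrative stand-in
    for the proposal's physics-inspired covariance)."""
    d = t1[:, None] - t2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# A single noisy demonstration of a periodic trajectory (synthetic example).
t_obs = np.linspace(0.0, 1.0, 10)
y_obs = np.sin(2 * np.pi * t_obs) + 0.05 * np.random.default_rng(0).normal(size=10)

# GP posterior at dense test times, conditioned on the one demonstration.
t_new = np.linspace(0.0, 1.0, 100)
noise = 0.05 ** 2
K = rbf_kernel(t_obs, t_obs) + noise * np.eye(len(t_obs))
K_s = rbf_kernel(t_new, t_obs)
alpha = np.linalg.solve(K, y_obs)
mean = K_s @ alpha                                       # posterior mean trajectory
cov = rbf_kernel(t_new, t_new) - K_s @ np.linalg.solve(K, K_s.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))          # pointwise uncertainty
```

The posterior mean interpolates the single demonstration, while the predictive standard deviation quantifies uncertainty away from the observed time points — the data efficiency that the non-parametric representation buys.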

In order to use the model in practice, we will provide a statistical inference framework for fitting the parameters of the OL model to given data, and for computing probability distributions for prediction. We will test the performance of the OL model on different tasks involving motion capture data, and on imitation learning using kinesthetic demonstrations from anthropomorphic robots. Our results will be fully reproducible, and our software will be released as open source.
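One standard route to fitting GP parameters — shown here as a generic sketch, not the project's actual inference framework — is to maximise the log marginal likelihood of the data. The grid search, noise level, and synthetic trajectory below are all illustrative assumptions.

```python
import numpy as np

def log_marginal_likelihood(t, y, lengthscale, noise=1e-2):
    """Log marginal likelihood of y under a zero-mean GP prior with a
    squared-exponential covariance (illustrative choice)."""
    d = t[:, None] - t[None, :]
    K = np.exp(-0.5 * (d / lengthscale) ** 2) + noise * np.eye(len(t))
    L = np.linalg.cholesky(K)                      # stable factorisation of K
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()             # 0.5 * log det(K)
            - 0.5 * len(t) * np.log(2 * np.pi))

t = np.linspace(0.0, 1.0, 30)
y = np.sin(2 * np.pi * t)                          # synthetic "demonstration"
grid = np.logspace(-2, 0, 25)                      # candidate lengthscales
best = max(grid, key=lambda ls: log_marginal_likelihood(t, y, ls))
```

In practice one would use gradient-based optimisation rather than a grid, but the principle is the same: hyperparameters are chosen to make the observed trajectories probable under the model.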
Key Findings
Potential use in non-academic contexts
Impacts
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR): http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.shef.ac.uk