
Details of Grant 

EPSRC Reference: EP/S003800/1
Title: Efficient Querying of Inconsistent Data
Principal Investigator: Pieris, Dr A
Other Investigators:
Libkin, Professor L
Researcher Co-Investigators:
Project Partners:
Meltwater & Wrapidity Ltd.
Department: Sch of Informatics
Organisation: University of Edinburgh
Scheme: Standard Research
Starts: 01 July 2018 Ends: 30 June 2022 Value (£): 606,439
EPSRC Research Topic Classifications:
Information & Knowledge Mgmt
EPSRC Industrial Sector Classifications:
Information Technologies
Related Grants:
Panel History:
Panel Date    Panel Name                                 Outcome
02 May 2018   EPSRC ICT Prioritisation Panel May 2018    Announced
Summary on Grant Application Form
Data is everywhere, produced by a multitude of applications, devices, and users. The utilization of Big Data is of paramount importance to many activities in domains that include science, industry, governments, and healthcare. The economic impact of Big Data is projected to exceed £62B by 2020. However, in this data-driven world, managing the data relevant to a certain application in a way we have become accustomed to, namely keeping the data in a well-maintained database or data warehouse, is no longer tenable. This is because data typically comes from continuously evolving, heterogeneous, and unreliable sources. This creates a notable gap between the opportunity provided by the availability of Big Data and the capability of users to make the most out of this opportunity, and calls for the creation of new tools and techniques. The key challenges posed by Big Data are usually summarised as the four V's of Big Data: Volume (scale of the data), Velocity (speed of change), Variety (different forms of data), and Veracity (inconsistency and incompleteness).

The proposed research addresses the Veracity, and more precisely, the inconsistency aspect of Big Data. Inconsistency refers to the fact that a database does not conform to its specification. For example, in a database that stores registered companies, each company must have a unique registration number (this is known as a key constraint). However, in an environment where data arrives from multiple sources, we can have two disagreeing sources that associate different companies with the same registration number. If both facts are stored in an integrated database, that makes it inconsistent.
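The key-constraint scenario above can be made concrete with a small sketch (the data and function names are our own illustration, not part of the project):

```python
# Detect key-constraint violations: each registration number must be
# associated with at most one company.
from collections import defaultdict

def key_violations(facts):
    """Group company facts by registration number and report clashes."""
    by_key = defaultdict(set)
    for reg_no, company in facts:
        by_key[reg_no].add(company)
    # A registration number mapped to two different companies violates
    # the key constraint, making the integrated database inconsistent.
    return {k: sorted(v) for k, v in by_key.items() if len(v) > 1}

# Two sources disagree about registration number 12345.
facts = [("12345", "Acme Ltd"), ("12345", "Apex Ltd"), ("67890", "Boxco")]
print(key_violations(facts))  # {'12345': ['Acme Ltd', 'Apex Ltd']}
```

Storing both conflicting facts, as an integration pipeline typically must, is exactly what produces the inconsistency the project targets.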

The problem of querying data that is both very large and inconsistent has been recognised as a common challenge in Big Data analysis that must be urgently addressed. While this problem has attracted considerable attention from the database community, we do not yet have good solutions that come with theoretical guarantees and can be implemented in practical systems.

The most common approach to the problem is known as Consistent Query Answering or CQA. Its idea is to find answers to queries that are consistent, i.e., true no matter how we resolve inconsistencies. Resolving inconsistencies means repairing the data to make it consistent with the specification. Since this elegant idea was introduced about 20 years ago, various notions of repairs have been proposed, all essentially asking for some sort of "minimal change" condition with respect to the inconsistent database.
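For a single key constraint, the CQA idea can be sketched in a few lines (a toy illustration under our own naming, not the project's implementation): a repair keeps exactly one fact per key value, and a consistent answer is one returned by the query on every repair.

```python
# Consistent query answering for one key constraint, by brute-force
# enumeration of all repairs.
from collections import defaultdict
from itertools import product

def repairs(facts):
    """Enumerate all repairs: keep exactly one fact per key value."""
    by_key = defaultdict(list)
    for fact in facts:
        by_key[fact[0]].append(fact)
    for choice in product(*by_key.values()):
        yield set(choice)

def consistent_answers(facts, query):
    """Answers returned by the query on *every* repair."""
    results = [query(r) for r in repairs(facts)]
    return set.intersection(*results)

facts = [("12345", "Acme Ltd"), ("12345", "Apex Ltd"), ("67890", "Boxco")]
# Query: which registration numbers are present?
q = lambda db: {reg for reg, _ in db}
print(sorted(consistent_answers(facts, q)))  # ['12345', '67890']
```

The number of repairs is exponential in the number of conflicts, which is why practical CQA cannot enumerate them and instead needs the efficient (approximation) algorithms the project pursues.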

Much of the research on querying inconsistent data has focused on isolating convenient scenarios where the problem can be solved efficiently, but many realistic scenarios remain beyond reach. This is reflected in CQA's limited practical applicability.

The goal of this project is to change this state of affairs by proposing a practically applicable approach to the problem of querying inconsistent data. We are convinced that the ultimate goal of this new approach should be efficient approximation algorithms that quickly deliver sufficiently good consistent answers with explicit error guarantees.

To achieve this, we need to rethink the very notion of what it means to repair an inconsistent database. Our key argument is that repairs should be viewed operationally, as a sequence of operations that take the database closer to a consistent state. Viewing repairs in this way opens up many new opportunities to design approximation algorithms for consistent query answering.
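The operational view can be sketched as follows (our own illustration of the general idea, not the project's algorithm): a repair is the explicit sequence of delete operations applied, each step bringing the database closer to satisfying the key constraint.

```python
# An operational repair: apply delete operations one at a time until the
# key constraint holds, recording the sequence of operations taken.
from collections import defaultdict

def violated_keys(db):
    """Return the groups of facts that share a key value."""
    by_key = defaultdict(set)
    for fact in db:
        by_key[fact[0]].add(fact)
    return [group for group in by_key.values() if len(group) > 1]

def operational_repair(db):
    """Delete one conflicting fact per step until consistent."""
    db, ops = set(db), []
    while (conflicts := violated_keys(db)):
        victim = sorted(conflicts[0])[0]  # deterministic choice for the sketch
        db.discard(victim)
        ops.append(("delete", victim))
    return db, ops

facts = {("12345", "Acme Ltd"), ("12345", "Apex Ltd"), ("67890", "Boxco")}
repaired, ops = operational_repair(facts)
print(ops)  # [('delete', ('12345', 'Acme Ltd'))]
```

Because the repair is a sequence of individual operations rather than a single monolithic "minimal" database, intermediate states and alternative operation orders become objects of study in their own right, which is what makes approximation algorithms with error guarantees natural to design.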

The proposed research programme is structured around three main themes: foundational study of the operational approach to consistent query answering and its complexity; design and analysis of efficient approximation algorithms for finding consistent query answers; and implementation and evaluation of these algorithms in a realistic setting, done with our project partner, a leading vendor of business intelligence software.
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
Impacts
Description
Summary
Date Materialised
Sectors submitted by the Researcher
Project URL:  
Further Information:  
Organisation Website: http://www.ed.ac.uk