Trends in Cleaning Relational Data: Consistency and Deduplication
Author(s): Ihab F. Ilyas; Xu Chu
Source: Foundations and Trends® in Databases, Volume 5, Number 4, Pages 281-393 (115 pages). ISSN Print: 1931-7883, ISSN Online: 1931-7891. Publisher: Now Publishers. DOI: 10.1561/1900000045
Abstract:
Data quality is one of the most important problems in data management, since dirty data often leads to inaccurate data
analytics results and wrong business decisions. Poor data across businesses and the government costs the U.S. economy $3.1 trillion a year, according to a report
by InsightSquared in 2012. To detect data errors, data quality rules or integrity constraints (ICs) have been proposed as a declarative way to describe legal or
correct data instances. Any subset of data that does not conform to the defined rules is considered erroneous, which is also referred to as a violation. Various
kinds of data repairing techniques with different objectives have been introduced, where algorithms are used to detect subsets of the data that violate the declared
integrity constraints, and even to suggest updates to the database such that the new database instance conforms to these constraints. While some of these algorithms
aim to minimally change the database, others involve human experts or knowledge bases to verify the repairs suggested by the automatic repairing algorithms.
In this paper, we discuss the main facets and directions in designing error detection and repairing techniques. We propose a taxonomy of current anomaly detection
techniques, including error types, the automation of the detection process, and error propagation. We also propose a taxonomy of current data repairing techniques,
including the repair target, the automation of the repair process, and the update model. We conclude by highlighting current trends in “big data” cleaning.
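
To make the notion of an IC violation concrete, the sketch below checks a single functional dependency over an in-memory table and reports the tuple groups that disagree on the right-hand side. The table, attribute names, and the dependency zip → city are illustrative assumptions for this abstract, not an algorithm taken from the monograph itself.

```python
# Minimal sketch (illustrative, not the authors' implementation): detect
# violations of a functional dependency X -> Y over an in-memory relation.
from collections import defaultdict

def fd_violations(rows, lhs, rhs):
    """Group tuples by the LHS attributes; any group mapping to more than
    one RHS value violates the FD and is returned as a violation."""
    groups = defaultdict(list)
    for row in rows:
        key = tuple(row[a] for a in lhs)
        groups[key].append(row)
    violations = []
    for key, group in groups.items():
        rhs_values = {tuple(r[a] for a in rhs) for r in group}
        if len(rhs_values) > 1:  # inconsistent RHS values => violation
            violations.append((key, group))
    return violations

# Example: the FD zip -> city says tuples sharing a zip code must agree on city.
employees = [
    {"name": "Alice", "zip": "53703", "city": "Madison"},
    {"name": "Bob",   "zip": "53703", "city": "Madiso"},   # typo: violates zip -> city
    {"name": "Carol", "zip": "60601", "city": "Chicago"},
]

for key, group in fd_violations(employees, lhs=["zip"], rhs=["city"]):
    print("FD violation for zip =", key[0], "->", {r["city"] for r in group})
```

A repairing algorithm would go one step further and propose an update (e.g., changing one of the conflicting city values) so that the new instance satisfies the declared constraints, which is the class of techniques the taxonomy above organizes.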