
ACM Journal of Data and Information Quality (JDIQ)

Latest Articles

The Challenge of “Quick and Dirty” Information Quality

Data Quality Challenges in Distributed Live-Virtual-Constructive Test Environments

Information Quality Research Challenge: Information Quality in the Age of Ubiquitous Digital Intermediation

As information technology becomes an integral part of daily life, increasingly, people understand...

Data Standards Challenges for Interoperable and Quality Data

Challenges for Context-Driven Time Series Forecasting

Predicting time series is a crucial task for organizations, since decisions are often based on uncertain information. Many forecasting models are...

Combining User Reputation and Provenance Analysis for Trust Assessment

Trust is a broad concept that in many systems is often reduced to user reputation alone. However, user reputation is just one way to determine trust....

Automatic Discovery of Abnormal Values in Large Textual Databases

Textual databases are ubiquitous in many application domains. Examples of textual data range from names and addresses of customers to social media...

EXPERIENCE: Succeeding at Data Management—BigCo Attempts to Leverage Data

In a manner similar to most organizations, BigCompany (BigCo) was determined to benefit strategically from its widely recognized and vast quantities...

NEWS

Jan. 2016 -- New book announcement


Carlo Batini and Monica Scannapieco have a new book:

Data and Information Quality: Dimensions, Principles and Techniques  

Springer Series: Data-Centric Systems and Applications, soon available from the Springer shop

The Springer flyer is available here


Special issue on Web Data Quality

The goal of this special issue is to present innovative research in the areas of Web Data Quality Assessment and Web Data Cleansing. The editors of this special issue are Christian Bizer, Xin Luna Dong, Ihab Ilyas, and Maria-Esther Vidal. See the call for papers for more details.


New options for ACM authors to manage rights and permissions for their work

ACM introduces a new publishing license agreement, an updated copyright transfer agreement, and a new author-pays option which allows for perpetual open access through the ACM Digital Library. For more information, visit the ACM Author Rights webpage.


ICIQ 2015, the International Conference on Information Quality, will take place on July 24 in Cambridge, MA, at MIT.

Experience and Challenge papers: JDIQ now accepts two new types of papers. Experience papers describe real-world applications, datasets, and other experiences in handling poor-quality data. Challenge papers briefly describe a novel problem or challenge for the IQ community. See the calls for papers for details.

Special Issue on Provenance and Quality of Data and Information: The term provenance refers broadly to information about the origin, context, derivation, lineage, ownership, or history of some artifact. The provenance of data is more specifically a form of structured metadata that records the activities involved in data production. The notion applies to a broad variety of data types, from database records to scientific datasets, business transaction logs, web pages, social media messages, and more. At the same time, different definitions and measures of quality apply to each of these data types in different domains.

The JDIQ guest editors are Paolo Missier (Newcastle University, UK, paolo.missier@ncl.ac.uk) and Paolo Papotti (Qatar Computing Research Institute, Qatar, ppapotti@qf.org.qa).

Forthcoming Articles
EXPERIENCE: Glitches in Databases, How to Ensure Data Quality by a Combined Approach

Data are a strategic asset in every organization. The quality of data can make a difference in a number of scenarios, for example, using data mining techniques to gain market share, effectively managing customer relationships, providing the proper service to citizens, or applying econometric tools to reliable data to support program evaluation and inform policy makers on a range of decisions. However, taking into account the semantics of data to discover and correct errors is too expensive and complex for a database that contains information pertaining to different domains (e.g., customers' personal data, prospective projects, billing, and accounting). This paper proposes a new method to analyse the quality of datasets stored in the tables of a database with no knowledge of the semantics of the data and without the need to define repositories of rules. The proposed method combines suitably revised versions of different existing approaches to boost overall performance and accuracy. A novel transformation algorithm is conceived that treats the items of database tables as data points in real coordinate space of n dimensions. The application of the method to a set of archives, some of which have been studied extensively in the literature, provides very promising experimental results and outperforms the individual application of the other techniques. Finally, a list of future research directions is highlighted.
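
To make the general idea concrete, here is a minimal, hypothetical sketch (not the authors' algorithm): each value of a column is mapped to a point in real coordinate space using simple, semantics-free features, and values that lie far from the column's typical profile are flagged as candidate glitches.

```python
# Hypothetical illustration of semantics-free glitch detection: map each
# value of a column to a point in R^n and flag values whose feature
# profile deviates strongly from the column's, using robust (median/MAD)
# z-scores. Feature choices and thresholds are illustrative assumptions.
import numpy as np

def featurize(value):
    """Semantics-free features: length, digit/letter/other-character ratios."""
    s = str(value)
    n = max(len(s), 1)
    return np.array([
        len(s),
        sum(c.isdigit() for c in s) / n,
        sum(c.isalpha() for c in s) / n,
        sum(not c.isalnum() for c in s) / n,
    ], dtype=float)

def flag_glitches(column, threshold=3.5):
    """Return values whose features lie far from the column's median profile."""
    X = np.vstack([featurize(v) for v in column])
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0) + 1e-9   # robust spread per feature
    z = np.abs(X - med) / (1.4826 * mad)
    scores = z.max(axis=1)                            # worst feature per value
    return [v for v, s in zip(column, scores) if s > threshold]

# Example: a phone-number column with one malformed entry.
print(flag_glitches(["555-1234", "555-9876", "555-4321", "not available", "555-1111"]))
# -> ['not available']
```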

Replacing Mechanical Turkers? How to Evaluate Learning Results with Semantic Properties

Some machine learning algorithms offer more than just superior predictive power; they also generate additional information about the dataset on which they were trained, providing further insight into the underlying data. Examples are topic modeling algorithms such as Latent Dirichlet Allocation (LDA) (Blei et al., 2003), whose topics are often inspected as part of the analysis that many researchers perform on their data. More recently, deep learning approaches such as the Word2Vec word embedding algorithm (Mikolov et al., 2013) have produced models with semantic properties. These algorithms are immensely useful: they tell us something about the environment from which they generate their predictions. One pressing challenge is how to evaluate the quality of the information they produce. This evaluation (if done at all) is usually carried out via user studies. In the context of LDA topics, researchers ask human subjects questions and observe how they understand different aspects of the topics (Chang et al., 2009). While this type of evaluation is sound, it is expensive in both time and money, and thus cannot easily be reproduced independently. Such experiments have the additional drawbacks of being hard to scale up and difficult to generalize. We therefore pose this challenging question about evaluating the information quality of these semantic properties: could we find automatic methods of evaluating information quality as easily as we evaluate predictive power using accuracy, precision, and recall?
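
As one concrete example of the kind of automatic evaluation the challenge asks for, the sketch below scores a word embedding on a toy analogy test using cosine similarity. The tiny hand-built vectors are purely illustrative and stand in for a trained Word2Vec model.

```python
# A hedged sketch of one automatic check of "semantic properties": the
# word-analogy test (a : b :: c : ?), scored with cosine similarity over
# embedding vectors. The 2-d vectors below are an invented toy example.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def analogy_accuracy(emb, quadruples):
    """Fraction of (a, b, c, d) analogies where the nearest neighbour of
    vec(b) - vec(a) + vec(c), excluding a, b, c, is d."""
    correct = 0
    for a, b, c, d in quadruples:
        target = emb[b] - emb[a] + emb[c]
        candidates = {w: v for w, v in emb.items() if w not in (a, b, c)}
        best = max(candidates, key=lambda w: cosine(target, candidates[w]))
        correct += (best == d)
    return correct / len(quadruples)

# Toy embedding: axis 0 ~ "royalty", axis 1 ~ "gender".
emb = {
    "man":   np.array([0.0,  1.0]),
    "woman": np.array([0.0, -1.0]),
    "king":  np.array([1.0,  1.0]),
    "queen": np.array([1.0, -1.0]),
    "apple": np.array([0.5,  0.0]),
}
print(analogy_accuracy(emb, [("man", "king", "woman", "queen")]))  # 1.0
```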

Preserving Patient Privacy When Sharing Same-Disease Data

Medical and health data are often collected for studying a specific disease. For such same-disease microdata, a privacy disclosure occurs as soon as an individual is known to be in the microdata. Individuals in same-disease microdata are thus subject to higher disclosure risk than those in microdata with different diseases. This important problem has been overlooked in data privacy research and practice, and no prior study has addressed it. In this study, we analyze the disclosure risk for the individuals in same-disease microdata and propose a new metric that is appropriate for measuring disclosure risk in this situation. An efficient algorithm is designed and implemented for anonymizing the same-disease data to minimize the disclosure risk while keeping data utility as good as possible. An experimental study was conducted on real patient and population data. Experimental results show that traditional re-identification risk measures underestimate the actual disclosure risk for the individuals in same-disease microdata and demonstrate that the proposed approach is very effective in reducing the actual risk for the same-disease data. This study suggests that privacy protection policy and practice for sharing medical and health data should consider not only the individuals' identifying attributes but also the health and disease information contained in the data. It is recommended that data-sharing entities employ a statistical approach, instead of HIPAA's Safe Harbor policy, when sharing same-disease microdata.
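
For illustration only, the sketch below computes a simplified, k-anonymity-style re-identification risk from quasi-identifier equivalence classes; it is not the metric or algorithm proposed in the paper, and the column names are invented.

```python
# A simplified, k-anonymity-style illustration of re-identification risk.
# In same-disease microdata every record shares the sensitive value, so
# merely being in the table discloses the disease; the quasi-identifiers
# then determine how easily a specific record can be singled out.
import pandas as pd

def disclosure_risk(df, quasi_identifiers):
    """Per-record risk = 1 / size of the record's quasi-identifier
    equivalence class (smaller classes are easier to re-identify)."""
    class_sizes = df.groupby(quasi_identifiers)[quasi_identifiers[0]].transform("size")
    return 1.0 / class_sizes

# Hypothetical same-disease sample (all rows share the diagnosis).
df = pd.DataFrame({
    "age_band": ["30-39", "30-39", "30-39", "60-69"],
    "zip3":     ["021",   "021",   "021",   "100"],
    "sex":      ["F",     "F",     "F",     "M"],
})
risk = disclosure_risk(df, ["age_band", "zip3", "sex"])
print(risk.max(), risk.mean())  # the unique (60-69, 100, M) record has risk 1.0
```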

Challenges in Ontology Evaluation

Ontologies often provide the semantics, as middleware, for a range of Artificial Intelligence tools and can be used to make logical assertions. Ontologies can define objects and the relationships among them in any domain-specific system. Finding logic errors in complete ontologies proves largely impossible for even the most widely used reasoners, and logic is just one of numerous ways in which an ontology might be assessed. We therefore suggest that evaluating completed ontologies is of limited value. Instead, we argue that the logical connections within ontologies should be tested during development, by tools such as Scenario-based Ontology Evaluation (SCONE). We would change present tools so that domain experts are able to make changes in an ontology without knowing ontology languages or description logic, and so that ontology-based systems could allow fuzzy matching based on ontologies that might be imperfect.

Unifying Data and Constraint Repairs

Integrity constraints play an important role in data design. However, in an operational database, they may not be enforced for many reasons. Hence, over time, data may become inconsistent with respect to the constraints. To manage this, several approaches have proposed techniques to repair the data, by finding minimal or lowest cost changes to the data that make it consistent with the constraints. Such techniques are appropriate for the old world where data changes, but schemas and their constraints remain fixed. In many modern applications, however, constraints may evolve over time as application or business rules change, as data is integrated with new data sources, or as the underlying semantics of the data evolves. In such settings, when an inconsistency occurs, it is no longer clear if there is an error in the data (and the data should be repaired), or if the constraints have evolved (and the constraints should be repaired). In this work, we present a novel unified cost model that allows data and constraint repairs to be compared on an equal footing. We consider repairs over a database that is inconsistent with respect to a set of rules, modeled as functional dependencies (FDs). FDs are the most common type of constraint, and are known to play an important role in maintaining data quality. We evaluate the quality and scalability of our repair algorithms over synthetic data and present a qualitative case study using a well-known real dataset. The results show that our repair algorithms not only scale well for large datasets but also accurately capture and correct inconsistencies, and accurately decide when a data repair versus a constraint repair is best.
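
As a minimal illustration of the starting point for such repairs, the sketch below detects violations of a functional dependency in a small table; the column names and data are hypothetical, and the repair decision itself is not implemented.

```python
# A minimal sketch of detecting functional-dependency (FD) violations,
# the kind of inconsistency that data/constraint repair begins from.
import pandas as pd

def fd_violations(df, lhs, rhs):
    """Return the rows that violate the FD lhs -> rhs, i.e. rows whose
    lhs values map to more than one distinct rhs value."""
    distinct_rhs = df.groupby(lhs)[rhs].transform("nunique")
    return df[distinct_rhs > 1]

# Example: zip -> city should hold, but 10001 maps to two city spellings.
df = pd.DataFrame({
    "zip":  ["10001", "10001", "94105"],
    "city": ["New York", "NYC", "San Francisco"],
})
print(fd_violations(df, ["zip"], "city"))
# Whether to repair the data ("NYC" -> "New York") or relax the constraint
# is exactly the decision a unified cost model must make.
```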

Veracity of Big Data: Challenges of Cross-modal Truth Discovery

In this challenge paper, we argue that the next generation of data management and data sharing systems needs to manage not only the volume and variety of Big Data but, most importantly, the veracity of the data. Designing truth discovery systems requires a fundamental paradigm shift in data management that goes beyond adding new layers of data fusion heuristics or developing yet another probabilistic graphical truth discovery model. Actionable, Web-scale truth discovery requires a transdisciplinary approach that incorporates the dynamic and cross-modal dimension related to multi-layered networks of contents and sources.
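
For readers unfamiliar with the term, the sketch below shows a deliberately naive majority-voting baseline for truth discovery over conflicting source claims; the challenge argues that Web-scale, cross-modal veracity requires going well beyond this kind of heuristic. The example claims and source names are invented.

```python
# A deliberately naive voting baseline for truth discovery, included only
# to make the term concrete: given conflicting (source, object, value)
# claims, keep the value asserted by the most sources per object.
from collections import defaultdict

def majority_vote(claims):
    """claims: iterable of (source, object, value) triples."""
    votes = defaultdict(lambda: defaultdict(set))
    for source, obj, value in claims:
        votes[obj][value].add(source)
    return {obj: max(vals, key=lambda v: len(vals[v])) for obj, vals in votes.items()}

claims = [
    ("site_a", "capital_of_australia", "Canberra"),
    ("site_b", "capital_of_australia", "Sydney"),
    ("site_c", "capital_of_australia", "Canberra"),
]
print(majority_vote(claims))  # {'capital_of_australia': 'Canberra'}
```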

The Challenge of Improving Credibility of User-Generated Content in Online Social Networks

In every environment of information exchange, Information Quality (IQ) is considered one of the most important issues. Studies of Online Social Networks (OSNs) analyze a number of related subjects that span both theoretical and practical aspects, from data quality identification and simple attribute classification to quality assessment models for various social environments. Among the several factors that affect information quality in online social networks is the credibility of user-generated content. To address this challenge, proposed solutions include community-based evaluation and labeling of user-generated content in terms of accuracy, clarity, and timeliness, along with well-established real-time data mining techniques.
