This paper describes an approach for automated ingestion of biomedical data dictionaries. Automated ingestion, or reading, is the process of extracting the details of each data element from a data dictionary in a document format (such as PDF) into a fully structured format. The structured format is essential if the data dictionary metadata is to be used in applications such as data integration, and in evaluating the quality of the associated data. We present a machine-learning classification solution to the problem using conditional random field (CRF) classifiers that leverage multiple text- and character-based features of text rows in the document. We present an evaluation on several actual data dictionary documents demonstrating the effectiveness of our approach.
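The abstract above mentions classifying text rows using text- and character-based features. A minimal sketch of what such row-level feature extraction could look like is shown below; the feature names and thresholds are illustrative assumptions, not the authors' actual feature set, and the resulting feature dicts would feed a sequence labeler such as a CRF.

```python
# Hypothetical sketch of row-level feature extraction for data-dictionary
# rows; feature names are illustrative, not the paper's actual features.

def row_features(row: str) -> dict:
    """Extract simple text- and character-based features from one text
    row of a data-dictionary document (e.g., a PDF converted to text)."""
    tokens = row.split()
    alpha = sum(c.isalpha() for c in row)
    digit = sum(c.isdigit() for c in row)
    total = max(len(row), 1)
    return {
        "num_tokens": len(tokens),          # row length in tokens
        "starts_upper": row[:1].isupper(),  # element names often capitalized
        "digit_ratio": round(digit / total, 3),  # value-code rows are digit-heavy
        "alpha_ratio": round(alpha / total, 3),
        "has_colon": ":" in row,            # "NAME: description" pattern
        "all_caps": row.isupper() and alpha > 0,
    }

# A CRF would then tag each row as, e.g., element name, description,
# or permissible-value line, using these features plus row context.
feats = row_features("GENDER: Patient gender (1=Male, 2=Female)")
```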
The paper discusses challenges related to the design of a framework for real-time, adaptive, cost-effective collection of high-quality data for critical infrastructure and emergency management. A key objective of the framework is the ability to adaptively collect data based on: the capabilities of available data collection technologies; communication capabilities; temporal deadlines; required classification/prediction accuracy; and relevant data quality requirements.
The paper outlines the problem of quality for access control policies. It then discusses a few research directions.
Data from Twitter have been increasingly employed to study the impact of events. Conventionally, researchers have relied on keywords to create a panel of Twitter users who mention event-related keywords during and after an event. There are limitations to the keyword-based approach. First, the technique suffers from selection bias, since users who discuss an event are already more interested in event-related topics beforehand; it is thus unclear whether observed impacts are merely driven by a set of users who are intrinsically more interested in an event. Second, there are no viable comparison groups for a keyword-based sample of Twitter users. We propose an alternative sampling approach, geolocated panels defined by users' geolocation, to study responses to events on Twitter. Geolocated panels are exogenous to the keywords in users' tweets, resulting in less selection bias than the keyword-based approach. Geolocated panels allow us to follow within-person changes over time and enable the creation of comparison groups. We evaluate our panel selection approach in two real-world settings: response to mass shootings and response to TV advertising. We first empirically show that keyword-based panels are subject to selection biases, while geolocated panels reduce them. Then we show how geolocated panels can provide qualitatively different results. We believe that we are the first to provide a clear empirical example of how a better panel-selection design, based on an exogenous variable such as geography, both reduces selection bias compared to the current state of the art and increases the value of Twitter research for studying events.
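The contrast the abstract draws between keyword-based and geolocated panels can be sketched as follows. The field names and records are hypothetical, invented purely to illustrate the two selection rules, not the authors' data or pipeline.

```python
# Illustrative contrast (hypothetical fields and records): selecting a
# keyword-based panel vs. a geolocated panel from the same tweet stream.

tweets = [
    {"user": "a", "text": "thoughts on the shooting", "place": "Dayton, OH"},
    {"user": "b", "text": "lunch was great",          "place": "Dayton, OH"},
    {"user": "c", "text": "the shooting was awful",   "place": "Austin, TX"},
]

# Keyword panel: users are selected *because* they mentioned the event,
# which conditions selection on prior interest in the topic.
keyword_panel = {t["user"] for t in tweets if "shooting" in t["text"]}

# Geolocated panel: users are selected by location, exogenous to tweet
# content. It includes user "b", who never mentioned the event, enabling
# within-person baselines and comparison groups.
geo_panel = {t["user"] for t in tweets if t["place"] == "Dayton, OH"}
```

Here the keyword rule captures users "a" and "c" (both already discussing the event), while the geographic rule captures "a" and "b", including a user whose selection is independent of event-related interest.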
During data pre-processing, analysts spend a significant part of their time and effort profiling the quality of the data along with cleansing and transforming the data for further analysis. While quality metrics, ranging from general to domain-specific measures, facilitate assessment of the quality of a dataset, there are hardly any approaches to visually support the analyst in customizing and applying such metrics. Yet, visual approaches could facilitate user involvement in data quality assessment. We present MetricDoc, an interactive environment for assessing data quality that provides customizable, reusable quality metrics in combination with immediate visual feedback. Moreover, we provide an overview visualization of these quality metrics along with error visualizations that facilitate interactive navigation of the data to determine the causes of quality issues present in the data. In this paper we describe the architecture, design, and evaluation of MetricDoc, which underwent several design cycles, including heuristic evaluation and expert reviews as well as a focus group with data quality, human-computer interaction, and visual analytics experts.
Healthcare organizations increasingly rely on electronic information to optimize their operations. Highly diverse information from various sources accentuates the relevance and importance of information quality (IQ). The quality of information needs to be improved to support a more efficient and reliable utilization of healthcare information systems (IS). This can only be achieved through the implementation of initiatives followed by most users across an organization. The purpose of this study is to examine how IS users' awareness of IQ issues affects their actual practices toward IQ initiatives. Influenced by awareness of the beneficial and problematic situations generated by IQ practices, users' motivation is found to influence their IQ-related behavior. In addition, social influences and facilitating conditions moderate the relationship between user intention and actual practice. The theoretical and practical implications of the findings are discussed, especially IQ best practices in healthcare settings.