
Case Study: Measuring the incremental value of the Data Trust Score™
How the Data Trust Score identified 2.7x more quality risk than device-based checks alone.
Background
As part of the validation process for the Data Trust Score™, Data Quality Co-op analyzed respondent-level outcomes across a large, multi-client dataset to compare different approaches to identifying quality risk. The goal was to understand whether combining technical and behavioral signals materially changes which respondents are flagged, and by how much.
Approach
This analysis was conducted on a dataset of approximately 56,000 respondents, each evaluated by multiple data quality assurance tools. Every transaction was assessed using:
- 3rd-party device fraud scoring
- Device fraud scoring using the DQC Quality Tool
- The Data Trust Score
All tools were applied to the same underlying transactions to ensure comparability.
Key Findings
When applied independently, 3rd-party device-based fraud detection flagged 5.2% of respondents, while the DQC Quality Tool flagged 3.5% of respondents for device fraud. On average, these approaches identified 4.35% of respondents as fraudulent.
When the Data Trust Score was applied to the same dataset, 11.6% of respondents were flagged as failing the trust threshold. That is roughly 2.7x the 4.35% average flag rate produced by the individual device-based approaches.
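For readers who want to trace the 2.7x figure, the arithmetic below reproduces it from the flag rates reported above (a minimal sketch; the variable names are ours for illustration and are not part of any DQC tooling):

```python
# Illustrative arithmetic only, using the figures reported in this case study.
third_party_rate = 0.052   # flagged by 3rd-party device fraud scoring
dqc_tool_rate = 0.035      # flagged by the DQC Quality Tool
trust_score_rate = 0.116   # failing the Data Trust Score threshold

average_device_rate = (third_party_rate + dqc_tool_rate) / 2
lift = trust_score_rate / average_device_rate

print(f"Average device-based flag rate: {average_device_rate:.2%}")  # 4.35%
print(f"Data Trust Score lift: {lift:.1f}x")                         # 2.7x
```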
The incremental lift observed with the Data Trust Score reflects respondents who would not be identified by device-level signals alone but who show consistent patterns of low-quality behavior across studies. These respondents often pass technical checks while exhibiting behaviors such as illogical response patterns or repeated open-end failures over time.
Because the Data Trust Score incorporates participation history and in-survey behavior alongside technical risk, it captures patterns that are not visible when signals are evaluated in isolation.
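To make the intuition concrete, the sketch below shows how a composite score can flag a respondent that any single signal family would pass. It is purely hypothetical: the actual Data Trust Score methodology is not disclosed in this case study, and every name, weight, and threshold here is invented for illustration.

```python
# Hypothetical sketch of blending signal families into one risk score.
# None of these names, weights, or thresholds describe the actual
# Data Trust Score; they only illustrate the combining principle.
from dataclasses import dataclass

@dataclass
class RespondentSignals:
    device_risk: float     # 0-1, technical/device-level fraud risk
    in_survey_risk: float  # 0-1, e.g. illogical responses, open-end failures
    history_risk: float    # 0-1, low-quality behavior across past studies

def composite_risk(s: RespondentSignals) -> float:
    """Weighted blend of signal families (weights are illustrative)."""
    return 0.4 * s.device_risk + 0.3 * s.in_survey_risk + 0.3 * s.history_risk

# A respondent with a clean device profile can still exceed a risk
# threshold once behavioral and historical signals are included.
respondent = RespondentSignals(device_risk=0.1, in_survey_risk=0.8, history_risk=0.8)
print(composite_risk(respondent) > 0.5)  # True: flagged despite passing device checks
```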
Implications
This validation demonstrates that the Data Trust Score identifies a materially broader set of quality risks than device-based or single-study behavioral checks alone. It supports earlier intervention, reduces reliance on post-survey cleaning, and provides a clearer, more consistent basis for quality decisions across clients and studies.