
The Data Trust Score™: A fundamentally different approach to data quality
The industry's first comprehensive metric built on ecosystem-level data to measure respondent trust across studies.
Bad input produces bad outcomes. Data quality is often evaluated after the fact, with technical signals, behavioral indicators and participation history reviewed separately using different tools and thresholds. By addressing data quality locally and post hoc, we risk allowing low-quality responses to influence results.
The Data Trust Score™ (DTS) is designed to address systemic data quality risks at the beginning of data collection. It brings multiple categories of historical and real-time signals together into a single, consistent metric that reflects how respondents actually perform over time. By drawing on all available signal types across the ecosystem, DTS helps teams move faster with greater confidence in the data they use.
Developed by Data Quality Co-op (DQC), the Data Trust Score is based on observed behavior across the data ecosystem and ranges from 0 to 1,000. Higher scores indicate a stronger, more consistent pattern of reliable participation.
How the Data Trust Score is calculated
The Data Trust Score combines three categories of inputs:
- Technical fraud indicators, such as whether a device has suspicious characteristics;
- In-survey behavior, including response patterns, task timing and open-end quality; and
- Survey participation history, including frequency, outcomes and timing across studies.
Each input contributes information that, on its own, provides only partial context. A device-level fraud score can indicate technical risk but does not capture how someone behaves once they are in a survey. In-survey checks capture engagement within a single project but do not account for repeated patterns across studies. Participation history adds longitudinal context that neither of the other signals provides on its own.
By combining these inputs, the Data Trust Score reflects how respondents actually interact with surveys over time.
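DQC has not published the exact weighting behind the score, but the rollup can be pictured as a blend of the three input categories, expressed on the 0 to 1,000 scale. The sketch below is illustrative only: the category weights, the 0-to-1 normalization of each input and the field names are assumptions for the example, not DQC's methodology.

```python
# Minimal sketch of how three signal categories could roll up into a 0-1000 score.
# The weights and normalization are illustrative assumptions, not DQC's published method.

from dataclasses import dataclass

@dataclass
class RespondentSignals:
    device_risk: float        # 0.0 (clean) to 1.0 (highly suspicious), from technical fraud checks
    in_survey_quality: float  # 0.0 (poor) to 1.0 (strong), from timing, patterns, open-end quality
    history_quality: float    # 0.0 (poor) to 1.0 (strong), from participation history across studies

def data_trust_score(signals: RespondentSignals,
                     weights=(0.4, 0.35, 0.25)) -> int:
    """Blend the three categories into a single 0-1000 score (illustrative weights)."""
    w_device, w_survey, w_history = weights
    blended = (
        w_device * (1.0 - signals.device_risk)   # invert risk so higher always means more trust
        + w_survey * signals.in_survey_quality
        + w_history * signals.history_quality
    )
    return round(blended * 1000)

# Example: a technically clean device with weak in-survey behavior still scores low.
print(data_trust_score(RespondentSignals(device_risk=0.05, in_survey_quality=0.2, history_quality=0.3)))
```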

Why combined signals improve accuracy
Observed behavior shows that respondents with clean devices can still produce poor-quality data, for example through consistent speeding, illogical response patterns or repeated low-quality open ends. These behaviors affect measures such as purchase intent, preference and stated demand even when technical fraud checks are passed, especially when teams have limited time, data or context to detect issues before results are finalized. Because the score draws on in-survey behavior and participation history as well as device signals, respondents who appear technically clean but consistently disengage can be identified earlier, before their responses influence results or require post-survey removal.
Modeling conducted by DQC shows that the combined Data Trust Score is more than twice as predictive of quality-related removals as device-level fraud indicators alone. Predictive accuracy improves as respondents are observed across additional projects, allowing the score to adjust based on cumulative evidence rather than a single interaction.
This approach reduces reliance on isolated checks and supports earlier, more consistent quality decisions. The score improves as more participants and projects contribute observed behavior, creating a shared feedback loop that benefits all organizations using the system rather than optimizing quality in isolation.
Using personas to interpret the score
To make the Data Trust Score easy to interpret, each respondent record is associated with a persona based on observed patterns across device history, in-survey behavior and participation history. These personas turn complex quality signals into clear, recognizable patterns and help teams quickly understand whether trust is being gained or lost as a score moves up or down.
The personas are defined as:
- The Gold Standard: Respondents with consistently clean device history and consistently good in-survey behavior, reflecting sustained, high trust over time.
- Newbies: Respondents with a clean or perfect device score but little or no prior survey participation history. Their in-survey behavior is not yet established, resulting in limited behavioral context and a neutral starting trust position.
- Incognito Operators: Respondents associated with consistently suspicious or fraudulent device signals, regardless of in-survey behavior, indicating declining or low trust.
- Keyboard Mashers: Respondents with a history of consistently poor or suspicious in-survey behavior, such as speeding, illogical responses or low-quality open ends, even when device signals appear clean, reflecting low trust.
For respondents with insufficient history to assign a persona, we share whether they are Gaining Trust or Losing Trust based on their observed behavior.
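The decision logic that maps observed patterns to these personas is DQC's own; the sketch below is a simplified illustration of how such an assignment could work. The boolean inputs, the minimum-history cutoff and the trend measure are assumptions made for the example.

```python
# Illustrative persona assignment based on the patterns described above.
# The inputs, thresholds and minimum-history cutoff are assumptions, not DQC's rules.

def assign_persona(device_clean: bool,
                   in_survey_good: bool,
                   completed_surveys: int,
                   score_trend: float) -> str:
    """Map observed patterns to the personas used to interpret the Data Trust Score."""
    if not device_clean:
        return "Incognito Operator"    # suspicious device signals, regardless of in-survey behavior
    if completed_surveys == 0:
        return "Newbie"                # clean device, but no behavioral history yet
    if completed_surveys >= 3:         # assumed minimum history before assigning a behavioral persona
        return "The Gold Standard" if in_survey_good else "Keyboard Masher"
    # Too little history for a persona: report the trust direction instead.
    return "Gaining Trust" if score_trend >= 0 else "Losing Trust"

print(assign_persona(device_clean=True, in_survey_good=False,
                     completed_surveys=5, score_trend=-0.1))  # -> Keyboard Masher
```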
These personas help teams move from abstract quality rules to concrete decisions about who to include, route or exclude in different research contexts, applying the Data Trust Score with greater precision. They support clearer threshold setting, improve communication across internal teams and help explain quality decisions to stakeholders using a shared vocabulary grounded in observed behavior.
Because personas are derived from aggregated, observed behavior across the ecosystem, they support more consistent interpretation of quality across buyers, suppliers and platforms. This shared vocabulary reduces ambiguity, aligns expectations and makes quality outcomes easier to compare and improve over time.

Applying the Data Trust Score in practice
The Data Trust Score can be used to support several operational decisions, including:
- Screening and routing respondents during fieldwork
- Comparing quality performance across suppliers
- Reducing time spent on post-survey data cleaning
- Documenting and communicating quality standards internally and externally
For teams running frequent or high-stakes studies, this shifts quality management earlier in the process, reducing downstream rework and uncertainty.
Because the score is calculated using shared infrastructure, it reflects benchmarks grounded in data from real, engaged participants rather than project-specific assumptions.
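As a concrete illustration of the screening and routing use case listed above, the sketch below applies score thresholds before a respondent enters a survey. The specific thresholds and routing tiers are assumptions chosen for the example; in practice teams would calibrate them to the stakes of the study and their own risk tolerance.

```python
# Illustrative fieldwork routing rule using the Data Trust Score.
# Threshold values below are assumptions for the sketch, not recommended settings.

def route_respondent(dts: int, high_stakes: bool) -> str:
    """Decide how to handle an incoming respondent before they enter the survey."""
    block_below = 300
    review_below = 700 if high_stakes else 550   # stricter bar for high-stakes studies
    if dts < block_below:
        return "exclude"                 # do not field; review supplier if the pattern repeats
    if dts < review_below:
        return "route_to_extra_checks"   # allow in, but add attention and logic checks
    return "admit"

print(route_respondent(dts=640, high_stakes=True))   # -> route_to_extra_checks
```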
Built on shared quality infrastructure
The Data Trust Score operates within a shared quality infrastructure: participants contribute the quality signals generated through their projects and, in return, receive access to stronger benchmarks, longitudinal insight and continuously improving quality metrics.
As more data flows through the system, the score becomes more informative across the ecosystem. This structure supports continuous quality improvement by making patterns visible and measurable across organizations, rather than requiring each team to solve the same quality questions independently.
Access and availability
The Data Trust Score is available through the DQC Quality Tool and via API. It can be viewed alongside device-level indicators, supplier benchmarks and trend reporting within the DQC dashboard.
These tools provide a practical way to assess respondent trustworthiness, manage quality earlier in the research process and document quality outcomes with a shared, transparent metric, creating a measurable foundation for respondent quality that scales across teams, suppliers and studies.
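For teams integrating via the API, the snippet below shows the general shape of a score lookup. The base URL, endpoint path, parameters and response fields are placeholders, not DQC's documented interface; refer to DQC's API documentation for the actual endpoints and authentication.

```python
# Hypothetical example of retrieving a Data Trust Score via the DQC API.
# The URL, path and response fields are placeholders, not the documented interface.

import requests

API_BASE = "https://api.example-dqc.com/v1"   # placeholder base URL

def fetch_trust_score(respondent_id: str, api_key: str) -> dict:
    response = requests.get(
        f"{API_BASE}/trust-scores/{respondent_id}",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()   # e.g. {"score": 812, "persona": "The Gold Standard"}
```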