
Rebuilding trust in research: A fresh look at data quality
In an Insights Association webinar, industry leaders from Sago, SampleCon, and Data Quality Co-op discussed how transparency, empathy, and shared accountability can rebuild trust in research. They agreed that most data is good; the challenge is learning from the exceptions without letting them define the whole picture. Quality improves when the industry collaborates, measures openly, and treats respondents as people, not commodities.
In a recent Insights Association webinar, “Rebuilding Trust in Research: A Fresh Look at Data Quality,” industry leaders Rob Berger of Sago, Rachel Alltmont of Rep Data and SampleCon, and Data Quality Co-op’s own CEO and co-founder, Bob Fawson, came together to discuss one of the industry’s most pressing and often misunderstood topics: how to strengthen trust in research while tackling today’s data quality challenges.
Shifting the narrative
The session began with an important mindset shift: most data in market research is good. While fraud and inattentive responses make headlines, the majority of respondents are qualified and engaged. “We’ve fallen into a trap of focusing on what’s wrong instead of what’s right,” said Rachel. “We need to stop admiring the data quality problem and move to action.”
Bob echoed that sentiment, noting that “really great insights are being generated all across our ecosystem every day.” The issue, he said, isn’t that research is broken—it’s that the process of producing quality data has become more complex. As Rob put it, “We’re running thousands of surveys a month, and 98% to 99% of what’s delivered is good data. The challenge is learning from the exceptions without letting them define the whole picture.”
Evolution, not collapse
Bob emphasized that the industry is not in crisis; it is evolving, a theme he explores further in a recent letter to the industry. During the webinar, he compared the rise of programmatic sampling to the earlier transition from call centers to online panels, describing it as another major step in the industry’s evolution. “There’s no way to rewind the clock,” he said. “Programmatic and AI-driven sampling have made data more accessible and cost-effective—but also more complex. This is evolution, not brokenness.”
Rachel agreed that change is messy but necessary: “Wonderful things often come out of moments of discomfort. We need to own who we are as an industry and be transparent about how we work.”
Transparency as the foundation of trust
All three panelists agreed that transparency—both with clients and within the supply chain—is the cornerstone of rebuilding trust. “You can’t fix what you can’t measure, and you can’t measure what you can’t see,” said Bob. “The fact that it’s hard to talk about quality is a symptom of the lack of common understanding and shared benchmarks.”
Rob noted that being upfront about where sample comes from, how respondents are recruited, and what incentives are used helps clients make more informed trade-offs. “Ninety-nine times out of a hundred, transparency strengthens the client relationship,” he said. “It allows everyone to improve together.”
Respondents are people, not commodities
The panel also emphasized the human side of the equation. “We’ve forgotten that the people taking our surveys are people,” said Rachel. “They’re our most important customers, and we have to treat them that way.” Poor design, excessive screeners, and redundant quality checks create frustration and risk driving away genuine respondents.
Bob shared that he occasionally takes surveys himself to understand the experience. “I went through four screeners the other day, each with its own CAPTCHA and open-end check. It was a 15-minute exercise just to prove I wasn’t a bot.” He argued that empathy—both for respondents and for partners in the supply chain—must be part of any quality discussion.
Collaboration and shared accountability
A key takeaway from the session was that data quality is a shared responsibility. Rob pointed out that matching the right type of sample to the right type of study is increasingly critical as AI and hybrid methods gain traction. Rachel noted that the industry must get better at sharing what works to strengthen collective knowledge.
The panelists also endorsed ongoing benchmarking efforts by the Insights Association and the Global Data Quality initiative. “We need industry-level feedback loops—something like a credit score for suppliers and respondents,” said Bob. “We can’t improve what we don’t measure, and we can’t measure what we don’t share.”
Listen to the full webinar replay, “Rebuilding Trust in Research: A Fresh Look at Data Quality,” featuring Rob Berger of Sago, Rachel Alltmont of Rep Data and SampleCon, and Bob Fawson of Data Quality Co-op here: https://us02web.zoom.us/webinar/register/WN_-B28Ub2tRqeC_KpKazX6aQ#/registration