Is online research valid? Could it be even more valid than other traditional techniques? Max Kalehoff takes a look.
The answer to that question lies partly in Nate Silver’s analysis in The New York Times of the accuracy of dozens of major polls predicting the outcome of the last presidential election. He put the most popular survey methods under the microscope: live surveys via telephone; live surveys via mobile phone; and online surveys.
This passage sums up the findings: “Among the nine polling firms that conducted their polls wholly or partially online, the average error in calling the election result was 2.1 percentage points. That compares with a 3.5-point error for polling firms that used live telephone interviewers, and 5.0 points for ‘robopolls’ that conducted their surveys by automated script. The traditional telephone polls had a slight Republican bias on the whole, while the robopolls often had a significant Republican bias. The online polls had little overall bias, however.”
How could this happen?
The market research industry has never been fast to innovate. That’s perhaps partly due to its loyalty to what’s “worked” in the past, and partly to a defensive reaction to new and unsanctioned alternatives. Whatever the case, online research pioneers have had to overcome serious scepticism over the past 15 years, and still often do.
Today, we live in a digital world: online connectedness is the norm, while reaching panel respondents by telephone is the disruption. Obama’s supporters reflected that norm, which is why they were so underrepresented in telephone-based predictions. This analysis is not the first proof of online research validity, but it further forces sceptics to reconsider.
These findings prompt key questions as we look to the future of market research.
First, what becomes of the future of panel selection? The ivory tower of online market research has often hung its hat on random digit dial (RDD) sampling methods, where blocks of telephone numbers are randomly called to create a representative research panel. The findings from this analysis suggest we need to consider the evolution of RDD methods.
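As a rough illustration of the RDD method described above, here is a minimal sketch in Python. The function name, the area codes, and the seven-digit local-number format are illustrative assumptions, not a description of any firm’s actual sampling system; real RDD implementations also weight exchanges, screen out business blocks, and handle non-response.

```python
import random

def rdd_sample(area_codes, n, seed=None):
    """Draw n distinct telephone numbers via random digit dialing (RDD).

    Each number pairs a known area code with a randomly generated
    seven-digit local number, so unlisted numbers have the same
    chance of selection as listed ones. (Illustrative sketch only.)
    """
    rng = random.Random(seed)
    sample = set()
    while len(sample) < n:
        area = rng.choice(area_codes)
        local = rng.randrange(10_000_000)  # 0000000 through 9999999
        sample.add(f"{area}-{local:07d}")
    return sorted(sample)

# Hypothetical usage: draw five numbers from three example area codes.
numbers = rdd_sample(["212", "312", "415"], n=5, seed=42)
print(numbers)
```

The key property, for survey purposes, is that selection probability does not depend on whether a number appears in any directory, which is what historically made RDD a defensible route to a representative frame.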
Second, what do we actually mean by online polling? Telephone or mail polling means something specific: asking survey questions via those channels. Online is not a single channel, but a platform for many channels. With online, polling could take place via Skype, Facebook, desktop, mobile, browser, app, email, video or audio.
I presume the most popular method today involves email, where existing or prospective panelists are solicited, qualified and presented with a hosted online form. Regardless, the channel matters, as do the questions. (I would also argue that Facebook — with its one billion users, high engagement and rich profile data — holds the most valuable panel and tools with which to poll and estimate outcomes.)
Third, polls and surveys typically rely on self-reported data. Self-reported survey data will always have a place in helping marketers, politicians and academics understand their world and form intelligence. But the advent of online analytics, passive behavioral analysis and (pardon the buzzword) big data is enabling promising new methods for predicting outcomes. What about prediction markets? Some were right on the money.
So here is my prediction: Demographics and psychographics will continue to shift, while new technologies and growing data sources will continue to disrupt how we communicate and observe. A competitive industry will persist, built around out-predicting everyone else. However, the industry’s leaders in eight years (two presidential election cycles) will probably look radically different than they do today.
This post is republished with the kind permission of MediaPost.com.