My colleague Dom Twose, Millward Brown’s Global Head of Knowledge Management, has just had an interesting and potentially confrontational paper, ‘How not to assess advertising effects’, published in Volume 57, Issue 3 of the International Journal of Market Research. In the paper he critiques a widely used method of detecting advertising effects. The following is a short interview with Twose exploring the ideas put forward in his paper. By Nigel Hollis.
Why did you write this paper?
Rosser Reeves’ book ‘Reality in Advertising’ came out in the ’60s; in it he explained his method of comparing ad-aware and non-ad-aware groups, which has since become known as the ‘Rosser Reeves Fallacy’. I suspect people began to criticise it very quickly, since it is clearly flawed to anyone who spends five minutes thinking about it. It has been criticised regularly since then, yet variants of it keep cropping up. I don’t suppose my article will kill it off, but hopefully it will stop a few people using it.
So what exactly is the Fallacy in the Rosser Reeves Fallacy?
The idea behind the analysis is to take the views of those who say they have seen the advertising (whether through claimed ad awareness or recognition), compare them with the views of those who do not remember seeing it, and attribute the difference to the advertising. The trouble is that people are not good at remembering whether they have seen an ad, and one of the big influences on their claims is closeness to the brand: those who are ‘close’ to a brand (through current usage, past usage, usage by friends and family, and so on) are more likely to claim recognition. So there is likely to be a bias, and it is likely to overwhelm any genuine advertising shifts. Nor is it something that can be stripped out by, for example, excluding users.
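To make the selection bias concrete, here is a minimal simulation sketch in Python. It is not taken from Twose’s paper, and every number in it is an illustrative assumption; the point is that when closeness to the brand drives both favourable views and claimed ad recognition, the aware/not-aware comparison produces a large apparent ‘effect’ even though the simulated ad does nothing at all.

```python
import random

random.seed(42)

N = 100_000
rows = []
for _ in range(N):
    # Illustrative assumption: 30% of people are 'close' to the brand
    # (users, past users, friends and family use it).
    close = random.random() < 0.30
    # Closeness drives favourable views of the brand...
    favourable = random.random() < (0.60 if close else 0.20)
    # ...and, independently of any advertising, it also drives
    # claimed ad recognition. The ad itself has ZERO effect here.
    claims_recognition = random.random() < (0.50 if close else 0.15)
    rows.append((claims_recognition, favourable))

def favourability(group):
    return sum(fav for _, fav in group) / len(group)

aware = [r for r in rows if r[0]]
unaware = [r for r in rows if not r[0]]
print(f"'Ad aware'  favourability: {favourability(aware):.1%}")
print(f"'Not aware' favourability: {favourability(unaware):.1%}")
```

Running this gives roughly 43% favourability among the ‘ad aware’ and 28% among the ‘not aware’: a 15-point gap generated entirely by who claims recognition, not by any advertising.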
Why do you think the Rosser Reeves Fallacy continues to be ignored?
Because the comparison has a superficial appeal. It makes sense to compare the views of those exposed to advertising with the views of those not exposed. That is what we can now do with digital advertising, thanks to cookies, but with digital we match the exposed and control samples on many different characteristics. So the idea is valid; it is the execution using claimed ad awareness and recognition that is flawed. Many researchers stop thinking too soon.
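To illustrate why that matching matters, here is a small sketch of the matched-samples idea. The records, field names, and matching variables below are hypothetical; a real digital study would derive exposure from ad-server or cookie data and match on many more characteristics, but the principle is the same: compare exposed and control respondents within strata that share the same pre-existing profile.

```python
from collections import defaultdict

# Hypothetical records: (exposed, age_band, heavy_user, favourable).
# In a real digital study exposure comes from cookie/ad-server data,
# not from claimed recall.
people = [
    (True,  "18-34", True,  True),
    (True,  "18-34", False, False),
    (True,  "35-54", True,  True),
    (False, "18-34", True,  True),
    (False, "18-34", False, False),
    (False, "35-54", True,  False),
    # ...many more records in practice
]

# Stratify on the matching characteristics, then compare exposed vs
# control *within* each stratum, so pre-existing differences such as
# brand usage cannot masquerade as an advertising effect.
strata = defaultdict(lambda: {"exp": [], "ctl": []})
for exposed, age, heavy, fav in people:
    strata[(age, heavy)]["exp" if exposed else "ctl"].append(fav)

for key, g in sorted(strata.items()):
    if g["exp"] and g["ctl"]:
        lift = (sum(g["exp"]) / len(g["exp"])
                - sum(g["ctl"]) / len(g["ctl"]))
        print(f"stratum {key}: lift = {lift:+.1%}")
```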
What is the solution? How do you get a clean read?
The problem is that, as campaigns become more complex, the research needed to measure them becomes more complex. Millward Brown’s CrossMedia solution is the best way I know of to tease out the effects of different media in a campaign, because it uses estimated exposure and models out the influence of pre-existing differences, separating them from ongoing advertising effects. However, this additional complexity does add cost. When clients are trying to make efficiencies, that is a problem; and for some of the smaller pieces of activity, the ratio of research cost to marketing cost goes out of kilter.
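The CrossMedia model itself is proprietary, so the following is only a generic sketch, on synthetic data, of the principle described above: estimating an advertising effect while modelling out a pre-existing difference that drives both exposure and the outcome. All names and numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Synthetic data: 'closeness' to the brand raises both the outcome
# (e.g. a brand measure) and the chance of ad exposure, and the ad
# adds a small true effect of 0.10 on top.
closeness = rng.normal(size=n)
exposed = (rng.random(n) < 1 / (1 + np.exp(-closeness))).astype(float)
outcome = 0.5 * closeness + 0.10 * exposed + rng.normal(size=n)

# Naive read: difference in means between exposed and unexposed.
naive = outcome[exposed == 1].mean() - outcome[exposed == 0].mean()

# Modelled read: control for the pre-existing difference via least
# squares, so the exposure coefficient isolates the advertising effect.
X = np.column_stack([np.ones(n), exposed, closeness])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"naive estimate:    {naive:.3f}")    # inflated by selection
print(f"modelled estimate: {coef[1]:.3f}")  # close to the true 0.10
```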
Is there an alternative, cheaper solution?
Not if you want to measure the effects accurately. But I think it is valuable to reflect on what we already know about what works in individual media. Millward Brown, with its 40 years of accumulated experience, has a responsibility to share that knowledge, and that is one of the reasons I think our series of Knowledge Points is so valuable.