This article was provoked by a report on The Media Online website about a paper presented by Clare Bowen, titled ‘Reappraising radio’s role for advertisers’.
The provocation arose from the number of times that the multiplier effect of additional media (in her case, radio) in a media campaign was mentioned. What intrigued me is that there’s no report quantifying this effect. This has prompted me to put a case for doing exactly that with Ad-Audit.
Ad-Audit emerged from work done by Marketing Science in the ’90s, working with Berry Bush Di Bella and de Villiers, and their clients. It’s based on four thoughts, or components:
- Advertising research should measure strategy as well as creative.
- To do so, the methodology must be supported by strong theoretical foundations.
- A different, and new, measurement system that provides both internal and external validity is necessary.
- The measurement methodology must be both sensitive and reliable.
In my opinion, the work done in the 1990s, and more recently over the past year, suggests that Ad-Audit could well be used in the media planning context as well, to address some of the issues that Bowen referred to.
In the introduction to the report, Michael Bratt refers to radio (and I dare say that applies to all but TV), as an afterthought in the media planning and buying process. I’m sure it’s an opinion shared by many, but where is the evidence to support this behaviour?
Strategy and creative
I would argue that a large part of the answer to this question is that no measurement procedures exist to determine the strategic value of supplementing the creative idea in supplementary media. Moreover, why shouldn’t TV be used to supplement creative in ‘minor’ media, if the strategy requires it? Ad-Audit is designed to measure both strategic and executional efficiency for each execution.
Predictive models
Advertising cannot force people to buy; it must throw punches at the mind, using arguments. These arguments ultimately talk to the mind about attributes of the brand. The communication strategy is thus encapsulated in a small bundle of attributes (three to six). Simplistically, strategic efficiency is the extent to which the advertisement hits the spot that strategy determines. The underlying model is that people whose minds buy into the argument(s) will be more likely to take positive action.
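To make the idea concrete, here is a minimal sketch of strategic efficiency as the match between a target attribute bundle and an execution’s measured attribute profile. The attribute names, the weights, and the matching metric are all my own illustrative assumptions, not Ad-Audit’s actual calculation.

```python
# Hypothetical sketch: strategic efficiency as the degree to which an
# execution's measured attribute profile matches the strategy's target
# attribute bundle. All names and numbers are invented for illustration.
strategy_bundle = {"trust": 0.4, "value": 0.35, "fun": 0.25}  # target weights
ad_scores = {"trust": 0.5, "value": 0.2, "fun": 0.3}          # measured profile

# One simple matching metric: 1 minus half the total absolute deviation,
# so a perfect match scores 1.0 and total mismatch scores 0.0.
efficiency = 1 - sum(abs(strategy_bundle[a] - ad_scores[a])
                     for a in strategy_bundle) / 2
print(round(efficiency, 2))  # -> 0.85
```

Any distance measure between the two profiles would serve the same illustrative purpose; the point is only that strategy becomes measurable once it is expressed as a small attribute bundle.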
In relation to executional efficiency, we’ve known for 20 years or more that attitude to the advertisement predicts the effect it’s likely to have. Hence the importance of ‘Liking’. But what predicts liking? Just as our strategic predictive model is operationalised into attribute measurement, so Liking needs to be decomposed into a parsimonious set of ideas. We have developed six dimensions and measure each advertisement on the extent to which it delivers on each of them. The jolt to my brain that initiated this article was the following words from Bowen’s discussion: ‘increased levels of happiness while listening’ and ‘helps a brand build an emotive connection’. One of our six components measures exactly this idea that the recipient of advertising should have some fun!
Measurement procedure
Over the years, it has emerged very clearly that Thurstone scales (ratings) are very blunt instruments and are not based on any theory of human thinking. In the ’90s, more use was made of trade-off measurement procedures such as conjoint analysis. These are based on the notion that we can’t have everything, and must reveal our choice structures by accepting less of one attribute in order to have more of another. In Ad-Audit, we use binary comparisons of options in a specific context, and an affective, as opposed to cognitive, metric of choice. External validity is provided by comparing the calculated strategic importance of each attribute against existing research.
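One simple way to picture how binary comparisons can yield attribute importance is to tally each attribute’s win rate across the pairs a respondent was shown. This is a deliberately naive sketch (production systems use more sophisticated choice models), and all attribute names and choice data are hypothetical.

```python
# Naive sketch: attribute importance from binary (pairwise) choices.
# Each choice records (winner, loser) for one presented pair.
from collections import Counter

attributes = ["value", "trust", "fun"]  # hypothetical attribute bundle
choices = [
    ("trust", "value"), ("trust", "fun"), ("value", "fun"),
    ("trust", "value"), ("fun", "value"), ("trust", "fun"),
]

wins = Counter(w for w, _ in choices)
appearances = Counter()
for w, l in choices:
    appearances[w] += 1
    appearances[l] += 1

# Win rate per attribute: wins divided by times it appeared in a pair.
importance = {a: wins[a] / appearances[a] for a in attributes}
print(importance)  # trust wins every pairing here, so it scores 1.0
```

In practice the calculated importances would then be compared against existing research, which is the external-validity check the text describes.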
Sensitivity and reliability
Sensitivity is a function of scalability: the more complex the project or campaign, the more scalable and flexible the methodology must be to withstand its challenges. To date, no large campaign has been evaluated using Ad-Audit, but it is theoretically capable of being used far earlier in campaign evolution, with three or four replications. Because Ad-Audit uses binary comparisons of a full factorial design, it must be run on a sophisticated system, with the respondent interface being a digital device, from desktop to mobile. This brings huge speed and cost advantages. By setting up a community, the added advantages of validation, interrogation and co-creation are available.
However, the biggest advantage of all is that the measurement methodology enables us to identify those who are answering inconsistently, and to base results only on those people’s judgments that are logically consistent. This is extremely important and adds dramatically to internal validity and reliability.
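One standard way such inconsistency can be detected in pairwise data is by counting circular triads (A beats B, B beats C, yet C beats A). The sketch below is my own illustration of that idea, not Ad-Audit’s actual filter; the data are hypothetical.

```python
# Sketch: flagging logically inconsistent respondents by counting
# circular triads in their pairwise choices.
from itertools import permutations

def circular_triads(choices):
    """choices: set of (winner, loser) pairs from one respondent."""
    beats = set(choices)
    items = {x for pair in beats for x in pair}
    count = 0
    for a, b, c in permutations(items, 3):
        if (a, b) in beats and (b, c) in beats and (c, a) in beats:
            count += 1
    return count // 3  # each cycle is found under 3 rotations

consistent = {("A", "B"), ("B", "C"), ("A", "C")}     # transitive
inconsistent = {("A", "B"), ("B", "C"), ("C", "A")}   # one cycle
print(circular_triads(consistent), circular_triads(inconsistent))  # 0 1
```

A respondent whose triad count exceeds some threshold would be excluded, so that results rest only on logically consistent judgments.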
To sum up, Ad-Audit harnesses the latest theories on how advertising works, neuroscientific measurement and digital technology to bring a powerful research tool that helps strategists, planners and marketers understand how to create more effective advertising.
Mike W Broom is CEO of Marketing Science & Panel Services Africa