I am unashamedly impressed by the achievements of the Broadcast Research Council (BRC) since its establishment, and the rigour with which it applies the mantra of ‘review, examine and refine’. When I heard that such scrutiny was being applied to the new Socio-Economic Measure (SEM), I was keen to learn the outcome of the review.
A couple of recaps are in order here. Firstly, we need to recall that the original segmentation system was developed, using the first couple of months of Establishment Survey (ES) data, in answer to a specific brief. The second point to recall is that the ES and, indeed, the development of the new measure are products of a BRC and PRC (Publishers Research Council) funded collaboration.
The intention had always been to review and, hopefully, validate the SEM system once there was a robust dataset. Kantar TNS was tasked with the exercise, based on the full 12-month 2017 dataset of 25 136 respondents. The first step in the review process was to re-run the correspondence analysis on the original 14 variables, on this solid dataset. Once again, this resulted in the “desired horseshoe curve” map, which had made Neil Higgs “dizzy with delight” when the original analysis was carried out. The percentages of variance explained and the inertia scores were virtually identical to the initial results.
Not content to treat this alone as proof that the model was robust, stable and effective in measuring the local socio-economic continuum, the Kantar TNS team then set about trying to reconstruct the SEM from scratch. Using the same process as in 2016, but looking at all the data afresh, with new possible combinations, the team tried to beat the initial results. They found they were unable to do so. In fact, almost double the number of input variables was required to explain less differentiation. Once again, this demonstrated the efficacy of the original SEM system.
The Kantar TNS team also explored some variations on the existing variables. The first of these was to add ‘type of shelter’ into the variable mix. As this tends to be a good indicator of how people live, it seemed a sensible enhancement. But this actually reduced the variance explained significantly. The likely explanation is that the definition of a ‘formal house’ currently combines ‘free-standing’ and ‘townhouse’, which limits differentiation on this variable. These categories will now be split out.
The next variable the team explored was the ‘deep freezer’, because it was the only original SEM variable which did not appear in the new system when SEM was re-constructed from scratch. They tried replacing it with another durable, which made a similar contribution to Dimension 1, namely ‘dishwashing machine’. Initially the results of the correspondence analysis looked promising, but closer investigation of the scorecard, distribution of scores and profiles showed it to be an inferior substitute.
The deep freezer variable
The ‘deep freezer’ variable has been the subject of much debate, as many people believed the original description of a “deep freezer which is free standing” to be outdated. After the initial SEM had been developed, a ‘side-by-side fridge and freezer’ question was added to the ES questionnaire as a future-proofing measure. Having discovered that there was overlap in the responses for these two variables, the decision was taken to combine the two, which necessitated minor adjustments to the scoring system. In short, the feedback was a conclusive endorsement of the original approach.
With many users feeling somewhat uncertain about transitioning to the SEM system, the next task the Kantar TNS team tackled was to find out whether there were any natural segments, instead of the 10 bands that were initially launched. They employed a number of intimidating-sounding statistical techniques on the SEM data, which did not reveal meaningful cut points (yet again confirming that the SEM measures a genuine continuum).
This necessitated taking a judgmental view, based on profiles, to arrive at the recommendations. These were for a three- and a five-supergroup solution, based on where changes in lifestyle can be noted from segment profiling. Demographic profile and media reach slides certainly supported this approach.
The transition from LSM to SEM
Of course, the biggest adoption challenge for most users is how to transition from LSM to SEM, given that the two measurement systems are based on different inputs and distribute the population differently. There can be no simple and direct comparison, but the Kantar TNS team provided broadly comparable groups, which could serve as a loose guide. There was a caveat, though: there will always be some overlap. The reality is that data users need to start building their SEM benchmarks whilst they are running out their LSM commitments.
I left the presentation feeling thoroughly reassured that not only was the SEM system sound, but that sensible steps had been made to facilitate its usage. So, it was with some disbelief that I read the Marketing Research Foundation’s promise that its “vital research” (as yet unfunded) will “ensure that the LSM vs. SEM problem is resolved in a manner acceptable to all players”.
For years, commentators in the media industry pointed to the shortcomings of LSMs. Now there is a new, concise system, which has been validated, and which represents the realities of our unequal country. The matter seems pretty well resolved.
It is a fallacy to believe it possible to have LSMs that are the same, but better. Once they are better, they will not bear much resemblance to the previous measurements. A restart would be inevitable anyway, so it seems sensible to do the restart with the improved and tested system. It is understandable that people desire continuity, but the media world has changed irrevocably since LSMs were first conceived. Sentiment should not prevent marketers and media professionals from moving to a better system.
The MRF and MAPs research
This was not the only cause for concern I saw in the MRF’s slightly overwrought invitation to its update on its intended MAPs research. With a deadline of 15 June to achieve the minimum viable number of subscribers to give the go-ahead on the research, there is a touch of desperation in declaring “… if we fail by 15 June all the effort to date will have come to nought. The black hole will prevail!”
The industry has not fallen into an abyss. Media planners have better currency research across TV, radio, print and outdoor than previously, thanks to the commitment and hard work of the media owners. What media planners do not have is product and brand data – that is “the void left by the demise of AMPS”. AMPS was never the ideal source of “deep insights into consumer behaviour”. If the MRF can “create consumer-centric research to aid the understanding of the customer journey by being able to track and understand daily consumer behaviour, decision making and consumption” that would be a brilliant bonus.
But the brands and products research, and the insights into the consumer journey, need to complement the existing currency data. Sense needs to prevail. This is a small industry and there is limited budget for the necessary research. The vision of a hub model built around an ES, presented by Kuper Research in the 2013 SAARF future-proofing exercise, remains the best template. Rather than any buttons being pushed by the MRF, I would rather see some dispassionate reflection, the stifling of sentimentality and suspicion, and a move to pragmatic rapprochement.
(Note: If you want to catch up on the detail of this feedback, you should attend the BRC RAM May release as it will be presented there. The dates are 28 May in Johannesburg and 30 May in Cape Town).
Having spent some decades working in the media agencies, Britta Reid now relishes the opportunity to take an independent perspective on the South African media world, especially during this time of radical research transformation.