It seems that Saarf (the South African Audience Research Foundation) released the March 2016 RAMS (Radio Audience Measurement Survey) data fairly quietly, without the customary industry presentations. It simply posted the release report on its website.
As this is the last radio survey that will be produced under the auspices of Saarf, one might have expected more ceremony; however, there may have been good reasons for this decision. The release date coincided with the plethora of March public holidays (and school breaks), so many members of the industry would have been on leave. With Saarf in the process of re-visioning its role and future, any possible cost saving would have been prudent. Then, of course, there is also the long shadow cast by the disputed November RAMS release.
Readers may remember that the November RAMS data showed some significant and distinctly disturbing audience losses in three provinces, when compared with the September 2015 reports. Specifically, the past-7-day audience declined from 71.0% to 66.1% in the Western Cape, from 80.3% to 77.1% in the Eastern Cape, and from 89.7% to 81.0% in Limpopo.
Unsurprisingly, the broadcasters reacted with disbelief at these results. Nielsen insisted the data reflected real listenership losses, and that these were not the consequence of any oddity in the execution of the research. The release of the data was delayed.
The situation was an extraordinary one. Whilst the broadcasters were significant funders of RAMS 2015, they were no longer members of Saarf, the organisation that had contracted Nielsen to run the survey. The Broadcast Research Council (BRC) disputed the data, raising a substantial list of possible causes of the anomalies, but ultimately it was up to Saarf to decide whether the data was valid and was to be released.
Summary of issues
This was done in mid-December – the release notice stated, “some of these changes may be attributed to the introduction of a fresh rural sample, covering January to June 2015 fieldwork. Similar changes have been seen in previous RAMS releases where new rural field periods are brought in for the first time… The data has now been fully validated and cleared for release on the 15 December 2015”.
The March 2016 release presentation bravely revisits a summary of the issues around the November data. The investigations were nothing if not diligent. A report of over 150 pages was prepared by Nielsen in response to the BRC queries. This has also been posted on the Saarf website, and is a useful supplement to the summary.
All sampling methodology and fieldwork processes were double-checked by the local Nielsen RAMS team. Issues such as the likely impact of field worker churn, the day and time of diary placement, and respondent substitution were all carefully considered. Nielsen also conducted a telephonic back-check with a random sample of diary-keepers in the affected areas who had claimed not to listen to the radio during the diary week; 99% confirmed that they had filled in the diary and had not listened during the relevant fieldwork week.
The big guns, the Nielsen Global Data Science Team, based in the US, were brought in to conduct independent verification checks. The Lead Data Scientist concluded that the team did “not observe any evidence to indicate that the Wave 3 radio audience estimates were inaccurate”.
In defence of the results, Nielsen demonstrated that total 7-day and Mon-Fri incidence levels were comparable to the 15 September release. Saturday was significantly down, counterbalanced by growth on Sundays. It argued that the impact was two-directional: while urban listening grew, rural listening declined. Furthermore, it argued that quarter-hour listening levels were “very comparable in total, and for most provinces”. Limpopo day-curves were lower across the day, but maintained the same “day shape”. In addition, the number of stations per week and demographic profiles of listeners were “very comparable” across all provinces.
Ultimately the explanation, given by Nielsen in the March release summary, is a combination of factors. Seasonality, or perhaps timing, played a role. The rural sample introduced in the November report covered Jan-June 2015. (The previous rural sample ran over July-Dec 2014.)
Another factor, and one that Nielsen had proffered at the initial presentation of the data, was that it had been an atypically eventful fieldwork period, both politically and weather-related, in all three provinces. Thus, it argued, listening patterns were disrupted and altered, with some degree of complete radio switch-off occurring. A number of large, high loyalty stations declined, resulting in a smaller pool of listeners.
To my mind, the “atypical” events in the fieldwork period could arguably drive more listening, as people try to keep abreast of the events in the area. (Of course, days of dedicated firefighting or fervid activism, for example, could disrupt one’s listening habits; but for every active firefighter or protester in an area, there must have been a number of citizens concerned about the events and keen to hear updates.)
Another factor would appear to be the disjunction between the urban fieldwork, which ran from April to September, and the small-urban/rural fieldwork, which covered January to June 2015.
In one of the appendices to Nielsen’s response to the BRC queries, there is another factor not highlighted in the March release summary. This concerns the fact that the “rolling of data” needs to be carefully controlled.
As the Lead Data Scientist wrote, with predictable diplomacy: “However, if there were a real listening change in the rural population, then Wave 2 would not detect such a change due to the re-use of sample from the prior W1 wave. Thus, the sudden decrease occurring in the last wave can be explained by the proportion of rolled sample. Indeed, in Limpopo for example, 92% of the sample is new during the last wave. Consequently the ratings are less averaged. A possible area of improvement would be to keep the proportion of rolled sample as close as possible to 50% in order to preserve stability over time.”
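The stabilising effect of rolled sample can be illustrated with a toy simulation. This is a minimal sketch, not Nielsen’s actual weighting methodology: it assumes each wave’s published figure simply blends the retained (rolled) respondents’ prior estimate with a fresh random sample, so a smaller rolled share means noisier wave-on-wave movements. All function names and parameters here are hypothetical.

```python
import random
import statistics

def simulate_waves(rolled_share, true_incidence=0.85, n_respondents=400,
                   n_waves=40, seed=42):
    """Toy model of wave-on-wave ratings. Each wave, a fresh random
    sample is drawn; the published figure is a blend of the rolled
    (re-used) portion's prior estimate and the fresh sample's mean."""
    rng = random.Random(seed)
    published = true_incidence
    series = []
    for _ in range(n_waves):
        # Fresh respondents: Bernoulli draws around the true incidence.
        hits = sum(rng.random() < true_incidence for _ in range(n_respondents))
        fresh_mean = hits / n_respondents
        # Rolled respondents effectively repeat the prior wave's data,
        # damping the sampling noise from the fresh portion.
        published = rolled_share * published + (1 - rolled_share) * fresh_mean
        series.append(published)
    return series

def volatility(series):
    """Standard deviation of wave-on-wave changes."""
    changes = [b - a for a, b in zip(series, series[1:])]
    return statistics.stdev(changes)

# ~50% rolled (the suggested target) vs ~8% rolled (92% fresh, as in Limpopo).
smooth = volatility(simulate_waves(rolled_share=0.50))
choppy = volatility(simulate_waves(rolled_share=0.08))
print(f"wave-on-wave volatility: 50% rolled = {smooth:.4f}, 8% rolled = {choppy:.4f}")
```

Under this simplified model, the 92%-fresh configuration produces markedly larger wave-on-wave swings than a 50% rolled sample, which is consistent with the Data Scientist’s observation that “the ratings are less averaged” when most of the sample is new.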
This particular learning, together with the potential issues around the combining of asynchronous data, should not be forgotten as the industry moves to the new radio measurement survey, which the BRC will introduce in the latter half of 2016.
Britta Reid is an independent media consultant.