The November 2016 BRC RAM (Radio Audience Measurement) results were released recently. Having spent decades in the industry dutifully attending dry-as-dust research presentations, I was struck by the energy and zest with which Clare O’Neil, CEO of the BRC, and Gisela Seeley, client service director: media at Kantar TNS, presented the data. The double-handed format kept one’s attention and even allowed for some humour.
There is an objective in taking this approach, of course, as Clare explained when I caught up with her at the BRC offices a few days after the presentation. One of her missions is to make the BRC audience research feel more “accessible” and “easy to use”, encouraging stakeholders and users of the data to engage more readily with the surveys. (I hope the youngsters in the industry appreciate how fortunate they are in being spared the unremitting dourness of the traditional research releases.)
This desire to make the data accessible and to encourage user engagement also shaped the content of the presentation. After reminding the industry that this release represents the “adolescent” milestone in the building of the first year of data, Clare shared the outlines of the approach used to scrutinise the data prior to release. Educating the broader industry in how the data is rigorously assessed by Kantar TNS, the BRC and the Radio Research Committee is certainly an important step forward in building user confidence in the data.
Vital signs
The upfront examination of the “vital signs” serves to remind the industry that the sample is building well: almost 54 000 respondents have been interviewed and, most importantly, these have been consistently split across the metro, small urban and rural areas in proportion to the population.
As the data builds, more community stations will meet the benchmark of having a minimum sample of 40 respondents, which is the point at which it is feasible to use the listenership data with some confidence. In this latest release, 17 more community stations met the respondent criterion. Clare reminded me that the intention is to release an annual full year of community station data, to allow this sector to build up the most robust station data possible.
The concept of “four gates” through which the data must pass before being considered sound is a useful tool for the industry to understand. After the “sample gate”, the next challenge is the “weighting gate”, or weighting efficiency measure, which demonstrates how accurately the sample profile reflects the actual proportions in the population. Clare was rightly pleased with the fact that across all three quarters, this has been over 80%. This means that the exercise of weighting is one of balancing the sample, rather than trying to severely wrench an unrepresentative sample into the correct proportions.
Stability gate
The next test is the “stability gate”, and a large part of the presentation focused on showing how the January to September data compared with that of the first six months of the year. This section certainly reiterated the stability of the data across a variety of perspectives. The overall daily and weekly reach of the medium remains consistent, and, reassuringly, there are no inexplicable anomalies in provincial listening habits. The patterns of loyal, long listening, together with a resultant high proportion of heavy listeners, are constant across both data sets. Device and location listenership likewise show similar patterns across the six-month and nine-month periods.
The daily listening curves across the provinces again underscore the consistency of listening habits tracked by the survey. The remaining detail of the provincial snapshots (daily and weekly reach, together with device and location data) also speaks to a stable picture. Key individual station measures such as daily and weekly reach, exclusive listenership and time spent listening also reinforce the picture of stability.
The last checkpoint, or gate, for the data is that of individual station change, and the trends which will emerge as more surveys are released. This is the actual currency data, which will be released regularly on a quarterly basis, with six-month rolling samples. The BRC has also provided neat, easy-reference station dashboards which show the start of the new trends. Whilst the BRC ensures that the data collection process runs like clockwork through a tight KPI system and that no stone is left unturned in verifying the results, it leaves the interpretation and commercial application of the data to the broadcasters.
Clare and the Kantar TNS team certainly presented a strong case for the reliability and consistency of the data.
Britta Reid is an independent media consultant.