MUMBAI: The Broadcast Audience Research Council (BARC) is all set for 2015, with roadshows on its GUI (Graphical User Interface) planned for February in Mumbai, Delhi, Kolkata and Bengaluru.
The Council held its first round of roadshows in 2013, aimed at sharing the latest updates from BARC with constituents across the entire broadcast value chain and, equally important, at receiving feedback and suggestions so that the new television measurement system is completely robust, transparent and representative.
Welcoming the New Year, the Council thanked its stakeholders, vendors, partners and associates, and highlighted its achievements. With more than 275 channels having ordered embedders, all major networks in each region and across genres are now on board.
As it continues to reach out to stakeholders for feedback, its playout monitoring facilities are in action and meta-tagging of content across watermarked channels is in full swing in Mumbai and Bengaluru.
It has also tested end-to-end integration of the system, which is working as intended: the technology handshakes are in place and ratings are now being generated from the BARC system.
Continuing its effort to unravel the puzzle of the TV audience measurement system in India, BARC India shared a few learnings and insights on the importance of Relative Errors and Confidence Levels in audience measurement.
BARC India and the importance of Relative Error
Over the past few months, BARC India has highlighted its commitment to data robustness and has spoken about lower Relative Errors at high Confidence Levels. It has repeatedly stressed that Relative Error is an important factor to consider when evaluating ratings data, or when reading any research report, for that matter.
Relative Error and its impact on research data
It is not possible to sample every individual (except, perhaps, in a census); hence, sample surveys are undertaken. Statistics offers scientific methods to estimate phenomena across an entire population by studying samples. Any sample survey inherently suffers from various errors. Owing to these, statisticians never report an average (or mean) without simultaneously reporting a measure of dispersion, usually the standard deviation.
A researcher has to balance the demand for greater accuracy against the constraints of finite resources. Statisticians therefore work with defined ‘Confidence Intervals’ and ‘Sampling Errors’. One such measure is the ‘Relative Error’: the deviation (in percentage terms) of the observed value from the actual (expected) value.
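As a minimal sketch of these ideas (assuming a toy sample of viewing indicators, not BARC's actual panel data or pipeline), the Python snippet below estimates a rating from a small sample and reports the dispersion and Relative Error alongside the mean:

```python
import statistics

# A toy sample of viewing indicators from hypothetical panel homes
# (1 = watching the programme, 0 = not watching); purely illustrative.
sample = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]

n = len(sample)
mean = statistics.mean(sample)          # the rating estimate
std_dev = statistics.stdev(sample)      # dispersion within the sample
std_error = std_dev / n ** 0.5          # sampling error of the estimate
relative_error = std_error / mean       # deviation as a share of the estimate

print(f"estimate: {mean:.2f}, standard error: {std_error:.3f}, "
      f"relative error: {relative_error:.0%}")
```

With only 20 respondents, the Relative Error here comes out near 46 per cent, exactly the kind of wide dispersion that a larger sample is meant to tame.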
Confidence Level and Confidence Interval
A Confidence Level is generally expressed as a percentage, or a decimal figure less than one. So, if a researcher reports results at a 90 per cent Confidence Level, what he means is that 90 per cent of samples of the same size taken from the same population will produce results within a defined range (the Confidence Interval).
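A toy simulation makes this concrete. The sketch below assumes a hypothetical population in which 10 per cent watch a programme, draws many equally sized samples, and counts how often the estimate lands inside a chosen range; all figures are illustrative:

```python
import random

random.seed(42)
TRUE_RATING = 0.10          # hypothetical share of the population watching
SAMPLE_SIZE = 500
DRAWS = 1000
LOW, HIGH = 0.075, 0.125    # the "defined range" around the true rating

inside = 0
for _ in range(DRAWS):
    sample = [random.random() < TRUE_RATING for _ in range(SAMPLE_SIZE)]
    estimate = sum(sample) / SAMPLE_SIZE
    if LOW <= estimate <= HIGH:
        inside += 1

print(f"share of samples inside the range: {inside / DRAWS:.0%}")
```

The printed share, typically around 94 per cent with these settings, is the Confidence Level attached to that particular range.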
Relative Error
Suppose a TV ratings measurement system estimates that a programme has 1 TRP with a standard deviation of 0.25. This means the actual rating is expected to lie between 1 - 0.25 and 1 + 0.25, i.e. between 0.75 and 1.25. The Relative Error is simply 0.25/1.0, or 25 per cent.
This is a simplistic explanation that may antagonise a purist, but it conveys the essential idea.
In other words, it is important for any research to ensure the least possible Relative Error at the highest possible Confidence Level; otherwise it risks generating data with such wide variance that it becomes meaningless. Just imagine claiming that a programme has 1 TRP when the Relative Error is large.
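As a small check of the arithmetic above, the sketch below computes the range implied by a 25 per cent Relative Error on a 1 TRP programme and, as an added illustration, by a 50 per cent one:

```python
rating = 1.0   # the programme's estimated TRP

# 25% is the figure from the example above; 50% is an added illustration.
for rel_error in (0.25, 0.50):
    spread = rating * rel_error
    low, high = rating - spread, rating + spread
    print(f"relative error {rel_error:.0%}: actual rating likely "
          f"between {low:.2f} and {high:.2f}")
```

At 50 per cent, the "rating" could plausibly be anything from 0.5 to 1.5, which is exactly why such data becomes meaningless.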
Factors affecting Relative Error
The most important factor affecting Relative Error is sample size. Relative Error grows sharply as the sample size shrinks, while beyond a certain threshold, further increases in sample size yield only marginal reductions in error.
Sampling is also relatively simpler when estimating a homogeneous population and more complex for a heterogeneous one. It is hence extremely important to have a sufficiently large sample size, especially when calculating estimates for a large, heterogeneous universe.
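The effect of sample size can be sketched with the standard error of a proportion, under simple random sampling assumptions (real panel designs are more complex):

```python
from math import sqrt

# Relative Error of a hypothetical 10% rating at various sample sizes,
# using the standard error of a proportion: sqrt(p * (1 - p) / n).
p = 0.10
for n in (50, 130, 500, 2000, 8000):
    relative_error = sqrt(p * (1 - p) / n) / p
    print(f"n = {n:>5}: relative error ~ {relative_error:.0%}")
```

The errors come out at roughly 42, 26, 13, 7 and 3 per cent: halving the Relative Error requires roughly quadrupling the sample, which is why the gains taper off at large sizes.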
On how BARC India intends to handle sample-size issues to ensure robustness of data, the council shares a hypothetical scenario: a planner wishes to evaluate programme viewership for a premium brand, with the TG being males, NCCS AB, 40+, in Delhi.
Total Sample Size: 130
Approx. sample size for a programme with a rating of 10 per cent: 13
A sample size of 13 is far too low for any meaningful evaluation. Hence, BARC India would not encourage such evaluations.
To circumvent this issue, BARC India intends to aggregate the data through one of the following means:
• Aggregate viewership data across two or more weeks
• Add more cities to the sample, aggregating geographically
• Instead of considering a particular programme or a limited time slot, evaluate a day part, thus aggregating by time bands
Each of the above methods would increase the sample size and allow the planner to make decisions based on robust, relevant data, as sketched below. The BARC India Technical Committee is evaluating options of either hardcoding the aggregations at the pre-publishing stage itself, or allowing the planner to decide the aggregation based on his/her requirements. This decision will be taken only after seeing the data for all panel homes and assessing the pros and cons of each method.
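To illustrate how the aggregation routes help, the toy sketch below treats aggregated observations as if they were independent respondents (which overstates the gain for a fixed panel, since the same homes recur from week to week) and shows how each route reduces the Relative Error of a hypothetical 10 per cent rating. The sample sizes and multipliers are illustrative assumptions, not BARC figures:

```python
from math import sqrt

def relative_error(p: float, n: int) -> float:
    """Relative Error of a rating p estimated from n respondents,
    under simple random sampling assumptions."""
    return sqrt(p * (1 - p) / n) / p

p = 0.10        # hypothetical 10% programme rating within the TG
base = 130      # the TG sample in one city for one week (from the scenario)

scenarios = [
    ("single week, single city", base),
    ("aggregated over 4 weeks", base * 4),
    ("aggregated over 3 cities", base * 3),
    ("day part of 5 slots over 4 weeks", base * 4 * 5),
]
for label, n in scenarios:
    print(f"{label:<34} n = {n:>4}  relative error ~ {relative_error(p, n):.0%}")
```

In this sketch the single-city, single-week evaluation carries a Relative Error of about 26 per cent, which each aggregation route brings down appreciably.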