Fifty Shades of Statistical Grey

Academics are often accused of living in an ivory tower, pursuing esoteric lines of enquiry with little thought to how their work will be used. Well, I can’t speak for my university-based peers, but it’s certainly not true in CERP. As all our findings feed directly into the operational work of AQA, we spend a lot of time trying to make sure that our results are used and interpreted correctly.

This is a lot more complicated than it sounds. I don’t entirely agree with the sentiment “lies, damn lies and statistics” but it certainly holds a lot of truth. The job of a statistician is to take complicated, messy data and turn it into a neat summary. We often think of statistics as objective facts, but interrogating the world is not a simple Q&A exercise: it’s a subtle art.

Generating any sort of statistic requires the researcher to make a number of decisions: what data to collect, who to collect it from, how big a sample is needed, what type of analysis to run, how to deal with missing data… and that’s before you get into the more technical questions. Two researchers investigating the same thing can answer these questions in different ways and come up with different statistics. Neither researcher is necessarily wrong – but in order to use and interpret the statistics appropriately, it’s important to understand how those questions have been answered and how the answers might affect the conclusions.

As researchers, the subjectivity of statistical analysis is our playground. We think in shades of grey. And when asked a seemingly straightforward question, we often answer in shades of grey. I’ve lost count of the number of times I’ve said “that’s actually quite a complicated question” to my long-suffering friends, family and colleagues.

This partly stems from a nerdy desire to communicate the beautiful complexity of our data. But there’s also an element of fear that important details will be overlooked and our findings will be exaggerated or wrongly interpreted (as highlighted in the blog post Death metal and the English Bacc). This is especially the case when our research, with all its limitations and caveats, could be used to make decisions that impact on people’s lives. As a result, researchers can be particularly frustrating for policy-makers, who need us to navigate them through the grey world into the realm of black and white, where concrete decisions can be made.

We often talk about evidence-based policy, and criticise policy-makers for ignoring research. But communication is a two-way process, and if we struggle to draw black and white conclusions, perhaps it is unsurprising that policy-makers struggle to engage with our grey research. We’ve written elsewhere in our blog posts about how to interpret research and the potential pitfalls (Shampoo surveys and intervention research; Bacon butties, hot dogs, and exam grades), but perhaps we should also think about what we can do to make our work more user-friendly.

So, in true researcher style, I’d like to leave you with a question: how should we walk the line between informing policy-makers and ensuring our research is used appropriately? Answers on a postcard please.

Kate Kelly
