In a recent Nature article, Bill Sutherland et al. provided “twenty tips for interpreting scientific claims”: essentially a twenty-point checklist to help policy-makers understand and interpret peer-reviewed scientific evidence, with the rationale that in an age of evidence-based policy the “immediate priority is to improve policy-makers’ understanding of the imperfect nature of science”. While I would argue that increasing the scientific literacy of policy-makers is never a bad thing (and putting aside Jahi Chappell’s recent insightful comment on whether policy-makers are the right constituency for scientists to engage with), there are a number of things about this article that I found problematic.
Firstly, Sutherland et al.’s article places undue responsibility on policy-makers to develop the skills to interpret science, rather than on scientists to develop the skills to communicate with policy-makers. Scientists, not policy-makers, must shoulder the responsibility for evaluating the bias, limitations and uncertainties within empirical research. However, most journals calling for “policy relevant” and “problem oriented” research offer only limited space for detailed and frank discussions of research limitations that are accessible to non-scientists. In fact, the Sutherland article is a perfect example of this “speak to non-scientists, but don’t waste space explaining things that good scientists already know” phenomenon. One of Sutherland et al.’s tips is that “Bigger is usually better for sample size”, yet nowhere is the caveat (“usually”) explained, for example with a detailed discussion of the fallacy of classical inference. If we wish to speak to non-scientists, the lack of space to do so carefully and at length is problematic. In the example above, it would be easy for a non-scientist to infer that they should always be suspicious of small sample sizes (regardless of the strength of the treatment effect).
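To make the “usually” concrete, here is a minimal sketch (my own illustration, not from the Sutherland article; the data and the `welch_t` helper are invented for the example). It shows that a small sample paired with a strong treatment effect can still produce a decisive test statistic, so small n alone is not grounds for suspicion:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    standard_error = math.sqrt(var_a / len(a) + var_b / len(b))
    return (mean_a - mean_b) / standard_error

# Only five observations per group, but a large, consistent treatment effect.
control = [10.1, 9.8, 10.3, 10.0, 9.9]
treated = [15.2, 14.9, 15.4, 15.1, 14.8]

t = welch_t(treated, control)
print(t > 10)  # the effect is so strong that n = 5 is ample
```

The point is simply that sample size and effect size trade off against each other: a tiny, noisy effect needs thousands of observations, while a dramatic one can be detected with a handful.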
Secondly, Sutherland et al. focus almost exclusively on quantitative data and statistical analyses. This implies (unintentionally, I believe) that quantitative data is the sole source of scientific claims and scientific evidence. Scientific evidence is not, and should not be, restricted to quantitative data; both scientists and policy-makers need to value, and be able to evaluate, qualitative evidence arising from scientific enquiries. There is a very nice paper in Oryx by William Adams and Chris Sandbrook discussing these issues.
Thirdly, and for me most critically, problem-oriented research is fundamentally a normative endeavour, concerned with “how the world should be”. In this context, empirical evidence describing “how the world is” is driven by the initial framing of the problem to be addressed. Problem framing defines the factors included in empirical analyses, shapes how evidence is presented, and to some extent constrains the possible interpretations of empirical research. Evidence-based policy debates are therefore strongly influenced by the initial problem-framing process. Classic examples of this issue include the “sustainable intensification” and agricultural “yield gaps” framings that are becoming a dominant discourse in the conservation literature. One could equally talk about “sustainable agricultural consumption” and “yield excesses”, and this would, I believe, lead to very different discourses and potential policy interventions. The interpretation of empirical evidence should therefore come after a critical analysis of the problem framing, and of the scientific and policy discourse in which that evidence is rooted. It is vital that scientists explicitly acknowledge that empirical research cannot be entirely objective, that it is inherently bound to a particular world view and scientific discourse. Non-scientists need to be made aware of these framing issues when seeking to understand and interpret scientific evidence.