Evidence-based policy and the problem of problem framing.

In a recent Nature article, Bill Sutherland et al. provided “twenty tips for interpreting scientific claims”. This was essentially a twenty-point checklist to help policy-makers understand and interpret peer-reviewed scientific evidence, with the rationale that in an age of evidence-based policy the “immediate priority is to improve policy-makers’ understanding of the imperfect nature of science”. While I would argue that increasing the scientific literacy of policy-makers is never a bad thing (and putting aside Jahi Chappell’s recent insightful comment on whether policy-makers are the correct constituency for scientists to engage with), there are a number of things about this article that I found problematic.

Firstly, Sutherland et al.’s article places undue responsibility on policy-makers to develop the skills to interpret science, rather than on scientists to develop the skills to communicate with policy-makers. Scientists, not policy-makers, must shoulder the responsibility for evaluating the bias, limitations and uncertainties within empirical research. However, most journals calling for “policy relevant” and “problem oriented” research offer only limited space for detailed and frank discussions of the limitations of research findings that are accessible to non-scientists. In fact, the Sutherland article is a perfect example of this “speak to non-scientists, but don’t waste space explaining things that good scientists already know” phenomenon. One of Sutherland et al.’s tips is that “Bigger is usually better for sample size”, yet nowhere is the caveat (“usually”) explained, for example with a detailed discussion of the fallacy of classical inference. If we wish to speak to non-scientists, the lack of space to do so carefully and at length is problematic. In the example above, it would be easy for a non-scientist to infer that they should always be suspicious of small sample sizes, regardless of the strength of the treatment effect, as the sketch below illustrates.
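
To make the caveat concrete, here is a minimal sketch (my own, not from Sutherland et al.’s article), assuming Python with numpy and scipy; the sample sizes and effect sizes are hypothetical choices for illustration only.

```python
# A minimal sketch (not from the original article) of why
# "bigger is usually better for sample size" needs a caveat.
# Assumes numpy and scipy are available; all sample sizes and
# effect sizes below are hypothetical illustrations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Case 1: a small sample with a strong treatment effect
# (group means differ by 3 standard deviations).
control_small = rng.normal(loc=0.0, scale=1.0, size=6)
treated_small = rng.normal(loc=3.0, scale=1.0, size=6)
t_small, p_small = stats.ttest_ind(control_small, treated_small)
print(f"n = 6 per group, strong effect:      p = {p_small:.2g}")

# Case 2: a very large sample with a trivially small effect
# (group means differ by 0.05 standard deviations).
control_large = rng.normal(loc=0.0, scale=1.0, size=20000)
treated_large = rng.normal(loc=0.05, scale=1.0, size=20000)
t_large, p_large = stats.ttest_ind(control_large, treated_large)
print(f"n = 20000 per group, trivial effect: p = {p_large:.2g}")

# Both comparisons will typically come out "statistically
# significant", but for very different reasons: the first reflects
# a genuinely large effect detected with few observations, while
# the second reflects the fact that with enough data almost any
# non-zero difference reaches significance (the fallacy of
# classical inference). Sample size alone tells you little about
# whether a finding matters.
```

The point is not that small samples are fine, but that a blanket rule like “bigger is better” can mislead a non-scientist in both directions: towards dismissing strong effects found in small studies, and towards over-weighting trivial effects found in very large ones.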

Secondly, Sutherland et al. focus almost exclusively on quantitative data and statistical analyses. This implies (unintentionally, I believe) that quantitative data are the only source of scientific claims and scientific evidence. Scientific evidence is not, and should not be, restricted to quantitative data; both scientists and policy-makers need to value, and be able to evaluate, qualitative evidence arising from scientific enquiries. There is a very nice paper in Oryx by William Adams and Chris Sandbrook discussing these issues.

Thirdly, and for me most critically, problem-oriented research is fundamentally a normative endeavour, concerned with “how the world should be”. In this context, empirical evidence describing “how the world is” is driven by the initial framing of the problem to be addressed. Problem framing defines the factors included in empirical analyses, determines how evidence is presented, and to some extent shapes the possible interpretations of empirical research. Evidence-based policy debates are therefore strongly influenced by the initial problem framing process. Classic examples of this problem framing issue include the “sustainable intensification” and agricultural “yield gaps” framings that are becoming a dominant discourse in the conservation literature. One could equally talk about “sustainable agricultural consumption” and “yield excesses”, and this would, I believe, lead to very different discourses and potential policy interventions. The interpretation of empirical evidence should come after a critical analysis of the problem framing and the scientific and policy discourse in which this evidence is rooted. It is vital that scientists explicitly acknowledge that empirical research cannot be entirely objective, and that it is inherently bound to a particular world view and scientific discourse. Non-scientists need to be made aware of these framing issues when seeking to understand and interpret scientific evidence.

3 thoughts on “Evidence-based policy and the problem of problem framing.”

  1. Hi Dave, I agree entirely. Imagine going to a meeting with policy makers and pronouncing, ‘Beware the base-rate fallacy’, ‘Regression to the mean can mislead’ and ‘Seek replication, not pseudoreplication’. I can’t imagine you’d be invited back very quickly.

    To me, the article suffered from the most basic problem of science communication: who is the audience? This is really strange, as all of the authors have extensive experience at the policy/science ‘interface’. It seemed more like an article dreamed up by a Nature sub-editor to attract page clicks (or altmetrics based on the number of blogs it triggers), rather than anything the authors would really think would spark engagement in the real world. To borrow a metaphor from the newspaper industry, we increasingly live in a world of ‘click-bait ecology’. Cheers, Ian

    • Hi Ian,

      Yes, I agree. I was also puzzled as to why an article essentially for non-scientists, but not really pitched at a level suitable for non-scientists, was published in a scientific journal. Regarding click-bait, I have very strong opinions about certain types of articles that to me seem to exist purely as ‘citation-bait’, with no real substance but lots of easily cited content (bold claims, unjustified statements about consensus, and assertions of the most important questions in a given field being pretty good examples), but that is a topic for another blog.

      Cheers

      Dave
