Models that are simple, elegant and wrong

In the words of George Box, "all models are wrong, some models are useful". What I want to highlight here is not so much the 'wrongness' of many models, but rather their limited usefulness (and usefulness is an explicit goal of sustainability science).


(A model pipe)

My starting point is that all science is modelling. Science functions by the codified abstraction of a vastly complicated real world, and those abstractions are, in the broadest sense of the term, models. One of the main means by which scientific models simplify reality is by creating rules and assumptions that shape what is included in (and excluded from) a particular model. This model bounding seems to me perhaps the most crucial step in the scientific process, as it constrains and shapes the nature of the results that science generates. Yet model bounding seems to receive relatively little attention within the scientific process. As Lawrence Slobodkin said, "any simplification limits our capacity to draw conclusions… How we simplify can be critical. Careless simplification leads to misleading simplistic conclusions". My contention is that we are bounding the world to create simple, elegant models that are not only wrong, but also not very useful.

Here I use the land sparing/sharing model as an example, not because I think it needs further critique, but because it provides a nice illustration of some of the issues with simple models in sustainability science. Equally I could talk about 'planetary boundaries', 'sustainable intensification', 'IPAT', etc. The sparing/sharing model bounds the world in terms of a physical space providing two 'goods', biodiversity and 'food', with food production framed as an intrinsic good. Only those two goods are of concern, and land can be allocated in a way that 'optimizes' their delivery. Everything else in the world is effectively outside the model. While many sparing/sharing analyses are elegant, with well-thought-out research designs and sophisticated statistical analysis, there are many things these models cannot tell us. They cannot tell us who has what preference for those two goods, why we have the current allocations of land (and their related goods), or how those allocations are likely to change in the future. The sparing/sharing model cannot tell you about the ultimate benefits that food production provides (e.g. food security), or whether more production is even a desirable outcome. The model is bounded in terms that effectively exclude addressing such issues. In particular, the ethical component of the model is fixed (i.e. what counts as a 'good' and how to judge different states of the world are defined by the model boundaries).
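To make the bounding concrete, here is a toy sketch of the sparing/sharing logic. Everything in it is invented for illustration (the density-yield curves, the numbers and the function names come from no published analysis): one landscape, exactly two goods, and an allocation question, with everything else excluded by construction.

```python
# Toy illustration of sparing/sharing model bounding: the whole world
# is a single landscape of area 1.0 producing exactly two "goods" --
# food and biodiversity. All curves and numbers are invented.

def farmland_biodiversity(farm_yield, convex=True):
    """Relative species density on farmland at a given yield
    (0 = unfarmed, 1 = maximum yield). A convex decline means density
    crashes as soon as land is farmed at all (favouring sparing);
    a concave decline means density holds up under low-yield
    farming (favouring sharing)."""
    if convex:
        return (1.0 - farm_yield) ** 2
    return 1.0 - farm_yield ** 2

def landscape_biodiversity(food_target, farm_yield, convex=True):
    """Total biodiversity when `food_target` units of food must be
    produced and farmed land yields `farm_yield` units per unit area.
    Land not needed for farming is spared as habitat (density 1.0)."""
    farmed_area = food_target / farm_yield
    spared_area = 1.0 - farmed_area
    return (farmed_area * farmland_biodiversity(farm_yield, convex)
            + spared_area * 1.0)

# Same food target, two land-allocation strategies:
sparing = landscape_biodiversity(0.5, farm_yield=1.0)  # intensive, half the land farmed
sharing = landscape_biodiversity(0.5, farm_yield=0.5)  # extensive, all land farmed
```

Note what such a sketch can answer (which allocation 'optimizes' the two goods, given a curve shape) and what it structurally cannot: who set the food target, who bears the costs, and whether more production is desirable are not even representable inside the model's boundaries.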


(A pipe model)

So why, then, are such simple, elegant models so popular? In part, I think, the answer is that they provide a comfortable, neat and clear narrative (e.g. food security and biodiversity management as a solvable technical issue rather than a potentially intractable ethical/social one). In addition, simple models can be rigorously operationalized: you can plug data into them and produce fully replicable, generalizable, 'objective' empirical research, bearing all the hallmarks of 'good science'. Finally, in the age of 'policy relevant' and 'solution oriented' science, such models do provide solutions (useless as they may be). In contrast, if your starting position is that we need models that are more complex, that deal with power, ethics, multiple values, even multiple system goals, then you are unlikely to produce a clear narrative or replicable, rigorous empirical research ("you said all these things matter but most of them you don't measure"), and you are less likely to deliver the solutions the science itself is calling for.

So what does all this mean? Perhaps that we need to spend more effort judging the quality of science by how well the model is bounded to address a particular societal problem, rather than by how well operationalized the model is (regardless of how appropriate it is). It raises the question: can you do 'good science' with 'bad models'? My gut instinct is that we probably need models that are more complex and therefore 'wronger' (less rigorously or fully operationalized) but more useful for addressing complex real-world problems.


5 thoughts on "Models that are simple, elegant and wrong"

  1. I, of course, largely agree with you Dave — great post! (Though one of my favorite other quotes about models is from Lewontin and Levins: “The best model of a cat is another cat. Preferably the same one.”)

    This post is tangentially related but makes some similar points:
    One specific quote: “For working researchers, papers are sources of data and information. For academics, they are accolades. They cannot fulfill both purposes equally well.”
    Another, “My point here is that any desire to be radically and truly intellectually honest and skeptical about one’s findings has to be largely internal, because there are not enough structural incentives. So instead of talking about what’s the best scientific approach, we often talk about doing what’s necessary to convince the reviewers or “getting it past the reviewers”. That language gives insight into how we are often thinking like academics rather than scientists.”

    The author goes on to talk about unethical cheating, but even below that level, I know, from very extensive experience now, that reviewers tend to want quantitative study for the kind of work I do on food systems analysis. My personal belief as a scientist is that the kind of quantitative analysis they want, even were it possible, would be more likely to obscure than reveal. One reviewer asked for quantitative evidence connecting neoliberalization with biodiversity loss: while one could try to proxy up something to “prove” this, I actually think a qualitative, synthetic analysis is a better way to go about this entirely, proceeding from logical statements and prior principles. At the least, it seems to me that–say–something correlating implementation of “free trade agreements” with red-listed species involves at least as many problematic assumptions, and does not “prove” a connection any more strongly than qualitative reasoning.

    So I guess I’m trying to use this to agree with you — and to point to more complicated, partially qualitative models as something that seem to be particularly disfavored currently, even though (I believe) they are likely more informative, and certainly more useful than precise, but dramatically simplified and necessarily incomplete models, at least in sparing/sharing terms. But it is very hard to defend or establish this, especially to reviewers who seem to be, at times, a priori ill-disposed to debating the nature of knowledge rather than accepting that quantitative, specific-but-maybe-not-useful study is, de facto, more rigorous.

    Something else that seems to be ignored, and is implied in your post: when trying to come up with "useful" research, even if we have a simplified model that provides a "useful" finding, the dual processes of translation (complex reality → model) and extrapolation (model → complex reality) often get ignored, even though they are, at some point, inherently judgment-based (non-rigorously-quantitative) decisions. (They have to be, because numerically testing what complexities may be safely assumed away is, at some point, an infinite regress problem.) All the assumptions that go into simplifying a model, and then the assumptions that go into applying it, become magically more rigorous if the model itself is numerically precise. I feel there is arguably a precision to be had by inductively using complex models to speak of complex reality… but that is a further discussion for another day.

    • Hi Jahi — it works for me … I just tried a search for “ecosystem service”, for example, and it came up with a list of blog posts. Can you check again, please, and if it doesn’t work, let me know what doesn’t work? Thanks!

  2. Hi Dave!
    While the quotation is indeed very quotable, and I agree with the overall statement of your post, as a statistician I have to comment on it. The George Box quotation comes from statistical papers and is thus, to me, used out of context here, as it is not about models of reality. I believe he was speaking about statistical models, and about hypothesis building in those domains.

    Building on his elaborations about Fisher (Ronald, not Joern), one of the creators of modern statistics, we have to bear in mind that back in those days science stood on a different footing. Most "experiments" lacked any design, or even the simplest assumptions and preconditions to begin with. As such, the quotation was more a call for rigorous approaches, and not really about models of reality. To repeat once more: there is a difference between statistical models and models of reality. Let us now set that aside, and use the quotation as you did, for models of reality.

    While the thought of implementing statistical approaches and protocols in modern models of reality (e.g. land sharing vs. land sparing) is surely worth the effort as a learning exercise, it contains one core problem: many of these models of reality are normative in their approach. They simplify the world into a pseudo-reality, partly (although not exclusively) to increase the market value of the research. Scientists engaging in these models often sacrifice parsimony on the altar of marketing, to me. Society today calls for scientists to get out of the ivory tower and connect to the real world, and scientists packaging their research into marketable models of reality is partly a consequence of this. This is regrettable insofar as even the simplest statistical concepts can be explained by simple examples, such as the lady tasting tea and the notion of probability.
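The lady-tasting-tea example really is that simple to work through. In Fisher's classic setup there are 8 cups of tea, 4 prepared milk-first, and a pure guesser names all four milk-first cups with probability 1/C(8, 4) = 1/70. A minimal sketch (the function name is mine):

```python
# Fisher's "lady tasting tea": 8 cups, 4 prepared milk-first.
# The lady must say which 4 were milk-first. If she is only
# guessing, every choice of 4 cups out of 8 is equally likely.
from math import comb

def p_perfect_guess(cups=8, milk_first=4):
    """Probability of identifying all milk-first cups by pure chance."""
    return 1 / comb(cups, milk_first)  # 1 / C(8, 4) = 1/70 by default
```

A perfect score occurs by chance only about 1.4% of the time, which is the whole logic of the significance test, and it can be communicated without any grand model of reality.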

    Models of reality are often overly simplistic today, and the world is certainly more complex than many folks in the soft sciences want it to be. This invites counter-criticism. A problem, to me, is that people today often criticize "all models as wrong" without ever having made any empirical analyses to begin with. I feel that this lack of statistical experience fuels a large community of people talking about complex models. This is a good argument for why statistical models and models of reality should not be confused, since even most scientists are unable to recognize the difference. While complex models are difficult to define, I find it even more confusing to define non-complex models. The pendulum swings, and one could from now on build one's own little model of the complexity of models in soft science. I welcome engagement in these discussions, yet I am sure that in the end Occam's razor will remain unscarred, a rare testimony to how hard science may give us a hand, at times.

    Parsimonious greetings,

