In the words of George Box, “all models are wrong, some models are useful”. What I want to highlight here is not so much the ‘wrongness’ of many models, but rather their limited usefulness (and usefulness is an explicit goal of sustainability science).
(A model pipe)
My starting point is that all science is modelling. Science functions by the codified abstraction of the vastly complicated real world; those abstractions are, in the broadest sense of the term, models. One of the main means by which scientific models simplify reality is by creating rules and assumptions that shape what is included in (and excluded from) a particular model. This model bounding seems to me perhaps the most crucial step in the scientific process, as it constrains and shapes the nature of the results that science generates. Yet model bounding receives relatively little explicit attention. As Lawrence Slobodkin said, “any simplification limits our capacity to draw conclusions… How we simplify can be critical. Careless simplification leads to misleading simplistic conclusions”. My contention is that we are bounding the world to create simple, elegant models that are not very wrong, but also not very useful.
Here I use the land sparing/sharing model as an example, not because I think it needs further critique, but because it provides a nice example of some of the issues with simple models in sustainability science. Equally I could talk about ‘planetary boundaries’, ‘sustainable intensification’, ‘IPAT’, etc. The sparing/sharing model bounds the world in terms of a physical space providing two ‘goods’: biodiversity and ‘food’, with food production framed as an intrinsic good. Only those two goods are of concern, and land can be allocated in a way that ‘optimizes’ the delivery of those two goods. Everything else in the world is effectively outside of the model. While many sparing/sharing analyses are elegant, with well thought out research designs and sophisticated statistical analysis, there are many things that these models cannot tell us. They cannot tell us who has what preference for those two goods, why we have the current allocations of land (and their related goods), or how those allocations are likely to change in the future. The sparing/sharing model cannot tell you about the ultimate benefits that food production provides (e.g. food security), or whether more production is even a desirable outcome. The sparing/sharing model is bounded in terms that effectively exclude addressing such issues. In particular, the ethical component of the model is fixed (i.e. what counts as a ‘good’, and how to judge different states of the world, are defined by the model boundaries).
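To see how tight this bounding is, it helps to write the model down. What follows is a minimal sketch of the density-yield framing common in this literature, with notation of my own choosing rather than that of any particular paper. Fix a total land area \(A\) and a food production target \(P\). Farmland of area \(A_f\), farmed at yield \(Y\), must satisfy \(Y A_f = P\), and the outcome for a species \(i\) is its total population

\[ N_i = d_i(Y)\,A_f + d_i(0)\,(A - A_f), \]

where \(d_i(Y)\) is the species' population density on land farmed at yield \(Y\) (so \(d_i(0)\) is its density on spared, unfarmed land). The entire analysis then reduces to asking which yield \(Y\) maximises \(N_i\) subject to the production constraint: sparing wins for species whose density crashes quickly as yield rises, sharing for those that tolerate low-yield farming. Notice what the notation makes visible: who sets \(P\), who holds \(A_f\), and whether producing \(P\) is desirable at all have no symbols in which to appear.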
(A pipe model)
So why then are such simple, elegant models so popular? In part I think the answer is that they provide a comfortable, neat and clear narrative (e.g. food security and biodiversity management as a solvable technical issue rather than a potentially intractable ethical/social issue). In addition, simple models can be rigorously operationalized: you can plug data into them and produce fully replicable, generalizable, ‘objective’ empirical research, bearing all the hallmarks of ‘good science’. Finally, in the age of ‘policy relevant’ and ‘solution oriented’ science, such models do provide solutions (useless as they may be). In contrast, if your starting position is that we need models that are more complex, that deal with power, ethics, multiple values, even multiple system goals, then you are unlikely to produce a clear narrative or replicable, rigorous empirical research (“you said all these things matter but most of them you don’t measure”), and you are less likely to deliver the solutions the science itself is calling for.
So what does all this mean? Perhaps that we need to spend more effort judging the quality of science based on how well the model is bounded to address a particular societal problem, rather than on how well operationalized the model is (regardless of how appropriate the model is). It raises the question: can you do ‘good science’ with ‘bad models’? My gut instinct is that we probably need models that are more complex and therefore ‘wronger’ (less rigorously/fully operationalized) but more useful for addressing complex real-world problems.