By Joern Fischer
Different funding programmes have very different sets of priorities. These priorities typically reflect – more or less coherently – the priorities of the funding bodies. Having received funding from several different sources, and having reviewed applications for a range of additional bodies, I thought it might be nice to summarise what I perceive to be key “DOs” and “DON’Ts” in funding programmes.
- Reward strong track records relative to opportunity. Clever ideas and proposals are one thing, but by far the best indicator of future performance is past performance. That said, this needs to be judged in the context of people’s opportunities: if someone has come through one of the major research labs in a given discipline, her track record will look better. Similarly, if people have not been exposed to the Western education system, or come from a background where publishing is not encouraged, their track record may look weaker. Finally, someone’s track record will just about always look better through time. Unless this is taken into account, funding systems become biased towards just funding “the big guns” – which stifles innovation and encourages the building of research empires. I like a motto I have seen used by the Humboldt Foundation: “we fund people, not projects”.
- Don’t over-rate any single performance indicator on track records. I have heard people say that to make it through natural sciences panels of the ERC, you should have Nature or Science papers. If that is so, this is a major worry. These journals favour some kinds of excellence (e.g. within disciplines), while rarely acknowledging other kinds (e.g. interdisciplinary). It is vital to not fall in love with any single performance indicator – neither h-index, nor particular journals, nor numbers of publications. For an excellent scientist, the combination should look impressive. I am particularly sceptical of indicators based on how much funding has been raised. Once you have funding on your CV, it gets easier to attract more funding. This, in its own right, says that you are a good “grantsman” – it does not say anything about the outcomes of your science.
- For early career applications: consider references. References work. If you see three references for someone, you know a lot more about this person than if you don’t. If, on top of that, you ask the referees to tick boxes as to whether a given person is in the top 1, 5, 10, or 25% bracket of their cohort, you will immediately get a sense of someone’s relative quality as a researcher. Again, references should be considered in context, but they are very worthwhile, especially for early career researchers.
- Reward people examining complexity instead of testing hypotheses. Hypothesis-testing is rewarded in our journals and most funding schemes. While this is well suited to some kinds of problems, many of the bigger challenges of our times (especially in a sustainability context) relate to complexity. The reductionist approach of hypothesis testing is often not overly useful in this context. The cruel thing is that it is much easier to sell an “elegant” set of hypotheses than a “messy” set of ideas for how to unpack complexity. Overall, I’d like to see more projects deal with complexity, and hence, I would like to see this funded more widely.
- Reward genuine interdisciplinarity. Many funding schemes claim that they encourage interdisciplinarity, but mostly, very shallow attempts at this are enough to satisfy the funders. Interdisciplinarity is not, for example, when social science provides numbers for a box in a flow chart. And when molecular biologists work with field ecologists, this also is not exactly radical. Genuine interdisciplinarity means people working together who have genuinely different backgrounds and understandings. In the context of complexity, it’s usually in these situations that insights emerge. Notably, to work, interdisciplinarity requires certain means of integration, and not all excellent scientists are necessarily good at integration.
- Encourage methodological and epistemological pluralism. Real interdisciplinarity, including in a sustainability context, demands a pluralistic worldview. There are many ways of approaching problems, and many answers to the same question. Yet, things get interesting when multiple perspectives identify similar directions to solve a particular problem. Pluralism in science demands openness in the review process of funding bodies.
- Demand “impact”, but be open to this being generated in many different ways. People think of impact as journal articles, as policy influence, or as stakeholder engagement. All three – and more – are worthwhile ways of having an impact. I think it is worthwhile to ask proponents of a new project how they intend to have an impact: but it makes little sense to squeeze this all too tightly into narrow categories. Many projects now have stakeholder workshops because they have to, or write policy briefs because they have to. I’d encourage a more open and diverse way of thinking about impact.
- Minimise reporting procedures. This can’t be over-emphasised. It’s worth noting that EU projects get audited stringently, while projects funded by the Australian Research Council only have to report the bare minimum (or not at all). And whose researchers are better, Australia’s or the EU’s? Considering its small size, Australia regularly punches above its weight. Researchers are not babies, nor are they highly corrupt compared to many other sectors in society. Especially when focusing on excellence as a selection criterion, there is little need for stringent reporting procedures: it just wastes resources.
- Don’t reward networking activities without a purpose. Many projects include workshops or small conferences. Unless there is a clear purpose for such events, I see no benefit in these. The last few years have seen a rapid increase in such events (at least that’s my impression), and I don’t think that research quality, as a result, has increased. I’d be sceptical about such events as a funding body.
- Provide funding for carbon-neutral transportation. In line with the previous point, a lot of sustainability research still includes a lot of travel. For example, in my current research, we fly people back and forth to Ethiopia. Modern sustainability research should minimise such trips. Where they are needed, it would be great to see funders explicitly encourage offsetting the carbon footprint generated. Not flying is better than offsetting, but offsetting via accredited schemes is probably better than not doing anything. This kind of “walking the talk” is a major problem in sustainability science still, and I think a more critical perspective is needed on the limitations and impacts of transportation in the name of knowledge generation.