By Joern Fischer
When you do research, you should publish it. And if you think your research is any good, you want to publish in such a way that people will actually read it. This means you want to publish it in a “good journal”, rather than in the Proceedings of The Unimportant Second-Rate Scientists (or, say, the Journal of Universal Rejection).
Looking at my own papers, and especially recent ones, I find that rejection without review has become increasingly common. I don’t think my science has got any worse though … in fact, I’m pretty sure that it has got better over the years — methodologically, and in terms of the questions it addresses. But, invariably, getting published has got harder.
The two main reasons for rejections that I have encountered in my own work in the recent past are “not global enough” and “methodologically problematic”. For various reasons, I find both of these extremely unsatisfying.
First of all, when it comes to interdisciplinary sustainability science involving ecology, lots of people say we need it — but few people actually do this kind of work. Yet, it seems that it is easier to publish yet another theory or framework on some kind of global something-or-other than a high-quality, place-based case study. Case study work, in my experience, can lead to rejection even if it is done well and makes clear links to a global discourse. I see this as fundamentally problematic at a time when sustainability consequences play out in places, in landscapes: we can’t address these problems “globally” — or at least not all of them, and not only by looking at the global scale. Place-based research (and the insights it generates) is vital. My own reading of much recent literature is that we are getting swamped in global analyses, preferably meta-analyses, and ever more complex models — while actual insight about real-world systems counts less and less.
This takes me to my second least favourite reason for rejection: methodological flaws. While fundamental methodological flaws do exist, most of the time such judgments are entirely subjective. What’s flawed to one person is brilliant to another. As a reviewer of many papers, I know that I can recommend rejection for just about anything if I want to — convincingly. Reviewers are highly skilled at “finding what’s wrong”, but few reviewers these days hold a paper at arm’s length and judge whether, overall, the paper is good or not so good.
Together, these two trends (“global” and “methodologically strong”) mean that innovative, place-based work is hard to publish. What works is following standard procedures (which counts as good methodology), and what works is providing global maps, essays or meta-analyses.
To my mind, quality control is well and truly broken in conservation journals, and in sustainability-oriented ecological journals. Much of what is rejected is “good”, and much of what is published is no better than what was rejected. The race for scarce space in journals has elevated a single criterion above all others: the subjective opinion of those involved in the peer review process.