“Not global enough” and other reasons for reject-without-review

By Joern Fischer

When you do research, you should publish it. And if you think your research is any good, you want to publish it in such a way that people will actually read it. This means you want to publish in a “good journal”, rather than in the Proceedings of The Unimportant Second-Rate Scientists (or, say, the Journal of Universal Rejection).

Looking at my own papers, especially recent ones, I find that rejection without review has become increasingly common. I don’t think my science has got any worse though … in fact, I’m pretty sure it has got better over the years, both methodologically and in terms of the questions it addresses. But, invariably, getting published has got harder.

The two main reasons for rejection that I have encountered in my own work in the recent past are “not global enough” and “methodologically problematic”. For various reasons, I find both of these extremely unsatisfying.

First of all, when it comes to interdisciplinary sustainability science involving ecology, lots of people say we need it, but few people actually do this kind of work. Yet it seems easier to publish yet another theory or framework on some kind of global something-or-other than a high-quality, place-based case study. In my experience, case study work can be rejected even when it is done well and clearly linked to a global discourse. I see this as fundamentally problematic at a time when sustainability consequences play out in places, in landscapes: we can’t address all of these problems “globally”, and certainly not only by looking at the global scale. Place-based research, and the insights it generates, is vital. My own reading of much recent literature is that we are getting swamped in global analyses (preferably meta-analyses) and ever more complex models, while actual insight about real-world systems counts less and less.

This takes me to my second least favourite reason for rejection: methodological flaws. While fundamental methodological flaws do exist, most of the time such judgments are entirely subjective. What’s flawed to one person is brilliant to another. Having reviewed many papers, I know that I can recommend rejection for just about anything if I want to, and do so convincingly. Reviewers are highly skilled at “finding what’s wrong”, but few reviewers these days hold a paper at arm’s length and judge whether, overall, it is good or not so good.

Together, these two trends (“global” and “methodologically strong”) mean that innovative, place-based work is hard to publish. What works is following standard procedures (which counts as good methodology) and providing global maps, essays or meta-analyses.

To my mind, quality control is well and truly broken in conservation journals and in sustainability-oriented ecological journals. Much of what is rejected is “good”, and much of what is published is no better than what was rejected. The race for scarce space in journals has elevated a single criterion above all others: the subjective opinion of those involved in the peer review process.

5 thoughts on ““Not global enough” and other reasons for reject-without-review”

  1. Very good points!
    I must agree with the problem of publishing case study research. I have very limited experience as a researcher compared to yours, but I am finding the same barriers. I ask myself: “How can we publish a global solution to context-dependent problems?”
    I can only persevere in submitting to good journals and keep my fingers crossed.

  2. “While fundamental methodological flaws do exist, most of the time, such judgments are entirely subjective. What’s flawed to one person is brilliant to another.”

    About methodological flaws: Friston, K. (2012) “Ten ironic rules for non-statistical reviewers”, NeuroImage, 61, 1300-1310 (http://dx.doi.org/10.1016/j.neuroimage.2012.04.018) is an amusing read about creating stats problems where there are none.

    That being said, I don’t agree that judgments about methodological flaws are fundamentally subjective. Sure, there will be some borderline cases, but I think in most cases we should be able to agree on how to best analyze and interpret a problem. It would shake the very foundations of science if our interpretation of evidence was a matter of taste, wouldn’t it?

    I would say if one person thinks this is brilliant and the other person thinks it’s wrong, then at least one of the two is mistaken!

    • Hi Florian, and thanks for your comment. I think in theory you are right. But in practice, I’m a lot less sure … people get obsessed about which kind of model selection is right, for example; they get obsessed about which kind of data collection is appropriate; they get obsessed about whether or not one needs to account for differences in detectability, or whether one needs to use species richness estimators; some reject all things Bayesian out of hand … and so on. Of course, some things are plainly “wrong” (like 2 + 2 = 5), but unlike you, I think such clear cases are uncommon in “good journals”. Most stuff that is submitted to “good journals” is somewhat good. And it’s essentially a matter of taste as to whether it’s “good enough”.

      Even PLoS One, where the sole criterion is methodological rigor, is not free from this problem. One of my now well-cited papers was rejected there because of methodological flaws (it had to do with the “correct” field survey method), before being accepted in another equally or more reputable ecology journal.

      So … I wish I could agree with you, because I would be less jaded if I could. But unfortunately, I don’t agree: I find it largely subjective, and a complete gamble — I’d propose that the more innovative the work, the more random the review outcome…

      The reason I think this has got so much worse is that space restrictions in top journals have become very severe, meaning that reviewers no longer look to provide useful comments to improve papers, but instead have become hyper-critical and are happy to recommend rejection. And editors are grateful for this, because they need to bounce lots of papers.

      Anyway, an admittedly cynical and unhappy view of the reviewing world, but to me at least, this is pretty much what it looks like now …

      Thanks again for your comment!

      • Hi Jörn,

        I totally agree that papers are in practice being rejected for bad or wrong reasons, and that people are getting mixed up about silly issues.

        The thing I’m worried about is that from this it’s tempting to conclude that there is no “right” way to do an analysis and it’s all a matter of taste and politics anyway (not sure whether you were suggesting that), while I think it’s probably often simply a matter of at least one person being plainly wrong.

        Not sure what to do about this, though. I agree that editors may often be too quick to bounce a paper based on some hastily written comments, so we could say they should take more time to think about it and allow the authors to appeal against technical criticisms; but of course there is no time for that, because we are publishing too many papers 😉

  3. Pingback: Paper recommendation: Rejecting Editorial Rejections Revisited: Are Editors of Ecological Journals Good Oracles? | Ideas for Sustainability
