This year we published a paper about how bats respond to different environmental gradients in a south-eastern Australian grazing landscape (Hanspach et al. 2012, J. Appl. Ecol.). The paper is a solid piece of work with a well-planned design, some nice stats, interesting (though not ground-breaking) results, and it is quite readable. That, at least, is what we thought before we first submitted it. During this particular publication process we had doubts about it various times, but read for yourself how it went:
Journal 1: submitted April 2010 – Rejection without review
Journal 2: submitted June 2010 – Rejection without review: lacking generality
Journal 3: submitted June 2010 – Rejection without review: too bat-centric, too applied, lacking novelty
Journal 4: submitted July 2010 – Rejection without review: does not match scope of the journal, lacking novelty, lacking relevant literature
Journal 5: submitted August 2010 – Rejection without review: lacking conservation relevance, lacking novelty
Journal 6: submitted August 2010 – Rejection after review (Dec 2010) – Hooray, we made it one step further in the review process. We were pretty happy. The arguments, however, were anything but compelling: it took the editors and the referees more than three months to reject us with comments like "More COMPLICATED analyses using multi-variables are used these days for such a study" [caps are mine], "not sure if sampling was adequate" [we had performed a pilot study to test sampling], "rephrase the sentence", "some conclusions … may seem inappropriate", and so on. The two reviews had a total word count of 302.
(BTW, if you ever feel tempted to argue with an editor about the quality of a particular review process he or she is responsible for, don’t do it. It is a waste of time and energy…)
Journal 7: submitted January 2011 – Rejection without review: just not interesting enough
Journal 8: submitted January 2011 – Rejection after review: this time the comments were a bit more substantial than those from Journal 6, mostly criticizing the quality of the data collection (despite us using more acoustic monitoring data than the vast majority of comparable studies)
Journal 9: submitted May 2011 – Rejection with resubmission encouraged – Resubmission – Revision – Submission of the revision – Provisional acceptance – Submission of revision – Immediate accept (May 2012)
(All these journals have a more or less ecology/conservation focus, and their 2010 impact factors range between 3.2 and 4.9.)
After a procedure like that, it always feels like random luck when your manuscript makes it past the editor (random with a low probability – here 0.22 – of reaching the reviewers). And even if it does, you may end up with hasty, low-quality reviews, and the editor does whatever he or she likes with them (while being unwilling to be challenged on it). Well, I don’t want to be too negative about the review process; I generally like the concept, and with initiatives such as Peerage of Science I see promising projects to overcome its shortcomings. Of course, I don’t mean to say that you just have to submit your manuscript often enough and in the end it will be published somewhere (just by chance). After all, the journal that published our study had the highest impact factor of all those we submitted it to.
So what? Does this happen to you all the time? I hope not. Luckily for us, this has been the exception so far, and I hope it won’t happen again.