A PhD nightmare: how a ‘safe’ paper turned into a ‘horror’ paper

By Ine Dorresteijn

Recently, the last paper from my PhD was accepted for publication. The paper describes the impact of current and potential future land-use intensification on bird species richness in Transylvania, Romania. Although the paper is maybe not groundbreaking, I always thought it was still a relevant contribution to the scientific literature: it was based on a large field effort, it was statistically sound, and it was well written. A solid paper. Instead, getting the paper published has been a tough ride. While we thought bats were difficult to publish (see our previous blog post on a rejection journey five years ago), we have now seen that birds can be even harder to get into journals. Ironically, this paper was considered the ‘safe paper’ of my PhD work. I was one of those lucky students who was part of a well-planned research project with great supervision. The bird work of my PhD was carefully planned and designed, was based on pilot studies, and was set in a region rich in (protected) bird species. Very soon, however, my ‘safe’ paper turned into my ‘horror’ paper, with high levels of frustration, a shattered confidence, and, in the end, lots of sarcasm and laughter.

Here is the story of how my ‘safe’ paper turned into my ‘horror’ paper.

Journal 1: Submitted Dec 2013, rejected with review Feb 2014: Lacking novelty and generality, and lacking clarity and focus of the analysis.

Journal 2: Submitted Feb 2014, rejected with review Mar 2014: Too broad discussion and lacking strong conclusions/management recommendations.

After these first two rejections, we made major changes to the manuscript. We narrowed down the manuscript considerably by deleting a part on species traits, and worked on the clarity of our methods section.

Journal 3: Submitted May 2014, rejected without review: Not general enough in concept, scope and approach.

Journal 4: Submitted May 2014, rejected with review Sep 2014: Lacking novelty.

Journal 5: Submitted Oct 2014, rejected with review Dec 2014: Lacking novelty, and lacking clarity in the methodology and results. As one reviewer put it: “having a more complicated and complex design than other studies should not stand for novelty in scientific research.”

By the time the paper had been rejected five times, I was pretty desperate, and frustrated to hear over and over that the study lacked novelty. I figured we could not change much about the novelty of our study’s outcome. However, another frequent critique concerned the clarity of the methods and results, something I thought we could improve. Therefore, to give the paper a new and fresh boost, we brought in a new co-author. We re-analysed the entire paper focusing solely on species richness (taking out a part on bird communities), rewrote the entire paper for clarity and to put it into a broader context, and even added some pretty pictures to illustrate traditional farming landscapes. With our paper in this new jacket, I was convinced we would be luckier in the review process.

Journal 6: Submitted Jun 2015, rejected with review Aug 2015: Methodology limited the study’s conclusions and its capacity to go beyond a regional example. For example, one critique was that the model-averaging approach we used poses limitations and that regression coefficients should be used instead.

Journal 7: Submitted Aug 2015, rejected with review Sep 2015: Flawed study design, deemed uncorrectable without significant re-analysis. Although reviewer 1 had significant problems with our study design, reviewer 2 seemed less unhappy: “The study is well introduced (I particularly liked the introduction of traditional farming landscapes), the study design is appropriate, the analyses generally robust (although please see comment below), the results clear, and the discussion well considered.”

Journal 8: Submitted Nov 2015, rejected with review Dec 2015: Methodology – given our objectives and sampling design we used the wrong analytical unit.

Journal 9: Submitted Jan 2016, rejected with review Feb 2016: Lack of novelty, trivial findings and not taking into account the rarity of species (something we had excluded from the manuscript due to other reviewer comments).

Journal 10: Submitted Feb 2016, rejected with review June 2016: Goal of the work not addressed.

Journal 11: Submitted Sep 2016, Minor revisions Jan 2017, Submitted revised manuscript Jul 2017 (after maternity leave), Accepted Jul 2017. Hurrah, the reviewers liked the paper a lot!!

Having had 10 rejections on this paper, mostly after review, means that approximately 25 (!) reviewers were involved in getting this paper published. Importantly, probably half of those reviewers could have been satisfied with major revisions. As in the example under Journal 7, usually one of the reviewers did not dislike our paper that much, but I guess one more negative review is enough for a rejection. Even more interesting, we published two similar papers on butterflies and plants from the same region, based on the same study design and using similar analyses. While this paper on birds drew continuous critique that our methodology was unclear, flawed, or limited, the papers on plants and butterflies received positive, constructive reviews without many complaints about their novelty and/or study design. I am still not sure why this paper had such a hard time (is it just birds, or something else?), but I am happy it is finally out there! Enjoy the reading, and you can always contact me for further clarifications on its methods or novelty 🙂

 

 


9 thoughts on “A PhD nightmare: how a ‘safe’ paper turned into a ‘horror’ paper”

  1. A pity that you had to get rid of the species traits and bird communities – those are the most interesting pieces! I hope you were able to publish them somewhere 😉

  2. Wow… That indeed sounds like a nightmare. But you were still lucky in one respect: I was surprised, while reading your post, that your reviewers were always very quick. I have repeatedly had the first stage of the review process take a couple of months (up to almost half a year). Thumbs up for perseverance, though!

    • From this post, I understood that we should not give up. We always have to struggle to reach our goal. In the end, hard work is rewarded! Thank you for sharing your experience, Dr. Joern Fischer.

  3. I sympathize with you, of course. This nightmare is a consequence of a crowd-based system that is currently approaching its limits in many senses.

    But I think we have to go a little deeper and ask ourselves why these situations arise. You describe these journals the way they are perceived by many scientists, young and not so young: as if there were a big and opaque authority that somehow cannot figure out what appropriate standards are.

    But you must recognize that these journals are … us, or most of us; that is, those of us who devote part of our time to editorial roles, including the meticulous reviewing of the tens of papers each scientist submits every year. How can we ensure that truly common standards are established and maintained by the diverse community we collectively represent?

    I do not want to excuse the weak performance of the system, and I am the first to admit that fundamental changes are necessary. But also in this sense, I think we need to preserve diversity. Personally, I prefer to have some editors and reviewers that are way off from what I consider acceptable standards of novelty as opposed to having a rigorous and formalized authoritative institution that decides, once and for all, about quality in scientific papers.

    • Dear Wolfgang — I agree! Peer review is currently the best “bad system” we have. I actually don’t think the problem is so much the review, or particular editors or reviewers, but as you also indicate mainly the ever increasing volume that must be “rejected” somehow … based on novelty, methods, etc — some of which just ends up basically a judgment call. From a systemic perspective, I take this as an invitation to all of us — especially senior, established scientists — to publish something when it’s really worth saying, and not otherwise overload the system with “junk” just for the sake of publishing it. For junior researchers, this is of course more difficult, because they simply need to get stuff out in order to show they can do it. And for them, persistence is key. But some “discipline” — focusing on producing quality work, rather than thinking about how to get yet another trivial thing out just because we can — would help all of us, I think. — J.

      • I think, beyond senior, established scientists publishing “quality” over “quantity”, it is also up to the senior scientists who shape how junior scientists are reviewed to engage in perhaps “academic civil disobedience”: advocating for and supporting quality over quantity when reviewing junior academics, and looking more favorably on a young scholar with a few solid publications rather than (in my experience so far) focusing on numbers over almost everything else (with flawed impact/citation measures and perceived prestige playing in along with numbers). It seems to me this is something that can be done, to some extent, immediately; it just requires some discomfort on the part of more senior scientists in pushing against their administrations and peers. It continues to be ridiculous to me that, in some circles, the prestige of the journal still stands in for the quality of the work itself; if we “don’t have time” to directly evaluate the content of the work for ourselves, then I propose that we apparently “don’t have time” to do the scientific enterprise properly! In other words, we must find the time; it is ridiculous to judge our peers and mentees by the average number of times other articles in the journals they publish in have been cited, an often spurious indicator of individual quality if there ever was one, as most of us acknowledge. We can advocate for, and judge grant and job applicants and junior colleagues by, standards we consider more appropriate. That should help, in part, filter down to what kinds of things people seek to publish (and how much of it they do).

  4. Hi Jahi – in response to your comment above. Yes, I agree, we all play a role in collectively identifying, slowing down, and gradually eradicating the nonsense, while somehow also “surviving” in the existing incentive system as it is. Leading by example and encouraging “the next generation” to do likewise is very likely to be part of this. Thanks for your continued engagement! – J.

    • Interesting debate, and it should perhaps be moved to a separate thread. Just briefly again: I did my PhD in 1986, and this was a transition period in which our professors were surprised that journal publications were becoming important even for students. I think I had huge advantages back then because I had ONE paper in a reasonably reputed journal. Maybe what is needed now is a new transition, a “post-journal-paper-hysteria” transition, but it needs to be orchestrated in a way that does not put career opportunities at risk. And having been on many selection committees, I know that established rituals will be hard to change!
