By Henrik von Wehrden
1) Scientists are not Johannes Gutenberg
Journals have instructions for authors that state how manuscripts should be formatted. This is generally a great idea, yet one might wonder why they so often differ. In the process of getting a paper published, one might receive rejections, and thus have to move on or down to another journal without changing much of the text. Still, this may take hours, since references need to be reformatted, documents need to be sliced apart so that each figure sits in its own file, and so on. This process is nonsense and a waste of everyone’s time. We do not have time for this, and we may be overqualified for it, although the results sometimes prove we are not (at least that includes me). Is this really necessary? Some organizations, such as the ESA, allow you to submit a single manuscript file, double-spaced, with lines numbered throughout. A few more details, and you are done, all submitted. It would be even better still to prepare a preliminary layout, with figures and tables included in the text, and leave it at that. It would save a lot of time, and nerves. Scientists should work as scientists, not as letterpress handlers.
2) Big names trigger big papers – per se?
Just recently I reviewed – and rejected – a paper from someone who is supposed to be a big shot in his field. It took me quite a while to get past his name and focus on the actual work, which contained many errors. I believe my decision was justified. Still, it is puzzling that it took me so long. Some journals have shifted to double-blind reviewing, yet most journals in ecology and related fields let the reviewers know who the authors are, while authors learn a reviewer’s identity only if the reviewer chooses to reveal it. Reviewing should be double-blind, always. Otherwise we fail to tackle a whole set of problems, such as gender bias. Personally, whenever I bump into problems while reviewing a paper, I check out the authors. This then usually makes me wonder how they could make this mistake, or overlook this error, since they already have good papers published. I would like to have this burden lifted off my back. In the age of Google one might argue that I would find them anyway, since the manuscript is listed as “submitted” on their homepage – but then that is their decision.
3) Great editors vs. subjective gatekeepers
I firmly believe that most editors do a great job of moving science forward. They are underpaid, understaffed, and often targets of emotional attacks. Still, we have all at some point failed to understand a certain decision, and spat out the name of a specific editor in anger. How shall we solve this – how can we move beyond subjective or personal decisions? I suggest that we add another level of control: the journal could publish key information on its review process. Besides the impact factor, which you always get, and the turnaround time, which you sometimes get, there is more. How about reviewer decisions, broken down by category? And the resulting editor decisions? And the re-submission decisions? Maybe even something more detailed: how about letting authors rate the reviewing process, also in cases of rejection? Some decisions are fair, others are not. Of course there would be irrational complaints, but if, let’s say, 70% of authors complain about a certain editor, that is a gatekeeper, and the gate locks a dungeon. We cannot claim to conduct objective science while evaluating it on purely subjective criteria. I strongly believe that good journals would benefit from another level of evaluation.
…to be continued