By Joern Fischer
So here’s an entry primarily dedicated to fellow publishing ecologists, conservation biologists and sustainability scientists. The new ISI Web of Science Impact Factors are out, for 2010. Within hours of them being released I had received an email from a colleague listing a few of the key ones. And of course I checked them out, in fact before breakfast … for my favourite journals, did they go up or down? Others also commented on the new impact factors as soon as they came out — clearly, it’s the kind of thing that makes scientists tick these days.
Like most other publishing sustainability scientists, I am also very aware of impact factors, and they influence where I submit my papers. The logic is obvious: if it’s worth writing, I may as well try to communicate it to a wide audience. Impact factors are a crude, but widely available measure of how widely read a journal is. This logic aside, our community of peers recognises ‘good journals’ largely by their impact factor — so if you want to have a strong CV, you should also try for journals with a high impact factor.
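For readers who haven't looked under the hood, the standard two-year impact factor is just a ratio. A minimal sketch of that calculation is below; the journal and all the numbers in it are invented purely for illustration:

```python
def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year impact factor: citations received this year to items
    published in the previous two years, divided by the number of
    citable items published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: in 2010 it received 1200 citations to the
# 300 citable items it published across 2008 and 2009.
print(impact_factor(1200, 300))  # -> 4.0
```

Note that because the numerator counts citations from everywhere but the denominator counts only "citable items", editorial choices about what counts as citable can move the number, which is part of why it can be gamed.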
While this is all well and good, I suspect the means have become the end when it comes to impact factors. They are meant to measure the importance of a journal. But journals now use specific tactics to increase their impact factor, which effectively means the impact factor has ceased to be a useful 'independent yardstick' measuring the quality of a journal. (This wording comes from a paper by Landres in the late 1980s on environmental indicators, by the way.) I would argue that there is no inherent difference in quality among some of the top journals. All of them have good papers, and all of them have not-so-good papers. But their impact factors often differ.
So what kinds of papers make for a high impact factor? Reviews, essays about global issues, analyses of global datasets. Those are the kinds of things that are relevant to a big audience, and they are eminently citeable for lots of people. Not surprisingly perhaps, I would guess (though I have no quantitative proof) that the number of essays, reviews and analyses of global datasets has increased in the leading journals in the last few years. This might be because that's the most useful kind of paper — or it might be that this is what journals are most keen to publish so they end up with a high impact factor.
The truth, I'm sure, is more complicated than my simple hypothesis. But I do think that the current impact-factor obsession has made it harder to publish regional case studies. From an editor's perspective, why publish a strong empirical paper about a particular region when, instead, a global meta-analysis of something or other takes up the same number of journal pages?
My argument, right or wrong, is that impact factors are part of a changing scientific culture — one that is no longer driven enough by what is useful and good, but more and more by what is globally prestigious, methodologically 'cool' and sells well. The means have become the end too often.
Assuming I'm right, what should we do? To my colleagues, I'd suggest you keep publishing in journals with a high impact factor whenever you can, because after all, that's how you get read. If that means using one 'cool method' or another because it's the current hype, perhaps that's okay. But don't bend over backwards to get into the top journals every time, either. It's healthy to ask yourself how your science is most useful, not just how it becomes most cited. In addition to those big global papers, we need work at a landscape or regional scale — which, incidentally, is the only scale at which meaningful stakeholder engagement is possible. Let's hope the leading journals will continue to make space for such papers, too.