By Joern Fischer
Some of you may know of the Faculty of 1000. It’s a bunch of supposed “experts” who recommend papers that might be interesting to others in the field.
The Faculty of 1000 has recently launched its own research journal for the life sciences (i.e. including things like conservation biology but not including truly broad conceptions such as sustainability science) — it’s called F1000 Research. This journal is pretty different to most (all?) others out there at the moment.
When you submit your paper, it immediately goes online. All data that was part of the work (where reasonable/ethical/legal) must be provided, too. Once a paper is online, people can review it and declare it basically sound or not. If a paper receives no "basically sound" ratings, it gets taken off again. If it does receive "basically sound" assessments, in-depth reviews can then be written for it. The authors can consider those reviews and submit a new version of their manuscript. So, that part is like normal peer review in a way, apart from the fact that it's all completely open and transparent, including who reviews, and the whole world can witness the whole process.
From what I understand — and I’d be interested in comments on this — the initial “basically sound” assessment can only be made by members of the Faculty of 1000. But then detailed peer reviews (I think?) can be submitted by anyone. I am not sure though if I understood this part correctly, so if you can confirm this (or otherwise), please use the comments field below.
And the whole thing is open access, so anyone can read it.
In many ways this sounds kind of neat. What do you think? Is this the way forward? Should we be publishing there, even though there’s no impact factor yet, and few people know of this journal? Is it kind of like PLoS One, or is it way more radical and hence better or worse?
I am trying to make up my own mind about this new journal, and for that reason, I’d be really interested in what readers of this blog think. Thanks in advance for commenting!
I’m keen to try publishing with F1000, but I can’t speak with any authority on whether it’s a good idea… 🙂 I think the model is much preferable (and echoes something our more physical-science colleagues have had for years in arXiv, and our social-science colleagues have, to a lesser extent, in SSRN, though in both of those cases they are “repositories” and not journals). So I suppose the main question for me is whether or not F1000 represents an improvement on these two systems, which each seem to have very little gate-keeping (e.g., initial review of soundness) compared to F1000 but have nonetheless come to be regarded highly (though this probably reflects the fact that many good researchers have posted in those forums, rather than the idea that everything in those forums is good).
On the surface, it would seem like a *journal* that shared some key commonalities with the (arguably quite successful) models of arXiv and SSRN would be a great thing. I certainly support the transparency. The question would seem to be how career gate-keepers–grant review panels and tenure review panels–come to view this type of thing, which in part depends on who publishes in it and how much attention it gets and how good it is, which in part depends on how people up and down the reputation chain think about it, which in part depends on… 🙂
One thing I was unsure about is if it is only F1000 members who can do the peer review — that would seem “wrong” to me. I’ve found that unclear in the bits I have read so far. Does anyone have a clearer understanding than me on this?
Hi, I’m the Outreach Director for F1000Research and happy to clarify:
The initial check that all papers receive before they go online is done by editors in-house. This is a very basic check, to make sure the submitted paper is about biology or medicine, and written in English. They also check whether all data is included with the paper (a requirement!), and they get back to the authors if something is missing or not clear.
These in-house editors also find reviewers. Some of the reviewers may be F1000 Faculty Members within the paper’s specific discipline, but many of them are not. You can easily tell if a review was written by an F1000 Faculty Member by clicking on their name. In their profile you’ll see their status at F1000. (If they aren’t linked, they are not Faculty Members. At the moment I believe that all the ones that *are* linked are Faculty Members, but that may change in the future.) Although not all reviewers are F1000 Faculty Members, they are all invited by the editors. Not just anyone can review papers.
These reviewers are the ones who then determine whether the paper is “sound science”. It’s a phrase that you may know from other publications as well, and it just means that the research doesn’t have to be paradigm-shifting and groundbreaking, but it can be a study with negative or null results, or a replication or refutation study. That’s all perfectly sound science, but we do rely on experts to identify whether it is sound or not.
So the detailed peer review *is* the step where “sound science” (vs. “important” science) is determined. The initial check is done to make sure it is fit to send out for review in the first place. (It’s very similar to the process at other journals, only at most journals you can’t see the paper until much later in the process.)
Thank you for the post and for your interest in our new model of publishing. I just wanted to clarify a couple of points and queries raised in the post and the subsequent comments. The pre-publication checks are done by F1000Research internal staff and cover things like plagiarism, ethical issues, readability, and making sure we have all the data and adequate protocol information. If an article passes that check, it is formally published but clearly labelled as awaiting peer review.
The peer review process, which is post-publication and open (as in the names, affiliations and all comments from the referees are published), is conducted by invited expert referees (as normal) – these can be members of the F1000Research Editorial Board (http://f1000research.com/editorial-board) but they certainly do not have to be on the board (and many in fact are not). We check all names carefully before they are invited for potential conflicts and they are also asked to publicly declare any conflicts when they referee. Additionally, anyone who provides their full name and affiliation for publication, and who is a researcher, can also comment on the article (these comments are clearly labelled as distinct from the referee reviews).
Although you are correct that we do not yet have an Impact Factor (new journals need to have published for at least 2 years to gain one, as the calculation is based on the citations of the previous 2 years’ worth of published articles), we have been accepted for indexing in PubMed, Scopus and other bibliographic databases. Articles are indexed once they achieve a specific level of positive peer review (2 ‘Approved’ statuses, or 2 ‘Approved with Reservations’ statuses plus 1 ‘Approved’ status). Many high-profile authors have already published with us, many have been funded by the major funders, we have a very highly regarded Advisory Panel supporting us (http://f1000research.com/advisory-panel), and we have received general support for the concept from the likes of the Wellcome Trust, who published a blog from us on our launch: http://blog.wellcome.ac.uk/2012/02/02/f1000-research-post-publication-peer-review-and-data-sharing/.
We would of course be very pleased if you were to test out our publishing model – if you wanted to submit something and then write a post on how you found the process afterwards, then we would be happy to provide you with a submission charge waiver.
Thank you very much for both of these comments, dear F1000 crew! That all makes it much clearer — good to see you’re shaking up the system. Thanks also for the offer to waive submission charges … I’ll certainly consider it.
For no reason other than that it is an interesting alternative perspective, readers may find this very critical view on F1000Research interesting:
http://scholarlykitchen.sspnet.org/2013/01/15/pubmed-and-f1000-research-unclear-standards-applied-unevenly/
For some reason I can’t comment on the Scholarly Kitchen blog, but it seems like his main issues are: a) sloppiness and uneven application of standards, for which he has evidence that some guidelines do not appear to have been uniformly followed, but little evidence that the problem is pervasive; b) self-selection of reviewers, which does not actually appear to be the case (though perhaps F1000 could be clearer about this); and c) existing peer reviews that do not appear (from his viewpoint) to have been deep and searching. On the last point, I only wonder how many journals and publications he’s been involved with; I have seen no small number of 1-2 sentence reviews come back from the highest-ranked journals. There is certainly no lack of reviewers within the opaque form of the process who are too cursory or peremptory. It would seem like his argument could be turned on its head: a seemingly peremptory review could be a warning flag to those looking at the article, prompting them to judge for themselves, rather than a process where the level of concern and thoroughness of review is unknown.
Though it does appear to be an open question what happens to a “rejected” piece that one doesn’t care to “re-submit” to F1000 in particular, or how many times one can effectively re-submit if there are serious critiques of the work.
Just to clarify a couple of points on that Scholarly Kitchen post. I addressed the suggested issue of uneven standards regarding what counts as indexed in a subsequent post (http://blog.f1000research.com/2013/02/18/points-of-clarity/), as I realise it is confusing. In summary, the level of positive review required for an article to become ‘indexed’ became more stringent in 2012, once we received approval for indexing from PubMed. We couldn’t retrospectively amend those we had published in 2012, and so there is a difference in what counts as ‘indexed’ between articles published across the two years.
Self-selection of reviewers – authors do suggest referees (as in fact happens on many other major journals, e.g. BioMed Central) but we do check the suggestions carefully and often have to reject some of their suggestions.
Peer reviews – we took the comment about short reviews on board, and more recently the length of our reviews has generally increased. As you say, review length is normally variable; it is just that the reader doesn’t normally get to see this.
Thanks to the F1000 staff for their engagement on this. But something still very much bothers me. It appears, from my reading of the site, that you have to pay the publishing fee before the open peer-review process; and if, once a paper passes the initial validity check but before review, it is “formally published but clearly labelled waiting peer review”, there is the possibility that it can be published but subsequently rejected, with the author now having paid for the privilege of being unable to submit the rejected manuscript elsewhere (as it would be a “duplicate publication”). I would very much like to understand what happens to a “fully” rejected piece, as it appears that it stays “published” but is not indexed and perhaps not linked to through the site-based search. Is this the case? Do rejected publications cost the author(s) $1000 to be locked in, posted online, without the possibility to re-submit somewhere else?!
Sorry if I’m missing an obvious point here, I searched around for a bit and could not easily find what happens when the process doesn’t go so well for the authors and the two “Approved”s are never received.
Just to re-iterate some of this comment: A big thank you from me, too, to Rebecca for engaging with this blog post! We are definitely in agreement that new models of publishing are needed; and I commend F1000Research for trying such a new model. Trying something new is always hard — I remember PLoS One initially was considered a “junk journal” by many but by now, there are many very good papers in it. That said, of course, part of the journey is clarifying points such as that made by Jahi above… so an answer indeed would be quite interesting.
On a personal note, much of what I am now engaged in that is reasonably “important” is a bit interdisciplinary, and so the straight life-science focus of F1000Research doesn’t fully suit me (I’d prefer, for example, PNAS Sustainability Science and paying for open access there). But there is a section for conservation in F1000Research, after all, and that leaves a lot of options.
Always happy to try and clarify points of confusion (and amend the site to make things clearer). The ‘Not Approved’ peer review status is only used if the reviewer thinks the article is actually just ‘bad science’. It is very rare that an article is so badly done that all the reviewers will give that status, and if they do, it is likely that the work really has been done badly. In that scenario, you really don’t want such work propagating (currently, almost anything can get published in a ‘peer reviewed journal’ if you are persistent enough as an author, wasting the time of readers and numerous sets of referees down the chain of different journals it was submitted to). Authors can revise their manuscript as many times as they wish, at any time they wish, and new referees can be invited for subsequent versions if the authors feel that the initial referees simply had a skewed position against their work. Hence there is no reason why any piece of good science wouldn’t get Approved and thus indexed.
As you say, Joern, we are a life science journal, but in the broadest sense: we have a whole section covering ecology, evolution, conservation, climate change, etc., and we add new related sections at the various boundaries of biology and medicine with their neighbouring disciplines as we receive content in those areas.
Relevant to our discussion here, from our colleagues in linguistics and economics (and cancer biology and “omics”, and on…):
“…of course there’s a spectrum of error, delusion, and fraud, from simple coding errors to hidden choices about data exclusion, data weighting, modeling choices, hypothesis shopping, data dredging, and so forth. The current system of peer review, although it often delays publication for two years or more, does a very bad job of detecting problems on this spectrum. A more streamlined reviewing system, with insistence on publication of all relevant data and code, and provisions for post-publication peer commentary, would be much better.”
–Mark Liberman, “Importance of publishing data and code”, Language Log, April 18, 2013 @ 5:11 am
Thanks to Prof. Fischer for initiating this discussion, to AgroEcoProf for the interesting comments, and to Faculty1000 staff for the useful information! I support the Faculty of 1000 initiative and especially appreciate the transparent review process. Good luck!
Reblogged this on Twin Lens Abstract and commented:
Very interested in this – currently preparing a paper and trying to decide between PLoS One (which is more established) and F1000 (who have approached us about showcasing our proposed publication), and leaning towards F1000, despite the lack of impact factor (really, do they still matter? Are individual citation metrics not more useful?).
Will keep you guys posted!
Hi Rosie — note that F1000Research is now indexed in the Web of Science, which means it will get an IF within two years from now (or so)! So, basically, it’s a few years behind PLoS One in terms of being indexed, but otherwise it is now also indexed by “the giants”. — J.
Great! I think F1000 is very innovative, and hope open access is the future of publishing!