By Joern Fischer
Among other points, and together with various colleagues, I have recently argued that “more” in the research world may not equal better. Part of this argument related to large research consortia and large research groups — both characterised by “more” people. Before being overly negative about such large groups, I’ll repeat a disclaimer I made previously: some large groups are run exceptionally well; and large research consortia can help to foster new linkages between people (often from different countries) who may never have met otherwise.
This disclaimer aside, I’ve had a simple (perhaps stupid or simplistic?) idea for an analysis. I’m not going to do this analysis anytime soon due to a lack of time. But if someone out there is keen to pursue this idea … go for it, and I’d be interested in the findings!
The idea goes like this. Presumably, there are two reasons why we’d want to use performance measures. One, we want to reward good science; and two, we want to avoid wasting money, i.e. spend it efficiently. On the notion of efficiency, Lortie et al. have recently shown that rewarding the largest groups may not be the best way forward. Thinking a bit more about what they did, and combining it with my own purely intuitive bias that I’m somewhat skeptical about large groups … I wondered if it would be possible to do the following:
1. Look up a bunch of research groups that are clearly identifiable as groups. If you look at my website (don’t bother, but you could), for example, you find that my “team” is listed there.
2. Count the people in such research groups.
3. Look up the number of citations per meaningful unit of time (e.g. year) for the team leader of such groups.
4. Do this for a large sample of sensibly chosen people/teams.
5. Calculate the average number of citations per person (perhaps count PhD students as equal to half a postdoc?).
6. Graph the average number of citations per person as a function of team size.
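For anyone tempted to try it, the steps above could be sketched roughly as follows. Everything here is illustrative: the team records are made up, and the 0.5 weighting for PhD students is just the tentative suggestion from step 5, not an established convention.

```python
# Sketch of steps 2-6: compute citations per effective team member
# and order the results by team size, so the curve can be inspected.
# All numbers below are invented example data, not real measurements.

# Hypothetical records: (postdocs, phd_students, leader_citations_per_year)
teams = [
    (1, 2, 120),
    (2, 4, 300),
    (3, 6, 450),
    (5, 10, 700),
    (8, 16, 900),
]

def citations_per_person(postdocs, phds, citations, phd_weight=0.5):
    """Citations per effective team member (a PhD student counts as
    half a postdoc, as tentatively suggested in step 5)."""
    effective_size = postdocs + phd_weight * phds
    return citations / effective_size

# Build (effective team size, citations per person) pairs, sorted by size.
results = sorted(
    (postdocs + 0.5 * phds, citations_per_person(postdocs, phds, cites))
    for postdocs, phds, cites in teams
)

for size, cpp in results:
    print(f"effective size {size:4.1f}: {cpp:6.1f} citations/person/year")
```

Plotting `results` (step 6) would then show whether citations per person rise, plateau, or bend downward as effective team size grows; in this toy data, the largest team happens to score lowest per person, but real data could of course look entirely different.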
If the only costs of research were staff costs, this would give an indication of “optimal” team size — maximum “citation bang” per person. (Am I wrong?)
My prediction: at some point, the curve will bend down; overly large teams will, on average (exceptions will be noteworthy!), not perform well per person.
That is my prediction. I have not done this analysis, so may be completely wrong. I’d say it would need to be done systematically, somehow, to be worth considering. And then you can publish it in Oikos … 🙂
Is this complete nonsense? Anyone willing to invest the energy? Anyone keen to suggest a protocol for how to actually do this systematically, or does it sound like a waste of time? Feedback welcome!