Science is a collaborative endeavour. People find things out, write them up and disseminate the results, telling their colleagues and the world at large what they have done – and they read up on what everyone else is doing too, to guide their own way.

Most people are familiar with the concept of a scientific journal – the practising scientist makes an observation, writes a report and publishes it in a funny-looking journal. The problem, at least according to Randy Schekman, is that the race to the top, to be published in the most prestigious journals, distorts the scientific process and favours eye-catching research over solid research.

In an article for the Guardian, Schekman, Nobel laureate and Professor of Cell Biology at the University of California, Berkeley, says he will be boycotting top-ranked scientific journals such as Nature and Science, presumably in favour of less prestigious but more balanced publications. This highlights a looming problem the scientific community currently faces: publication in such journals is highly desirable, yet they often provide an inaccurate representation of the field. Top journals commonly favour either dazzling results or controversial ideas, picking a very select subsection of all scientific material published, biased towards the headline-grabbing. As such, scientists are encouraged to work on fashionable or contentious topics, and fields that are not seen as ‘up and coming’ fall out of favour.

I would like to argue that the problem is not the race to the top – it’s that we hire the best climbers.

While there is a selection bias towards exciting, cutting-edge research, that is human nature – we are attracted to it, and scientists have the freedom to choose which topics to study. The idea of ranking scientific journals is also not surprising in itself; it is useful to know whether an article is a game changer or more of a run-of-the-mill finding, based on where it appears in the literature.


What gets to the core of the issue is that publication bias is sustained by the academic assessment system. Publication in prestigious journals is seen as necessary for success in academia – the more of these you accumulate, the better your chances of landing a job or securing a grant. For institutions, it forms the core of their assessment: laboratories, departments and universities are ranked on their ability to continuously produce high-impact publications, which directly affects their ability to secure the funding they need to keep operating.


So while Prof Schekman has taken a valuable stance in highlighting the deficiencies of the current system, it does little to propel change. Change would come from the places where value is assigned to such things – the meeting rooms of grant-awarding bodies, populated by the likes of Prof Schekman. Indeed, he would be better off saying “I will judge applicants based on the quality of their work and not the impact factor of their citation list” rather than “I won’t publish in Science”.

Science has always been about understanding the world around us, so let us reward those who help us see the vistas, and not those who race to the summit.

—————————————————————————————————

The original article was published in The Guardian, with further journalistic coverage here and here. For more information on academic rankings, see the Wikipedia articles on Impact Factor and H-index.
