[Comp-neuro] The Error Bars on Impact

Geoffrey Goodhill g.goodhill at uq.edu.au
Wed Jul 1 02:25:18 CEST 2009


The Editorial appended below may be of interest to readers of this
list. It appeared in Network: Computation in Neural Systems, 20:47-48
(2009).

Thanks,

Geoff

-------

THE ERROR BARS ON IMPACT

Geoffrey J Goodhill
The University of Queensland

The impact of a scientist is increasingly judged by quantitative
metrics. Three of the most important are the number of papers
published, the number of citations received, and the h-factor. While
much has been written on how useful such metrics actually are, far
less appreciated is the fact that their measurement is fundamentally
error-prone.

The standard databases on which most people rely to supply these
metrics are neither complete nor consistent. Both Medline and Thomson
Reuters (who own the Web of Science) will generally not index journal
issues published before the journal was first received by them, which
may be many years after the journal commenced publication. This means
that no papers in Network volumes 1-8 are listed in PubMed, and no
papers in volume 1 are listed in the Web of Science. Similarly,
volumes 1-5 of Neural Computation are not listed in PubMed, and
volumes 1-3 are not listed in the Web of Science. Google Scholar is
more inclusive, but still by no means comprehensive.

These databases are thus, by their own choice, incomplete. They do not
always provide a full record of an author's journal publications, nor
does the Web of Science always accurately report h-factors. These
databases are, of course, free to adopt whatever policies they like:
the problem arises when people evaluating scientists assume the
databases are complete.

A wider issue is the definitional problem of assigning sharp
boundaries to what are really continuous categories of article
types. Some conferences in some disciplines are far more selective in
what they publish than many journals, yet their papers are often
regarded as belonging to a category inferior to journal papers. On
the other hand, a conference may have an arrangement with a journal
to publish accepted papers, so that these papers end up categorized
as journal articles even though the conference may have had an
acceptance rate approaching 100%. Similarly, a journal might publish
a special issue for which the rigor of review more closely resembles
that of a typical book chapter than that of a typical journal
article.

Besides affecting the number of journal articles a scientist is
deemed to have published, this also affects their citation counts,
since normally only citations appearing in journal articles (and not
those in book chapters or conference proceedings) are counted in
h-factor calculations. While the Web of Science has now started
including some data from conference proceedings, this will of course
not be backdated.
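
To make the arithmetic concrete, the sketch below (in Python) applies
the standard h-factor definition: a scientist has h-factor h if h of
their papers each have at least h citations. The citation counts are
entirely invented for illustration, and the code describes only the
definition itself, not any database's actual algorithm:

    # Standard h-factor (h-index) definition: the largest h such
    # that h papers each have at least h citations.
    def h_factor(citations):
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                h = rank  # at least `rank` papers have >= rank citations
            else:
                break
        return h

    # Invented per-paper counts: (citations from journal articles,
    # citations from conference proceedings and book chapters).
    papers = [(10, 6), (8, 5), (7, 4), (3, 4), (2, 3), (1, 2)]

    journal_only = [j for j, c in papers]      # what is usually counted
    all_sources = [j + c for j, c in papers]   # the complete record

    print(h_factor(journal_only))  # 3: the h-factor a database reports
    print(h_factor(all_sources))   # 5: the h-factor with all citations

In this toy example, discarding the citations that appear in
conference proceedings and book chapters lowers the reported h-factor
from 5 to 3, which is exactly the kind of systematic error described
above.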

Crucially, the size of these discrepancies is discipline-dependent.
The issues raised above may be insignificant for many areas of
experimental biology, but they are certainly important for
computational neuroscience. The standard databases will always
underestimate impact, but the size of the error is likely to be much
larger for a typical computational neuroscientist than for a typical
experimental neuroscientist.

In summary, besides appreciating that publication and citation metrics
are not the unique measure of a scientist's success, it should be more
widely understood that the actual measurement of these numbers is
plagued by systematic errors.



