The burden on modern-day researchers is huge: not only must they plan and carry out their research and teach and train the next generation of researchers, but they also face increasing pressure to publish and publicise the outcomes of that research while securing funding for future studies. Research institutions and funding bodies are becoming ever more demanding in their assessment of research quality, which in turn affects career progression and research funding. In addition to demonstrating that their work is read and cited by their scientific peers, it is becoming increasingly important for researchers to show the societal impact of their research, not least because public interest attracts funding.

The role of the journal publisher is to help authors to promote their work and to ensure maximum visibility and impact. In the past, the peer-reviewed article was used as the basis for reviewing a researcher's performance, and the widely used measure of impact was the number of times an article was cited, collated into the journal impact factor; traditionally, where a paper was published was inextricably linked with the perception of the article's (and, by extrapolation, the researcher's) quality. The limitations of this approach are well documented (Hoppeler, 2014), but citation data certainly still provide a valuable tool, particularly when they are used to focus on an individual article rather than the overall impact factor of the journal.

However, the scientific and publishing landscapes are rapidly evolving, taking advantage of the immense power of the online world and of faster, more effective ways of disseminating research and communicating with the scientific community (e.g. blogs, social media, article-sharing tools). Consequently, there are now a variety of additional measures that can be employed to create a more holistic representation of an article's influence.

Use of these alternative ‘article-level metrics’ (or ‘altmetrics’, as they are often called) is still very much in its infancy but, when combined with the more traditional citation-based measures, they provide a richer picture of the impact of research across both short and long time scales (Table 1). Given the lag between research being published and citations appearing, altmetrics bridge this gap and can pick up the impact of a paper at a much earlier stage.

One of the beauties of online publishing (in comparison with print publishing) is that it is much easier to tell how many people have interacted with an article, or with part of one (e.g. a movie, a dataset or a video abstract). Although one cannot claim a strict correlation between someone clicking on a link to view or download the full-text/PDF version of an article and that person actually reading it, such clicks give a good indication of interest. Publishers have become much more transparent regarding article usage, and many journals now provide these data on an article-by-article basis on their websites. On The Journal of Experimental Biology (JEB), usage data can be accessed by clicking on the ‘Article Usage Statistics’ link in the ‘Article Metrics’ section within every article; these statistics give the number of abstract, full-text and PDF downloads of the article each month (note that statistics are calculated at the beginning of each month for the previous month, so it can take up to 1 month after publication for usage data to appear on the website). We also feature a monthly list of the ‘most read’ and ‘most cited’ articles in the journal. Of course, articles need not be viewed in their entirety; for example, JEB publishes occasional video abstracts and supplementary movies on YouTube (http://www.youtube.com/user/CompanyofBiologists), and the usage of these items can be tracked as individual ‘views’, independent of the associated Research Article.
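To make this concrete, the short Python sketch below shows one way such monthly usage figures could be tallied from a raw access log. It is purely a hypothetical illustration: the event format, field names and DOIs are assumptions, not JEB's actual reporting pipeline.

```python
from collections import Counter

# Hypothetical access-log events: (article DOI, access type, ISO date).
# Real publisher logs are richer; this format is assumed for illustration.
events = [
    ("10.1242/jeb.000001", "abstract",  "2014-11-03"),
    ("10.1242/jeb.000001", "full_text", "2014-11-03"),
    ("10.1242/jeb.000001", "pdf",       "2014-12-14"),
    ("10.1242/jeb.000002", "abstract",  "2014-12-20"),
]

# Tally events per (article, access type, month). Compiling by month
# mirrors the fact that statistics are calculated at the beginning of
# each month for the previous month.
usage = Counter((doi, kind, date[:7]) for doi, kind, date in events)

for (doi, kind, month), count in sorted(usage.items()):
    print(f"{doi}  {month}  {kind}: {count}")
```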

Table 1. Different ways of measuring article impact

Additional article-level metrics monitor the ‘buzz’ surrounding an article when it is first published, including posts on blogs, article sharing on social networking sites and media coverage. It would be hard, and incredibly time consuming, for authors to monitor such activity every time they publish a new piece of research, so on JEB this information is displayed within each article in the form of an Altmetric (www.altmetric.com) ‘donut’ (Fig. 1).

The number in the centre of the donut is the Altmetric score, a quantitative measure of the total attention that an article has received. In general, the more people who mention the article, the higher the score, but mentions are weighted according to the source (e.g. a media article contributes more than a blog post, which contributes more than a tweet), the volume (e.g. if someone tweets about the same paper more than once, only the first tweet counts) and the author (e.g. a researcher sharing a link contributes more than a journal account pushing a link out automatically). The colours of the donut represent the range of sources contributing to the score, e.g. red for media articles, yellow for blogs, blue for social media and maroon for Mendeley (www.mendeley.com) readers. Clicking on the donut opens a landing page that drills down to provide more information on the individual sources (Fig. 2) and puts the score in context with those of other articles published in JEB and in the wider literature. (Note: if there have been no online mentions of an article, the donut is not displayed.)
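These weighting principles can be illustrated with a toy scoring function. To be clear, this is a simplified sketch with invented weights and rules, not Altmetric's actual (proprietary) algorithm:

```python
# Toy weighted attention score illustrating the principles described
# above. The weights and author factors are invented for illustration.
SOURCE_WEIGHTS = {"news": 8.0, "blog": 5.0, "tweet": 1.0}   # assumed values
AUTHOR_FACTOR = {"researcher": 1.0, "journal_bot": 0.25}    # assumed values

def toy_score(mentions):
    """mentions: list of (source_type, author_type, author_id) tuples."""
    seen = set()
    score = 0.0
    for source, author_kind, author_id in mentions:
        # Volume rule: only the first mention of the paper by a given
        # account on a given source counts.
        if (source, author_id) in seen:
            continue
        seen.add((source, author_id))
        score += SOURCE_WEIGHTS[source] * AUTHOR_FACTOR[author_kind]
    return score

mentions = [
    ("news",  "researcher",  "nature_news"),
    ("tweet", "researcher",  "@biologist"),
    ("tweet", "researcher",  "@biologist"),   # duplicate tweet: ignored
    ("tweet", "journal_bot", "@journal_feed"),
]
print(toy_score(mentions))  # 8.0 + 1.0 + 0.25 = 9.25
```

The design point mirrored here is that volume alone cannot inflate the score: repeated mentions from one account are ignored, and automated journal feeds are discounted relative to individual researchers.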

Fig. 1. The Altmetric donut used by JEB.

Fig. 2. Individual sources contributing to the Altmetric score.

Of course, there are a few caveats that followers of article metrics should be aware of. As with any measure, there is always a risk of ‘gaming’, and article-metric providers need to ensure that appropriate checks and balances are in place to reduce this risk as far as possible. One should also bear in mind that a high Altmetric score indicates that an article is being talked about, but not necessarily because it is good science! Who is talking about the article, and how far their assessment can be trusted, also matters; it is easy to confuse opinion (e.g. blogs and social media) with fact, or article downloads with citations. Although there are conflicting views regarding correlations between the various article metrics and citations, some studies have shown that high social media coverage and ‘bookmarking’ on the reference management tool Mendeley can be an early indicator of high citation counts and scientific ‘reach’ (Schlögel et al., 2014; Shema et al., 2014).
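As an illustration of how such a correlation might be checked, the sketch below computes a Spearman rank correlation between early Mendeley bookmark counts and later citation counts. The numbers are invented for demonstration and are not data from the studies cited above.

```python
from scipy.stats import spearmanr

# Invented per-article counts, purely for demonstration.
mendeley_readers = [5, 12, 30, 8, 55, 3, 21]   # bookmarks soon after publication
citations        = [1,  4, 11, 2, 19, 0,  7]   # citations accrued later

# Rank correlation suits metric counts, which are heavily skewed; a high
# coefficient would suggest that early bookmarking tracks later citations.
rho, p_value = spearmanr(mendeley_readers, citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```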

So, what does JEB do to increase the online visibility of its papers? We run journal accounts on both Twitter and Facebook, and links to JEB articles are posted on these sites at the point of publication, allowing members of the community to comment/share/like and redistribute the posts to other social media accounts and blogs. For a selection of articles, JEB also distributes press releases to journalists and provides guidelines to authors on how to deal with any associated media coverage. We also liaise with institutional press offices that wish to highlight a particular piece of research to the media. As a consequence, 148 JEB articles have been highlighted in the media in the past 12 months.

Clearly, there is a great deal that authors can do to help their papers reach a wider audience; guidelines can be found at www.biologists.org/site/promotingyourpaper.pdf. In addition, simple steps such as registering for an ORCID iD (www.orcid.org), a unique identifier that distinguishes a researcher from every other researcher and ensures that their work is attributed to them rather than to others with a similar name, allow researchers to aggregate their research output easily in one place and assess its impact.
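As a practical example of the aggregation an ORCID iD enables, the following sketch retrieves the works attached to a public ORCID record. The endpoint and response shape follow ORCID's public API as we understand it, so treat these details as assumptions to verify against the current documentation.

```python
import json
import urllib.request

# Josiah Carberry, ORCID's well-known example record.
orcid_id = "0000-0002-1825-0097"
url = f"https://pub.orcid.org/v3.0/{orcid_id}/works"
request = urllib.request.Request(url, headers={"Accept": "application/json"})

with urllib.request.urlopen(request) as response:
    record = json.load(response)

# Works are grouped; each group holds one or more summaries of the same
# output (e.g. the same paper reported by several sources).
for group in record.get("group", []):
    summary = group["work-summary"][0]
    print(summary["title"]["title"]["value"])
```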

In conclusion, used in combination with other measures, article metrics have huge potential: for readers, to gauge the significance of an article; for researchers (and institutes), to demonstrate research impact and public outreach (for use in benchmarking, personal assessment, and grant and tenure applications); for funding bodies, to assess public interest in grant proposals; and, more broadly, as an indicator of public attitudes to science. The challenge, however, is to avoid the misuse that has been seen with previous metrics such as the impact factor. It is imperative that all parties are educated in the application of these powerful tools and that a consistent approach is taken across the board; independent reviews of the role of metrics in research assessment (e.g. http://www.hefce.ac.uk/whatwedo/rsrch/howfundr/metrics/) and projects investigating best practice and standardisation for the adoption of article metrics (e.g. http://www.niso.org/topics/tl/altmetrics_initiative) are a first step towards achieving these aims.

Hoppeler, H. H. (2014). The intricacies of characterizing a scientific journal's performance. J. Exp. Biol. 217, 3773-3774.

Schlögel, C., Gumpenberger, K. J. and Kraker, P. (2014). A comparison of citations, downloads and readership data for an information systems journal. Research Trends 37, 14-18.

Shema, H., Bar-Ilan, J. and Thelwall, M. (2014). Scholarly blogs are a promising altmetric source. Research Trends 37, 11-13.