Measuring the openness of research

by David M. Nichols & Michael B. Twidale

NOTE: This blog post was the basis of webinar #3 in the AOASG’s 2017 Webinar series. You can listen here to the webinar, which was presented on 20 June, and see the slides here.

As academics we are measured in many different ways; in particular, our research is often characterised through the venues in which we publish and the citations to our works. Roger Burrows observes that when the value of academics is quantified, represented and framed through metrics, then our “academic values” are likewise transformed. Stacy Konkiel comments that “most institutions simply measure what can be easily counted, rather than using carefully chosen data to measure their progress towards embodying important scholarly values.”

[Image: tape measure. Photo: Sean MacEntee, CC BY 2.0]

As researchers wanting to advocate for open access, we decided to explore openness from the perspective of designing a metric. Doing this made us realise that metric design is a socio-technical problem: it involves considering what is easy to count, what is important to count, and what to do when these are different. A further consideration is the curious fact that a real-world metric can affect the very thing it tries to measure. If people know you are measuring them, they may change what they do. If it is a score and they are competitive, they may try to increase that score. Normally this is an annoying problem for social scientists, but as social engineers we want to embrace this feature. We definitely do want to design metrics whose very existence makes people want to change their score by increasing access to information. Fortunately for this aim, we suspect that many academics are rather competitive: even the mere mention of a new metric starts some people thinking about their own score, that of their peers, and what they might do to improve it.

In order to regard openness itself as a valued quality, we need metrics that directly reflect the accessibility of all the diverse aspects of scholarly communication. In Getting our hands dirty: why academics should design metrics and address the lack of transparency, Chris Elsden, Sebastian Mellor and Rob Comber argue that academics should “complement critiques of metrics with getting our hands dirty in reflectively and critically designing metrics.” We have attempted to create an alternative list of openness-oriented metrics in our paper Metrics for Openness.

In addition to directly expressing the proportion of works that are open (as ImpactStory now does), we suggest it is important to consider the nature of the online location: is the work on a personal website or in a managed repository? Explicit metrics around such practical facets of openness can serve to validate and recognise the often invisible practical work of making outputs freely available.
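To make this concrete, here is a minimal sketch in Python of how such an index might be computed over a list of outputs. The record fields, location labels and function names are illustrative assumptions on our part, not a schema from the paper: the index is simply the proportion of outputs with a known free copy, with a secondary breakdown by the kind of location hosting that copy.

```python
# Illustrative record schema (not prescribed by the paper): each output is a
# dict such as
#   {"title": "...", "open_url": "https://..." or None, "location": "institutional-repository"}

def practical_openness_index(outputs):
    """Fraction of outputs that have a freely accessible copy somewhere."""
    if not outputs:
        return 0.0
    open_count = sum(1 for o in outputs if o.get("open_url") is not None)
    return open_count / len(outputs)

def open_copies_by_location(outputs):
    """Count open copies by the kind of location hosting them,
    e.g. personal website versus managed repository."""
    counts = {}
    for o in outputs:
        if o.get("open_url") is not None:
            loc = o.get("location") or "unknown"
            counts[loc] = counts.get(loc, 0) + 1
    return counts
```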

A corollary of work being behind paywalls is that there is a cost for access. We suggest these costs can be personalised in the same manner as an h-index: how much does it cost for someone to access all your work? As with h-indices, such metrics can be directed at different sets of outputs, from individuals to institutions to countries. We hypothesise an avid reader who wishes to access all the non-open outputs of an institution. What would this reader have to pay to read all the 2016 outputs of a university? And how does that cost align with the often lofty vision of the institution to spread knowledge to the world?
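A minimal sketch of that avid-reader calculation might look like the following; the per-item price field, the default price and the example figures are entirely invented for illustration.

```python
# Each output is a dict such as (illustrative schema, made-up prices):
#   {"title": "...", "open_url": None, "paywall_price": 42.50}

def access_cost(outputs, default_price=30.0):
    """Total a reader would pay to buy access to every output that has no
    free copy; unknown prices fall back to an assumed default."""
    total = 0.0
    for o in outputs:
        if o.get("open_url") is None:
            price = o.get("paywall_price")
            total += price if price is not None else default_price
    return total

# Example: three 2016 outputs with invented figures.
outputs_2016 = [
    {"title": "Paper A", "open_url": "https://repository.example/a", "paywall_price": 35.00},
    {"title": "Paper B", "open_url": None, "paywall_price": 42.50},
    {"title": "Paper C", "open_url": None, "paywall_price": None},
]
print(access_cost(outputs_2016))  # 42.50 + 30.00 = 72.5
```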

The nature of scholarly outputs has changed and it is now widely recognised that supporting information such as data and code are important for interpretation and reproducibility. Consequently, these output types also need openness metrics and we extend our previous work to represent these facets of scholarly communication. Additional interpretations of openness are also amenable to the same approach.
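One straightforward way to read this extension is to compute the same proportion separately for each output type; the type labels below (article, dataset, software) are again illustrative rather than taken from the paper.

```python
from collections import defaultdict

def openness_by_type(outputs):
    """Practical openness per output type, e.g. article, dataset, software.
    Each output is a dict with 'type' and 'open_url' keys (illustrative schema)."""
    totals = defaultdict(int)
    opens = defaultdict(int)
    for o in outputs:
        totals[o["type"]] += 1
        if o.get("open_url") is not None:
            opens[o["type"]] += 1
    return {t: opens[t] / totals[t] for t in totals}
```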

We close by quoting part of the conclusion from the paper:

The simple act of measuring current practice can be a powerful incentive to alter that practice: we suggest authors could start with calculating their own Practical Openness Index. Where that measurement is impeded by a lack of metadata an explicit statement of potential benefits can support moves to enhance metadata provision.

A further benefit to quantifying concepts relating to the openness of published research is to provide a basis for management and policy decision-making. The frequently repeated maxim, that to control something you must first measure it, applies here. We might add that measurement also has a publicity component: one way to raise the profile of an issue is simply to measure it: what gets measured gets noticed. Indeed, it may well be that what gets measured gets to frame the argument. From an open access advocacy perspective, we suggest that it should be just as common for authors to publicise their Openness Indices as it is to publicise their h-index.


As part of writing the paper we subjected our own CVs to an openness-centric analysis, and we can report that even this simple action creates an incentive to improve. Why not try these metrics on your own works?

Nichols, D.M. and Twidale, M.B. (2016) Metrics for openness. Journal of the Association for Information Science and Technology. https://doi.org/10.1002/asi.23741

Accepted repository version.

Authors

David M. Nichols, Department of Computer Science, University of Waikato, New Zealand

Michael B. Twidale, School of Information Sciences, University of Illinois at Urbana-Champaign, USA

No competing interests declared.