ORCID: giving new meaning to “open” research

At the beginning of Peer Review Week, Natasha Simons writes about ORCID – now an essential tool across academia.

Contact: Twitter @n_simons

Have you ever tried to search for the works of a particular author and found that there are literally hundreds of authors with the same name? Or found that your name has been misspelt on a publication, or that it is plain wrong because you changed your name when you got married (or divorced) a few years back? Well, you are not alone. Did you know that the top 100 surnames in China account for 84.77% of the population, or that 70% of Medline names are not unique? Researchers the world over need to receive credit where credit is due, and in solving this problem we can also improve the discoverability of their research. But a global problem needs a global solution. Enter ORCID – the Open Researcher and Contributor Identifier.

ORCID provides individual researchers and scholars with a persistent unique identifier which links a researcher with their works and professional activities – ensuring the work is recognised and discoverable. Sure, there are many other researcher identifiers out there but ORCID has the ability to reach across disciplines, research sectors, and national boundaries. ORCID distinguishes an individual researcher in a similar way to how a Digital Object Identifier (DOI) uniquely identifies a scholarly publication. It lasts for a lifetime and remains the same whether you move institutions, countries or (heaven forbid) change disciplines. If you’ve not seen one before, check out the ORCID for Nobel Prize laureate and Australian of the Year Peter C. Doherty.
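Part of what makes an ORCID iD machine-friendly is that its final character is a check digit, which ORCID documents as following the ISO 7064 MOD 11-2 scheme. A minimal sketch of validating an iD against that scheme, using ORCID's own documented sample iD (the function names are ours, not part of any ORCID library):

```python
def orcid_checksum(base_digits: str) -> str:
    """Compute the ORCID check character (ISO 7064 MOD 11-2)
    from the first 15 digits of the identifier."""
    total = 0
    for d in base_digits:
        total = (total + int(d)) * 2
    remainder = total % 11
    result = (12 - remainder) % 11
    # A result of 10 is written as the letter X
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    """Validate a 16-character ORCID iD written as xxxx-xxxx-xxxx-xxxx."""
    chars = orcid.replace("-", "")
    if len(chars) != 16 or not chars[:15].isdigit():
        return False
    return orcid_checksum(chars[:15]) == chars[15]

# ORCID's documented sample iD
print(is_valid_orcid("0000-0002-1825-0097"))  # True
```

This is why a single mistyped digit in an iD is usually caught immediately, rather than silently attributing work to the wrong person.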

ORCID works as a solution to name ambiguity because it is:

  • Widely used;
  • Embedded in the workflows of submission systems for publishers, funders and institutions;
  • The product of a global, collaborative effort;
  • Open, non-profit and researcher-driven.

There are over 300 ORCID members (organisations or groups of organisations) from every section of the international research community. Over 1.5 million ORCID identifiers for individual researchers have been issued since its launch in October 2012. In Australia, the key role of ORCID has been recognised in two Joint Statements and – as is the case in many other countries – plans for an ORCID Consortium are well underway.

From its very beginning, ORCID has embraced “open” – it is free for researchers to sign up, open to any interested organisation to join, releases its software under an Open Source Software license, and provides a free public API. Institutions that wish to embed ORCID into their workflows are advised to join ORCID, and the membership fee in turn supports ORCID’s continued operation as a non-profit entity.

A key activity for ORCID at the moment is completing the metadata round trip. It sure doesn’t sound exciting, but it actually is. Really! It works like this: when a researcher submits an article to a publisher, a dataset to a data centre, or a grant application to a funder, they include their ORCID iD. When the work is published and the DOI assigned, information about the work is automatically connected to the researcher’s ORCID record. Other systems can then query the ORCID registry and draw in that information. This will save researchers a lot of time currently spent updating multiple data systems, and ensures correct credit and discoverability of their research. See? Exciting, huh!
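The round trip above can be sketched as a toy, in-memory model. All class and field names here are hypothetical illustrations of the flow, not the real ORCID registry or its API, which publisher and funder systems talk to over HTTP:

```python
# Toy sketch of the metadata round trip: publisher pushes work metadata
# keyed by ORCID iD; any downstream system can query it back out.

class OrcidRegistry:
    """Hypothetical stand-in for the real ORCID registry."""

    def __init__(self):
        self.records = {}  # ORCID iD -> list of work metadata dicts

    def add_work(self, orcid_id, doi, title):
        """Called by the publisher's system once the DOI is assigned."""
        self.records.setdefault(orcid_id, []).append(
            {"doi": doi, "title": title}
        )

    def works_for(self, orcid_id):
        """Any other system (funder, institution) can query this."""
        return self.records.get(orcid_id, [])

registry = OrcidRegistry()

# 1. The researcher submits an article, including their ORCID iD.
# 2. On publication, the publisher assigns a DOI and pushes the metadata.
registry.add_work("0000-0002-1825-0097", "10.1234/example.doi",
                  "A Study of Name Ambiguity")

# 3. Other systems draw the information back in, so the researcher
#    never has to re-key it.
print(registry.works_for("0000-0002-1825-0097"))
```

The point of the sketch is the direction of flow: metadata is entered once at submission time and pulled, not re-typed, everywhere else.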

Another great thing ORCID is doing is Peer Review Week (28 September – 2 October), which grew out of informal conversations between ORCID, Sense about Science, ScienceOpen, and Wiley. The week highlights a collaborative effort in finding ways to build trust in peer review by making the process more transparent and giving credit for the peer review activity. ORCID have also been collaborating with Mozilla Science Lab, BioMed Central, Public Library of Science, The Wellcome Trust, and Digital Science, among others, to develop a prototype for assigning badges to individuals based on the contributor role vocabulary developed by Project CRediT earlier this year.

It’s great news that this year, for the first time ever, ORCID is officially joining the Open Access Week celebrations. OA Week runs from October 19-26, and ORCID’s goal is to sign up 10,000 new registrants and increase the number of connections between ORCID iDs and organisation iDs. They hope you can help! So go on, why not sign up for an ORCID iD now? You’ll be helping to ensure your scholarly work is discoverable and correctly attributed to you, and you’ll save time into the bargain.

About the author

Natasha Simons is a Research Data Management Specialist with the Australian National Data Service

Assessing research impact: AOASG submission

Measuring the impact of research has been on and off the government’s agenda for many years now. Originally part of the Research Quality Framework, impact was removed from Excellence in Research for Australia during its trial in 2009.

Due to its increasing relevance, measuring impact was trialled again in 2012 by the Australian Technology Network and the final report from this study: “Excellence in innovation: Research impacting our nation’s future – assessing the benefits” was released in November 2012.

Plans to assess impact

The Department of Innovation is currently exploring options for the design and development of an impact assessment program. It intends to pilot this in 2014.

As part of this process, the Department released a Discussion Paper in July 2013 – “Assessing the wider benefits arising from university-based research”.

The Paper seeks the “views of interested parties regarding a future assessment of the benefits arising from university-based research”.

Before research administrators throw their hands up at yet another assessment program, they should note that the Discussion Paper does recognise the overwhelming compliance burden on universities and the need to reduce it. The Preamble states that plans include “scaling back and streamlining a number of current data collection and analysis exercises”.

Overall, the Government believes that a research benefit assessment will:

  1. demonstrate the public benefits attributable to university-based research;
  2. identify the successful pathways to benefit;
  3. support the development of a culture and practices within universities that encourage and value research collaboration and engagement; and
  4. further develop the evidence base upon which to facilitate future engagement between the research sector and research users, as well as future policy and strategy.

Submission from AOASG

AOASG prepared a submission in response to the Discussion Paper proposing that open access should be a measure for assessing impact, and that some reward should be associated with making work freely and openly available.

All submissions will be made available to the public on the Department of Innovation website. In anticipation, the AOASG submission is copied below.

Response to principles from the paper

NOTE: AOASG chose to only respond to Principles 1, 3 and 5.

Principle 1 – Provide useful information to universities

Principle 1 is to be applauded. It is sensible and practical to marry the types of data required with the types of data universities are already producing. This will minimise the burden on universities of aggregating data and producing reports.

Open access repositories in Australian universities are built on a small set of software platforms that share a common interoperability protocol – OAI-PMH (the Open Archives Initiative Protocol for Metadata Harvesting). This allows metadata to be aggregated and harvested across multiple platforms. Such repositories usually maintain statistics about individual works, such as the number of downloads and the places from which these downloads have originated.
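OAI-PMH harvesting works over plain HTTP requests whose responses carry Dublin Core metadata. A minimal sketch of building a ListRecords request and reading titles out of a response – the repository base URL is a placeholder, and the XML below is a hand-made sample rather than a live harvest:

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# Build an OAI-PMH ListRecords request (base URL is a placeholder).
base_url = "https://repository.example.edu.au/oai"
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
request_url = f"{base_url}?{urlencode(params)}"
print(request_url)

# A hand-made sample of the Dublin Core payload an OAI-PMH
# response carries for one record.
sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>An Open Access Case Study</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

# Harvesters extract fields by namespace, regardless of which
# repository platform produced the response.
ns = {"dc": "http://purl.org/dc/elements/1.1/"}
root = ET.fromstring(sample)
titles = [t.text for t in root.findall(".//dc:title", ns)]
print(titles)
```

Because every compliant repository answers the same verbs with the same metadata format, an aggregator such as Trove can harvest all of them with one piece of code.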

Prior to developing or recommending any specific data for reporting on impact, we suggest that a survey be conducted of university libraries to gather information on the type of data collection methods already in place within open access repositories. This also has the benefit of supporting Principle 2 – Minimise administrative burden.

Principle 3 – Encourage research engagement and collaboration, and research that benefits the nation

Principle 3 notes that this assessment should encourage and assist universities to “better recognise and reward (for example in recruitment and promotion exercises) the contribution of academics to engagement and collaborative activities”. A fundamental component of this assessment is an academic’s involvement in open access and their approach to making research freely available.

Many Australian researchers share their work with the broader community by placing a copy of it in their institutional repository, or in a subject-based repository such as PubMed Central, SSRN, arXiv, or RePEc. The ARC & NHMRC open access policies are likely to encourage more researchers to follow this trend. However, there is currently no aggregated data, and little individual data, on the extent to which Australian researchers are making their own work available. In addition, some researchers widen the accessibility of research outputs by working as editors, publishers and reviewers for open access journals published out of Australian universities. A definitive list of these journals is currently being developed; however, the list does not indicate the level and extent of open access activity in the country. The efforts of academics and researchers to share research openly are currently neither measured nor rewarded through any promotion or funding incentives.

Principle 5 – Collect and assess at the institution level, with some granularity by discipline

Principle 5 is a good suggestion, given that some types of research will naturally have a wider impact than others. Impact will also vary over time, with some research outputs producing impact after a considerable period and others making an immediate, significant impact. It is more challenging to articulate the benefit to wider society of research in, say, pure inorganic chemistry than in, for example, forestry. When considering the need for granularity, a benefit of using data from open access repositories, as suggested above, is that the metadata for each record contains information about the author, the subjects and, clearly, the institution.

Response to methodological considerations

What considerations should guide the inclusion of metrics within the assessment?

It has become clear that established measurement systems such as the Journal Impact Factor (JIF) can be affected by those who seek to manipulate the outcomes. A recent clear example is the decision to suppress a larger than usual number of titles from the JIF this year due to “anomalous citation patterns resulting in a significant distortion of the Journal Impact Factor”. Any reliance on metrics as a measure of quality and/or use of research needs to treat attempts to manipulate new measures as a likely outcome. One way of minimising data manipulation is to use a mix of qualitative and quantitative measures.

What information do universities currently collect that might form the basis for research engagement metrics?

As noted above in section 2, all Australian universities and CSIRO have an open access institutional repository. Such repositories usually collect information on the research that is available, how often it has been downloaded and where the interest has originated.

What metrics are currently available (or could be developed) that would help to reveal other pathways to research benefit?

The act of making a work open access creates a pathway to research benefit. Open access increases the potential impact of the work because it ensures the work can be accessed, applied or built upon by other researchers, industry, practitioners and the public. On this basis, we propose that the act of making research publicly available is a fundamental metric of assessing research benefit. This would also support and endorse the open access policies of the ARC & NHMRC. The metric could be twofold – at the individual researcher level (in terms of promotion) and/or at the institutional level.

While in some cases publisher copyright conditions will prevent work being made available, having an appropriate version of the work deposited in a repository with a ‘Request copy’ button facilitating access could be considered ‘making it available’ for this purpose.

Repositories capture download information at the level of the individual article. This data could also be used as a metric of a pathway to research benefit, and there is a proven link between making work open access and increased citations. The collection of download statistics would be only one of several measures that can be aggregated to demonstrate interest in and use of research (see the next point).

In addition to ERA, NSRC, GDS, AusPat and HERDC data, are there other existing data collections that may be of relevance?

Recently there has been a move to develop a series of metrics that assess the value of individual articles rather than placing value on the journal or publication in which the article appeared. These article level metrics offer real time feedback about the way a research article is being used.

One example is the metrics page provided for a published article, which lists the number of HTML views, PDF downloads and XML downloads, as well as the number of citations and where the article has been shared on social media, such as through Twitter. There are, however, many other article level metrics already in existence. Examples include Altmetric, ImpactStory, Plum Analytics and PLOS ALM. Discussion of research on social media sites indicates a level of impact beyond the confines of the scholarly publication system, with the added benefit of being instantly and easily quantifiable. The timeliness and convenience of these metrics addresses the need for “current information on the prospect of benefits from research” identified in the Discussion Paper.

What are the challenges of using these data collections to assess research engagement?

It will be necessary to determine which sets of article level metrics are the most appropriate for a specific purpose. There may be a need for some aggregation to correlate several sets of metrics about the same item.

Response to ‘other comments’ section

We have two suggestions for additions to Appendix A – “Examples of possible metrics”.

An additional Research engagement mechanism could be “Provision of research outputs in a publicly and freely available outlet”. The Measure could be “The percentage of research that is freely and publicly available within 6 months of publication”, and the Source would be “Institutional repositories, subject based repositories or open access publications”.

Currently one of the research engagement mechanisms listed is “Research engagement via online publications”. The measure suggested is “Unique article views per author” and the source is “Websites such as The Conversation”. We are in full support of this suggestion. The Conversation is an opportunity for researchers to discuss their work in accessible language and the author dashboard for The Conversation provides comprehensive metrics about readership.

However we suggest there are other metrics within the classification of ‘online publications’. Open access repositories can provide metrics on unique article views per author. We therefore suggest an additional source being “Institutional repositories, and other article level metrics”.

Four issues restricting widespread green OA in Australia

Australia is a world leader in many aspects of open access. We have institutional repositories in all universities, funding mandates with the two main funding bodies, statements on or mandates for open access at a large number of institutions and a large research output available in many open access avenues. A summary of centrally supported initiatives in this area is here.

However, we can do more. This blog outlines four impediments to the widespread uptake of open access in Australia: a lack of data about what Australian research is available open access, copyright transfer agreements, the academic reward system, and the need for improved national discovery services. We suggest some solutions for each of these issues.

Issue 1 – Lack of data about what Australian research is available OA.

In Australia there is good data about the amount of research being created and published annually. Equally, a considerable amount of Australian research is being made available to the wider community through deposit in institutional repositories, deposit in subject-based repositories (PubMed Central, arXiv, SSRN and the like), and publication in open access journals.

However, this information is not compiled in a way to ascertain:
1. What percentage of current Australian research is available open access.
2. Where Australian research is being made available (institutional or subject-based repositories and open access journals).
3. The disciplinary spread of open access materials – an important indicator of areas needing attention.

Without this information it will be difficult to ascertain the level of impact the ARC and NHMRC policies are having on the availability of open access material from current Australian research. There are three actions that could help inform this area.

Solution 1

First it would be enormously helpful to know the percentage of Australian publications that are available open access.

There have been two definitive studies published on worldwide open access availability. Björk et al’s 2010 study concluded that 21% of research published in 2008 was openly accessible in 2009. Gargouri et al’s 2012 study found 24% of research was openly accessible.

But in these studies the method used to determine which work was available was to search for the items manually across several search platforms. This is clearly very time consuming, and a comparable study in Australia would require funding.

Solution 2

Second we need an easily accessible summary of the number of full text open access items in institutional repositories across the country. In an attempt to address this, the National Library of Australia aggregates research outputs from all Australian university repositories into Trove, and is working with the sector to improve discoverability and metrics around this collection. One challenge is that some repositories do not specify whether records have an open access full text item attached.

This issue was raised during a poll of repository managers in 2012. The poll found that as at June that year there were about 200,000 open access articles, theses and archive material (which includes images) in Australian university institutional repositories. Currently there is no automated way of obtaining an updated figure.

Solution 3

Third, a compliance monitoring tool needs to be developed to assist the ARC and NHMRC in managing their open access policies. Currently all institutional repositories in Australia are implementing a standardised field to indicate that an item results from funding, but to date there is no indication of how this might be harvested and reported on.

Issue 2 – Copyright transfer agreements

As AOASG has already noted, there is a serious challenge in keeping up with copyright agreements as they change. In reality, it is extremely difficult for an individual researcher to remain across all of the nuances of these agreements. Studies have demonstrated that checking copyright on behalf of the researcher increases deposits into repositories.

But the broader problem is actually twofold. First, researchers often have little understanding of the copyright status of their published work. Many do not read the copyright transfer agreements they sign before publication, and most do not keep a copy of these legal documents. While there is currently some advice for researchers about copyright management, such as this page from the University of Sydney, awareness of copyright generally remains poor amongst the research community.

But before we start wagging our fingers at researchers, let’s consider the second, related issue. The copyright transfer agreements presented to researchers by publishers are often many pages long, written in a small font and hard to understand. In addition, these agreements are not consistent – they differ between publishers, and titles from the same publisher often have different agreements.

Generally publishers ask researchers to assign exclusive copyright to them. But in most cases publishers only need the right of first publication of work, and normally do not need to control how research is used and distributed beyond this. There are options for researchers to negotiate different arrangements with their publishers, but the level of uptake of these in Australia is anecdotally very low.

It is highly unlikely there is any specific action that can force publishers to simplify their copyright transfer agreements. But there are a couple of actions the research community can make to improve the current situation.

Solution 4

It would help to have an Australian version of the SPARC Author Addendum tool which can be attached to copyright transfer agreements. This would need to be supported by a concerted education campaign about what rights researchers have, including training materials.

Solution 5

In addition the many researchers in Australia who work as editors for scholarly journals are in a good position to negotiate these arrangements with their publishers on behalf of their authors. An education campaign aimed at journal editors would assist them in this action.

Issue 3 – The academic reward system

The academic reward system supports the current publishing status quo. Widespread uptake of open access will continue to be a challenge while this is the case. A reliance on a numerical analysis of the number of articles published in journals with high Journal Impact Factors as a proxy for quality assessment is a narrow and limiting system.

There are many issues with the Journal Impact Factor. It also causes challenges for open access because it retains emphasis on a small number of specific journals, the vast majority of which are subscription based. Yet there is evidence to show that open access and subscription journals of the same age have the same impact, indicating that it is time to look at other methods of assessing quality.

Currently the markers used to assess promotion do not differ much from those used for grant allocation. However, the contribution made by researchers to their academic community reaches far beyond simply their publication output. This includes editing journals and the peer review of papers. As there is currently no quantification of this work, the extent of the problem is unknown, although concerns about work overload have been expressed by the academic community. There are serious implications for the sustainability of scholarly publication in terms of human capital.

Solution 6

We need to move to assessment based on article level metrics rather than the organ of publication. It would be helpful if assessments such as ERA and funding allocation were to embrace existing alternative metrics. Examples include Impact Story, Plum Analytics, PLOS ALM, Altmetric and Google Scholar.

Solution 7

Institutions could consider recognising, in their promotion rounds, the hidden work academics undertake in supporting the publication process. Recognition of peer review and editing roles, as well as of researchers who publish open access journals using OJS or similar platforms, would add value to these activities and make the scholarly publication system more sustainable.

Issue 4 – Improved national discovery services

This last issue is, in some ways, related to the first – knowing more about where the research we are producing is ending up. But it has a broader remit, for example incorporating data as a research outcome. Currently researchers can register their data with Research Data Australia, which lists over 87,000 collections from over 23,000 contributing research teams.

We need to move beyond simply collecting research, and start working on ways to link data as research outcomes to reports on research publications.

Between 2004 and 2008 the Australian Partnership for Sustainable Repositories (APSR) provided assistance and support for the repository community and developed technical solutions relating to interoperability and other repository issues.

APSR was supported by Systemic Infrastructure Initiative (SII) funding [note original post said NCRIS funding – thanks to David Groenewegen for pointing out the error. Amended 16 August]. When this ended, repository manager support was taken over by CAIRSS, financed in 2009-2010 by remainder money from another SII funded project, Australian Research Repositories Online to the World (ARROW). The university library community, through CAUL, continued to support this project in 2011-2012, and the work has now been folded into the responsibility of another CAUL committee.

But the work APSR did developing country-wide technical solutions has not continued. Currently repositories around the country are being developed and maintained in isolation from one another.

Solution 8

An investment in current institutional repositories to increase functionality and interoperability will assist compliance with mandates (both Australian and international) and usability into the future. It will also enable a resolution of the metadata issue for country-wide harvesting by Trove.

Solution 9

We suggest revisiting support for country-wide technical development of solutions to common problems facing repositories throughout Australia. An example of a project that could be undertaken is the Funders and Authors Compliance Tool developed in the UK – SHERPA/FACT. This assists researchers to comply with open access mandates.

Dr Danny Kingsley
Executive Officer
Australian Open Access Support Group