ARC & NHMRC OAWk panel discussion

In celebration of Open Access Week, the Australian Open Access Support Group (AOASG) and the Australian National University (ANU) invited the Chief Executive Officers of the two government funding agencies to a panel discussion about their open access policies.

Professor Aidan Byrne, CEO of the Australian Research Council (ARC), and Professor Warwick Anderson, CEO of the National Health & Medical Research Council (NHMRC), spoke about their open access policies, and then participated in a Q&A session moderated by ANU Vice Chancellor, Professor Ian Young.

The session was recorded and is available on the ANU YouTube channel (see below for time stamps of different parts of the recording). The slides and an audio recording are also available (note that the recording covers the whole event, but slides are only available for Professor Anderson’s presentation).


Summary of the discussion

The presentations covered the broader international open access landscape and how much this has changed in the past year. Both Professor Anderson and Professor Byrne discussed how, given the speed of change in scholarly communication, it is almost impossible to know what the open access agenda will look like in five years’ time. For this reason, neither the NHMRC nor the ARC wishes to be prescriptive about how to implement their policies.

The presentations underlined that neither policy advocates a particular method of achieving open access, nor specifically requires payment for open access. However, the NHMRC considers the cost of publishing journal articles a legitimate Direct Cost of Research, and the ARC is progressively removing the caps on the percentage of research funds that can be used for publication.

One of the questions that arose was the issue of monitoring compliance with the policies. Both organisations are working on the premise that as researchers make their work open access, they will see the benefit of having their work available. Professor Anderson noted that the NHMRC’s Research Grant Management System now allows Chief Investigators to list publications linked to grants, and these will be checked next year. While there are no current plans to withhold future grants from researchers who do not comply with the policies, this could change in the future.

More than one researcher noted the challenges of making creative works or culturally sensitive research freely available. Professor Byrne reiterated that these were examples of why the ARC was not expecting 100% compliance with its policy.

Time points during the recording:

(Note: 2:34 means 2 minutes and 34 seconds into the recording, etc.)

2:34 – Professor Anderson’s presentation on the NHMRC policy

20:24 – Professor Byrne’s presentation on the ARC policy

28:48 – Question session begins

28:54 – The first question concerned image copyright, particularly in the visual arts, given that this is an area where people rely on images for their livelihood

30:49 – The second questioner asked whether there were particular things we should be doing in Australia to comply with the policies, and how we should be positioning ourselves in the international context

34:36 – This question referred to issues of monitoring compliance, and asked about the tagging proposal from CAUL for harvesting articles and where that proposal is going

40:00 – There was a statement about Australia being a leader in open access monographs

40:26 – A technical question followed about grant applications and asked how compliant researchers had to be in their applications

43:21 – This was a discussion about the dissemination of culturally sensitive research materials

46:36 – The question related to data: the policies have been shaped and informed by the changed expectations of an open society, but how have they been shaped by processes within government to make data more open for taxpayers?

49:01 – The question referred to the costs associated with publication, in particular when groups are disadvantaged because they do not have the resources to meet the page charges required to publish

57:40 – The final question asked about where the country is going in terms of major infrastructure for research

Assessing research impact: AOASG submission

Measuring the impact of research has been on and off the government’s agenda for many years now. Originally part of the Research Quality Framework, impact was removed from Excellence in Research for Australia during its trial in 2009.

Due to its increasing relevance, measuring impact was trialled again in 2012 by the Australian Technology Network, and the final report from this study, “Excellence in innovation: Research impacting our nation’s future – assessing the benefits”, was released in November 2012.

Plans to assess impact

The Department of Innovation is currently exploring options for the design and development of an impact assessment program. It intends to pilot this in 2014.

As part of this process, the Department released a Discussion Paper in July 2013 – “Assessing the wider benefits arising from university-based research”.

The Paper seeks the “views of interested parties regarding a future assessment of the benefits arising from university-based research”.

Before research administrators throw their hands up at yet another assessment program, the Discussion Paper does recognise the overwhelming compliance burden on universities and the need to reduce it. The Preamble states that plans include “scaling back and streamlining a number of current data collection and analysis exercises”.

Overall, the Government believes that a research benefit assessment will:

  1. demonstrate the public benefits attributable to university-based research;
  2. identify the successful pathways to benefit;
  3. support the development of a culture and practices within universities that encourage and value research collaboration and engagement; and
  4. further develop the evidence base upon which to facilitate future engagement between the research sector and research users, as well as future policy and strategy.

Submission from AOASG

AOASG prepared a submission in response to the Discussion Paper proposing that open access should be a measurable indicator for assessing impact, and that some reward should be associated with making work freely and openly available.

All submissions will be made publicly available on the Department of Innovation website. In anticipation, the AOASG submission is copied below.

Response to principles from the paper

NOTE: AOASG chose to only respond to Principles 1, 3 and 5.

Principle 1 – Provide useful information to universities

Principle 1 is to be applauded. It is sensible and practical to marry the types of data required with the types of data universities are already producing. This will minimise the burden on universities in aggregating data and producing reports.

Open access repositories in Australian universities are built on a small set of software platforms that share a common standard, OAI-PMH (the Open Archives Initiative Protocol for Metadata Harvesting). This allows metadata to be aggregated and harvested across multiple platforms. Such repositories usually maintain statistics about individual works, such as the number of downloads and the places from which these downloads have originated.
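As a rough illustration of how such harvesting works, the sketch below issues a standard OAI-PMH ListRecords request and reads out basic Dublin Core metadata. The repository base URL is a hypothetical placeholder, and a full harvest would also need to follow resumption tokens to page through large record sets.

    # Minimal OAI-PMH harvesting sketch in Python (base URL is hypothetical).
    import requests
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    base_url = "https://repository.example.edu.au/oai"  # placeholder endpoint

    # Request records in the common Dublin Core format.
    response = requests.get(base_url, params={"verb": "ListRecords",
                                              "metadataPrefix": "oai_dc"})
    response.raise_for_status()

    root = ET.fromstring(response.content)
    for record in root.iter(OAI + "record"):
        titles = [t.text for t in record.iter(DC + "title")]
        identifiers = [i.text for i in record.iter(DC + "identifier")]
        print(titles, identifiers)

    # A complete harvester would also follow the resumptionToken element
    # returned by the repository to retrieve the remaining records.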

Prior to developing or recommending any specific data for reporting on impact, we suggest that a survey be conducted of university libraries to gather information on the type of data collection methods already in place within open access repositories. This also has the benefit of supporting Principle 2 – Minimise administrative burden.

Principle 3 – Encourage research engagement and collaboration, and research that benefits the nation

Principle 3 notes that this assessment should encourage and assist universities to “better recognise and reward (for example in recruitment and promotion exercises) the contribution of academics to engagement and collaborative activities”. A fundamental component of this assessment is an academic’s involvement in open access and their approach to making research freely available.

Many Australian researchers share their work with the broader community by placing a copy of it in their institutional repository, or in a subject-based repository such as PubMed Central, SSRN, arXiv, or RePEc. The ARC & NHMRC open access policies are likely to encourage more researchers to follow this trend. However, there is currently no aggregated data, and little individual data, on the extent to which Australian researchers are making their own work available. In addition, some researchers also widen the accessibility of research outputs by working as editors, publishers and reviewers for open access journals published out of Australian universities. A definitive list of these journals is currently being developed; however, this list does not indicate the level and extent of open access activity in the country. The efforts of academics and researchers to share research openly are currently neither measured nor rewarded through promotion or funding incentives.

Principle 5 – Collect and assess at the institution level, with some granularity by discipline

Principle 5 is a good suggestion, given that some types of research will naturally have a wider impact than others. Impact will also vary over time, with some research outputs producing impact only after a considerable period and others making an immediate, significant impact. It is more challenging to articulate the benefit to wider society of research in, say, pure inorganic chemistry than in, for example, forestry. When considering the need for granularity in the information available, a benefit of using data from open access repositories, as suggested above, is that the metadata for each record contains information about the author, the subjects and, of course, the institution.

Response to methodological considerations

What considerations should guide the inclusion of metrics within the assessment?

It has become clear that established measurement systems such as the Journal Impact Factor (JIF) can be affected by those who seek to manipulate the outcomes. A clear recent example is the decision to suppress a larger than usual number of titles from the JIF this year due to “anomalous citation patterns resulting in a significant distortion of the Journal Impact Factor”. Any reliance on metrics as a measure of quality and/or use of research needs to consider attempts to manipulate new measures as a potential outcome. One way of minimising data manipulation is to use a mix of qualitative and quantitative measures.

What information do universities currently collect that might form the basis for research engagement metrics?

As noted above in section 2, all Australian universities and CSIRO have an open access institutional repository. Such repositories usually collect information on the research that is available, how often it has been downloaded and where the interest has originated.

What metrics are currently available (or could be developed) that would help to reveal other pathways to research benefit?

The act of making a work open access creates a pathway to research benefit. Open access increases the potential impact of the work because it ensures the work can be accessed, applied or built upon by other researchers, industry, practitioners and the public. On this basis, we propose that the act of making research publicly available is a fundamental metric of assessing research benefit. This would also support and endorse the open access policies of the ARC & NHMRC. The metric could be twofold – at the individual researcher level (in terms of promotion) and/or at the institutional level.

While in some cases publisher copyright conditions will prevent work being made available, having an appropriate version of the work deposited in a repository with a ‘Request copy’ button facilitating access could be considered ‘making it available’ for this purpose.

Repositories capture article download information at the level of the individual article. This data could also be used as a metric of a pathway to research benefit. There is a demonstrated link between making work open access and increased citations. The general collection of download statistics would be only one of several measures that can be aggregated to demonstrate interest in and use of research (see the next point).

In addition to ERA, NSRC, GDS, AusPat and HERDC data, are there other existing data collections that may be of relevance?

Recently there has been a move to develop a series of metrics that assess the value of individual articles rather than placing value on the journal or publication in which the article appeared. These article level metrics offer real time feedback about the way a research article is being used.

One example is the metrics page provided for a published article, which lists the number of HTML views, PDF downloads and XML downloads, as well as the number of citations and where the article has been shared on social media, such as through Twitter. There are, however, many other examples of article level metrics already in existence, including Altmetric, ImpactStory, Plum Analytics and PLOS ALM. Discussion of research on social media sites indicates a level of impact beyond the confines of the scholarly publication system, with the added benefit of being instantly and easily quantifiable. The timeliness and convenience of these metrics addresses the need for “current information on the prospect of benefits from research” identified in the Discussion Paper.
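As an illustration, Altmetric offers a free public API that returns such counts for a given DOI; a minimal sketch is below. The DOI is a placeholder, and the field names should be treated as indicative since the data returned varies between articles.

    # Sketch: querying an article-level metrics service (Altmetric) by DOI.
    import requests

    doi = "10.1371/journal.pone.0000000"  # placeholder DOI
    response = requests.get("https://api.altmetric.com/v1/doi/" + doi)

    if response.status_code == 404:
        # The service returns 404 when no online attention has been recorded.
        print("No online attention recorded for this DOI yet.")
    else:
        response.raise_for_status()
        data = response.json()
        # .get() is used because the fields present vary between articles.
        print("Altmetric score:", data.get("score"))
        print("Tweets:", data.get("cited_by_tweeters_count"))
        print("News stories:", data.get("cited_by_msm_count"))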

What are the challenges of using these data collections to assess research engagement?

It will be necessary to determine which sets of article level metrics are the most appropriate for a specific purpose. There may be a need for some aggregation to correlate several sets of metrics about the same item, as sketched below.
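One simple way to perform that aggregation, sketched here with purely illustrative placeholder figures, is to key each source’s counts by DOI and merge them into a single record per output.

    # Sketch: merging metrics from two hypothetical sources, keyed by DOI.
    # All figures are illustrative placeholders, not real data.
    repository_downloads = {
        "10.1000/example.1": 412,
        "10.1000/example.2": 87,
    }
    social_mentions = {
        "10.1000/example.1": 9,
        "10.1000/example.3": 3,
    }

    combined = {}
    for doi in set(repository_downloads) | set(social_mentions):
        combined[doi] = {
            "downloads": repository_downloads.get(doi, 0),
            "mentions": social_mentions.get(doi, 0),
        }

    for doi, counts in sorted(combined.items()):
        print(doi, counts)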

Response to ‘other comments’ section

We have two suggestions for additions to Appendix A – “Examples of possible metrics”.

An additional Research engagement mechanism could be “Provision of research outputs in a publicly and freely available outlet”. The Measure could be “The percentage of research that is freely and publicly available within 6 months of publication”, and the Source would be “Institutional repositories, subject based repositories or open access publications”.

Currently one of the research engagement mechanisms listed is “Research engagement via online publications”. The measure suggested is “Unique article views per author” and the source is “Websites such as The Conversation”. We are in full support of this suggestion. The Conversation is an opportunity for researchers to discuss their work in accessible language and the author dashboard for The Conversation provides comprehensive metrics about readership.

However, we suggest there are other metrics within the classification of ‘online publications’. Open access repositories can provide metrics on unique article views per author. We therefore suggest an additional source: “Institutional repositories, and other article level metrics”.

Altmetrics and open access – a measure of public interest

Researchers, research managers and publishers are increasingly required to factor into their policies and practices the conditions by which publicly funded research must be made publicly available. But in the struggle for competitive funding, how can researchers provide tangible evidence that their outputs have not only been made publicly available, but that the public is using them? Or how can they demonstrate that their research outputs have reached and influenced those whose tax dollars have helped fund the research?

Traditional impact metrics

The number of raw citations per paper or an aggregate number, such as the h-index, are indicators of scholarly impact, in that they reveal the attribution of credit in scholarly works to prior scholarship. This attribution is normally given by scholars in peer-reviewed journals, and harvested by citation databases. But they do not provide an indication of public reach and influence. Traditional metrics also do not provide an indication of impact for non-traditional research outputs, such as datasets or creative productions, or of non-journal publications, such as books and media coverage.

Public impact for all types of research outputs could always be communicated as narrative or case studies. These forms of evidence can be extremely useful, perhaps even necessary, in building a case of past impact as an argument for future funding. However, impact narratives and case studies require sources of evidence to support their impact claims. An example of how this can be achieved is in the guidelines for completion of case studies in the recent Australian Technology Network of universities (ATN) / Group of Eight (Go8) Excellence in Innovation in Australia impact assessment trial.

One promising source of evidence is the new suite of alternative metrics, or altmetrics, that have been developed to gauge the academic and public impact of digital scholarship, that is, any scholarly output that has a digital identifier or online location and that is accessible to the web public.

The advent of altmetrics

Altmetrics (or alternative metrics) is a term aptly coined in a tweet by Jason Priem (co-founder of ImpactStory). Altmetrics measure the number of times a research output gets cited, tweeted about, liked, shared, bookmarked, viewed, downloaded, mentioned, favourited, reviewed, or discussed. These numbers are harvested from a wide variety of openly accessible web services that count such instances, including open access journal platforms, scholarly citation databases, web-based research sharing services, and social media.

The numbers are harvested almost in real time, providing researchers with fast evidence that their research has made an impact or generated a conversation in the public forum. Altmetrics are quantitative indicators of public reach and influence.

The monitoring of one’s impact on the social web is not an exercise in narcissism. Altmetrics enable the creation of data-driven stories for funding providers and administrators. Being web-native, they also facilitate the fleshing out of those stories by providing links to the sources of the metrics. Researchers can see who is talking about their research, what they are saying about it, and even how they intend to use it for various scholarly, industry, policy and public purposes. In this way, researchers can find potential collaborators and partners, and gain constructive feedback from those interacting with the research.

Altmetrics also provide a democratic process of public review, in which outputs are analysed and assessed by as many students, researchers, policy makers, industry representatives, and members of the public as wish to participate in the discussion. Altmetrics provide a more comprehensive understanding of impact across sectors, including the public impact of publicly funded research.

Altmetrics and open access

There is an interesting relationship between altmetrics and open access. One could even refer to altmetrics as open metrics. This is firstly because altmetrics draw on open data sources. Altmetrics services access and aggregate the impact of a research artefact, normally via an application programming interface (API) made available by the source. Altmetrics services in turn provide APIs for embedding altmetrics into institutional repositories or third-party systems. Secondly, open access research outputs that are themselves promoted via social web applications enjoy higher visibility and accessibility than those published within the commercial scholarly communication model, increasing the prospect of public consumption and engagement.

Altmetrics (also known as article level metrics or ALMs) are seen as complementary to open access. The PLOS Article Level Metrics for Researchers page lists some of these complementarities:

  • Researchers can view and collect real-time indicators of the reach and influence of outputs, and share that data with collaborators, administrators and funders
  • Altmetrics empower researchers to discover impact-weighted trends and innovations
  • Researchers can discover potential collaborators based on the level of interest in their work
  • High impact datasets, methods, results and alternative interpretations are discoverable
  • Dissemination strategies and outlets can be tracked, evaluated and reported on
  • Evaluation of research is based on the content, as opposed to the container (or journal)
  • Research recommendations are based on collective intelligence indicators

The April/May issue of the ASIS&T Bulletin contains a special section on altmetrics, in which several articles touch on the complementarity between altmetrics and open access. These articles show that altmetrics:

  • Provide open source social impact indicators that can be embedded into CVs
  • Enable a public filtering system and track social conversations around research
  • Provide evidence of access by countries that cannot afford expensive journals
  • Provide authors with a more comprehensive understanding of their readership
  • Offer repository managers additional metrics for demonstrating the impact of open access
  • Provide additional usage data for collection development and resource planning exercises
  • Provide supplementary impact indicators for internal reviews and funding applications
  • May be used as quantitative evidence of public impact for research evaluation exercises
  • Provide a better reflection of the usage and impact of web-native outputs

The last point is particularly salient. The new web-based scholarly communication model is one of sharing findings as they occur, interaction and evaluation by interested parties, and subsequent conversations leading to future collaborations and revised or new findings. And altmetrics provide us with an understanding of the impact received at each point in the cycle.

Providers of altmetrics

The following services are good places to start to monitor your altmetrics:

Altmetric and ImpactStory both offer free widgets that can be embedded into repositories, and ImpactStory has the further advantage that impact “badges” can be embedded into CVs. Altmetric also offers a free bookmarklet that can be added to your bookmarks and used to get altmetrics on articles with Digital Object Identifiers (DOIs) or identifiers in open databases such as PubMed Central or arXiv; the bookmarklet only works in Chrome, Firefox or Safari. Plum Analytics probably has the widest coverage of altmetrics sources, but it is a paid service. Both Altmetric and Plum Analytics offer commercial tools that provide comparative and group reports.

The best way to engage with altmetrics is to jump right in and have a play. You will be amazed at how quick and easy it is to use the tools and start generating metrics for your research outputs.

Repository administrators can embed altmetrics at the article level within institutional repositories to complement traditional metrics, views and downloads. Some research information management systems, such as Symplectic Elements, that are capable of generating reports on publication activity and impact also include article level altmetrics alongside traditional citation metrics.

Pat Loria is the Research Librarian at the University of Southern Queensland. His Twitter handle is @pat_loria.