Follow the money: tracking Article Processing Charges at the University of Canterbury

Anton Angelo writes on how hard it can be to figure out who is paying what in Article Processing Charges

A thousand dollars, a hundred thousand or a million? Peter Lund, the UC Research Support Manager, and I asked ourselves that question last year as we tried to work out how much the University of Canterbury pays in Article Processing Charges (APCs). We wanted to know how much we paid for articles to be published as Open Access, and it was turning out to be surprisingly hard to find out. It was difficult even to ascertain what order of magnitude APCs were in.

Our first attempts – in an All the President’s Men ‘follow the money’ approach – were stymied. We talked to our finance department, but university financial systems were not granular enough to show what was being put into publishers’ hands out of research grants. We were not even sure that research grants were the source of funds in the first place – any budget could conceivably be paying APCs.

A phase of heroic data-wrangling came next. I grabbed the output of our Current Research Information System, “Profiler”, for the last few years, and popped it into MS Access. That held the details of articles submitted for the NZ research funding exercise, the Performance-Based Research Fund (PBRF), including the journal title for each research output. Another table, sourced from the Directory of Open Access Journals (DOAJ), listed each journal title and whether the journal charged APCs.

A bit of Structured Query Language (SQL) later and I had a list of all the articles by Canterbury researchers for which APCs could have been paid by one of the authors.
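
The matching step is a plain join between the two tables. As a sketch of the idea – using Python and SQLite rather than Access, and with invented table and column names, not the actual Profiler or DOAJ schema – it might look like this:

```python
import sqlite3

# Hypothetical miniature versions of the two tables described above.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE outputs (author TEXT, journal TEXT);
CREATE TABLE doaj (journal TEXT, has_apc INTEGER);
""")
con.executemany("INSERT INTO outputs VALUES (?, ?)", [
    ("Smith", "PLOS ONE"),
    ("Jones", "Journal of Closed Results"),
])
con.executemany("INSERT INTO doaj VALUES (?, ?)", [
    ("PLOS ONE", 1),
])

# Articles whose journal appears in DOAJ and charges APCs.
rows = con.execute("""
    SELECT o.author, o.journal
    FROM outputs o
    JOIN doaj d ON d.journal = o.journal
    WHERE d.has_apc = 1
""").fetchall()
print(rows)  # [('Smith', 'PLOS ONE')]
```

The real query ran over a few years of PBRF outputs, but the shape of it is no more complicated than this.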

Then came the figures. I looked up the APC rates of the top 10 journals Canterbury scholars published in, multiplied them up, and got an answer: tens to hundreds of thousands of NZ dollars. This back-of-the-envelope method didn’t give us actionable figures, but it did sharpen our minds. Canterbury is still suffering the effects of a major natural disaster, as well as the twin prongs of fiscal austerity and a demographic shift leading to fewer undergraduate-age students. In short, we’re strapped for cash.
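
The estimate itself is nothing more than rate times volume, summed over journals. A sketch, with made-up placeholder rates and article counts rather than the actual Canterbury figures:

```python
# Illustrative back-of-the-envelope estimate. These APC rates (NZ$)
# and annual article counts are invented placeholders.
apc_nzd = {"Journal A": 2500, "Journal B": 1800, "Journal C": 3200}
articles_per_year = {"Journal A": 20, "Journal B": 15, "Journal C": 5}

estimate = sum(apc_nzd[j] * articles_per_year[j] for j in apc_nzd)
print(f"Rough annual APC exposure: NZ${estimate:,}")  # NZ$93,000
```

Crude, but enough to establish the order of magnitude.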

Our first question, knowing the magnitude of the sum, led, of course, to more questions. Could we refine that figure further? We decided that we needed harder figures. From our first investigation we now had a list of Canterbury researchers who might have paid APCs to enable their research to become Open Access. The problem with the data was that fees could have been waived, or co-authors at other institutions might have paid them (a good reason to collaborate with someone in the UK, with their block grants), so we decided to do the hard thing and go out and ask them.

Like all academic librarians, we are leery of putting extra workload on researchers and teachers. On top of all their traditional roles, they are shouldering Herculean amounts of extra administration – reports, copyright reviews, applications for research funding – and these tasks are increasing steadily. We spent time with questionnaire designers creating something that would give us the most insight for the least input. The result was a 50% response rate from a population of 100 researchers we knew had published in OA journals charging APCs. Our results, published in the University of Canterbury Research Repository [1] with the data in figshare [2], had some startling implications.

  • We were correct that the magnitude of APCs was in the hundreds of thousands of dollars.
  • The source of funds for APCs was varied, including in some cases the researcher’s personal funds.
  • Researchers were paying APCs to support Open Access, but more importantly because they believed that Open Access journals were the best places to publish that specific research.
  • Researchers expected to pay more APCs in the future.

So, this confirmed that there was a problem: funding was required to pay for APCs. The next question was how to fund these fees.  Our approach was to suggest a central fund for those who may not be able to draw on other sources, and the story of how that has developed, dear readers, is in the next episode.

 Anton Angelo is Research Data Co-ordinator, University of Canterbury.

[1] Angelo, A., & Lund, P. (2014). An evolving business model for scholarly publishing: exploring the payment of article processing charges (APCs) to achieve open access. Retrieved from http://ir.canterbury.ac.nz/handle/10092/9730

[2] Angelo, A., & Lund, P. (2014, September 2). Raw dataset for University of Canterbury APC study. Retrieved from http://figshare.com/articles/Raw_dataset_for_University_of_Canterbury_APC_study/1157870

ORCID: giving new meaning to “open” research

At the beginning of Peer Review Week, Natasha Simons writes on ORCID – now an essential tool throughout academia.

Contact: Twitter @n_simons

Have you ever tried to search for the works of a particular author and found that there are literally hundreds of authors with the same name? Or found that your name has been misspelt on a publication, or that it is plain wrong because you changed your name when you got married (or divorced) a few years back? Well, you are not alone. Did you know that the top 100 surnames in China account for 84.77% of the population, or that 70% of names in Medline are not unique? Receiving credit where credit is due is badly needed by researchers the world over, and in solving this problem we can also improve the discoverability of their research. But to solve a global problem, we need a global solution. Enter ORCID – the Open Researcher and Contributor ID.

ORCID provides individual researchers and scholars with a persistent unique identifier which links a researcher with their works and professional activities – ensuring the work is recognised and discoverable. Sure, there are many other researcher identifiers out there but ORCID has the ability to reach across disciplines, research sectors, and national boundaries. ORCID distinguishes an individual researcher in a similar way to how a Digital Object Identifier (DOI) uniquely identifies a scholarly publication. It lasts for a lifetime and remains the same whether you move institutions, countries or (heaven forbid) change disciplines. If you’ve not seen one before, check out the ORCID for Nobel Prize laureate and Australian of the Year Peter C. Doherty.

ORCID works as a solution to name ambiguity because it is:

  • Widely used;
  • Embedded in the workflows of submission systems for publishers, funders and institutions;
  • The product of a global, collaborative effort;
  • Open, non-profit and researcher-driven.

There are over 300 ORCID members (organisations or groups of organisations) from every section of the international research community. Over 1.5 million ORCID identifiers for individual researchers have been issued since its launch in October 2012. In Australia, the key role of ORCID has been recognised in two Joint Statements and – as is the case in many other countries – plans for an ORCID Consortium are well underway.

From its very beginning, ORCID has embraced “open” – it is free for researchers to sign up, open to any interested organisation to join, releases software under an Open Source Software license, and provides a free public API. Institutions that wish to embed ORCID into their workflows are advised to join ORCID, and this membership fee (for service) in turn supports ORCID’s continued operation as a non-profit entity.

A key activity of ORCID at the moment is completing the metadata round trip. It sure doesn’t sound exciting but it is actually. Really! It works like this: when a researcher submits an article to a publisher, a dataset to a data centre, or a grant to a funder, they include their ORCID iD. When the work is published and the DOI assigned, information about the work is automatically connected to the researcher’s ORCID record. Other systems can query the ORCID registry and draw in that information. This will save researchers a lot of time currently spent updating multiple data systems, and ensures correct credit and discoverability of their research. See? Exciting, huh!
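
The “draw in” step above is just a call to ORCID’s free public API. As a minimal sketch – assuming the current v3.0 public endpoint, and using 0000-0002-1825-0097, the example iD from ORCID’s own documentation, rather than a real researcher’s record:

```python
from urllib.request import Request

# ORCID public API base URL (assumed current v3.0 endpoint).
PUB_API = "https://pub.orcid.org/v3.0"

def works_request(orcid_id: str) -> Request:
    """Build a request for the public works summary of an ORCID iD."""
    return Request(
        f"{PUB_API}/{orcid_id}/works",
        headers={"Accept": "application/json"},  # ask for JSON, not XML
    )

# 0000-0002-1825-0097 is the example iD used in ORCID's documentation.
req = works_request("0000-0002-1825-0097")
print(req.full_url)
# Fetching and parsing the response would then be e.g.:
#   import json, urllib.request
#   works = json.load(urllib.request.urlopen(req))
```

Any system – a repository, a CRIS, a funder database – can make the same query and pull the researcher’s works into its own records.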

Another great thing ORCID is doing is Peer Review Week (28 September – 2 October), which grew out of informal conversations between ORCID, Sense about Science, ScienceOpen, and Wiley. The week highlights a collaborative effort in finding ways to build trust in peer review by making the process more transparent and giving credit for the peer review activity. ORCID have also been collaborating with Mozilla Science Lab, BioMed Central, Public Library of Science, The Wellcome Trust, and Digital Science, among others, to develop a prototype for assigning badges to individuals based on the contributor role vocabulary developed by Project CRediT earlier this year.

It’s great news that this year and for the first time ever, ORCID are officially joining the Open Access Week celebrations. OA Week runs from October 19-26 and their goal is to sign up 10,000 new ORCID registrants and increase the number of connections between ORCID iDs and organisation iDs. They hope you can help! So go on, why not go sign up for an ORCID iD now? You’ll be helping to ensure your scholarly work is discoverable, correctly attributed to you, and you’ll save time in the bargain.

About the author

Natasha Simons is a Research Data Management Specialist with the Australian National Data Service.

Open data is good, because …

Belinda Weaver writes on the many benefits of open data

Contact: Twitter @cloudaus

Why should we advocate for open data? What benefits does it bring?

Transparency, for one.

Governments do stuff. We don’t always like it. It helps if we have the data to back up our objections. The Guardian datablog publishes and visualises a lot of information which helps pierce the opacity around government. Two examples – the costs of the post-GFC UK bank bailout and where UK spending actually goes. Both do a great job of communicating a message, and the data can be downloaded and reused.

The ABC’s FactCheck service is one way Australians can check on what governments are saying.

Open data makes things more efficient.

Disasters happen. In the response phase, open data helps relief agencies get the information they need to direct operations on the ground. It helps governments get the plans and details of the infrastructure they need to fix. The New Zealand response to the Christchurch earthquake is a case in point. Crisis.net is a global source of information to help make disaster response quicker and more efficient.

Open data helps join things up.

Cities are complex beasts, and making things work in sync requires a lot of planning and coordination. Plenar.io provides a platform for all kinds of data – transport, air quality – to be stored, interrogated and overlaid. Chicago and San Francisco in the US and Glasgow and Newcastle in the UK have all implemented Plenario for city data. Data exists on a single map and a single timeline, making it easy to access multiple datasets at once, even those originally housed at different data portals.

Open data democratises access.

Codex Sinaiticus, the Christian Bible in Greek, was handwritten more than 1,600 years ago and is the oldest substantial book to survive antiquity. The manuscript contains the oldest complete copy of the New Testament. Its text is of outstanding importance for the history of the Bible, and the manuscript is of supreme importance for the history of the book. The manuscript is held in four locations – London, St Petersburg, Sinai and Leipzig. Now that the item has been fully digitised, scholars from anywhere can work on it. What was once accessible only to a privileged few is now open to all.

Open data enables new businesses.

The Go Brisbane app allows users to save favourite journeys and view timetables for them very quickly. This beats using official transport websites where getting the same information takes a whole lot longer. Open mapping information has created a range of new businesses – travel, holidays, restaurant guides, walking tours, direction finders … the possibilities are endless.

Open data saves lives.

‘Dr Internet’ is blamed for many false diagnoses, but it can also foster real ones. As more and more medical information becomes freely available, patients can investigate their problems and possibly find some answers, as this story shows.

Have you got a good open data story? Share it here.

About the author

Belinda Weaver is eResearch Analyst Team Leader, Queensland Cyber Infrastructure Foundation.

Going beyond the published article: how Open Access is just a start

Alex Holcombe argues that if academics learn how to code and post their code, replication can become routine instead of a heroic, painstaking effort.

The post is especially timely following the publication last week of a paper in Science documenting the difficulty in replicating published psychology studies

Contact: alex.holcombe@sydney.edu.au or Twitter @ceptional

A published article is only a summary description of the work that was done and also only a summary of the data that was collected. Making the published article open access is important, but is only a start towards opening up the research described by the article.

Openness is fundamental to science, because scientific results should be verifiable. For each result, at least one of two possible verification approaches ought to be made viable. One is scrutinizing every step of the research process that yielded the new finding, looking for errors or misleading conclusions. The second approach is to repeat the research.

The two approaches are linked, in that both require full disclosure of the research process. The research can only be judged error-free if every step of the research is available to be scrutinized. This information is also what’s needed to repeat the research. I will use the term reproducible research for this high standard of full publication that we should be aspiring to.

Figure 1 Via the Center for Open Science, a community of researchers (including myself) have developed badges to indicate that the components of a scientific report needed for reproducibility are available. The badges can then be awarded to qualified papers by journals.


Unfortunately, the explicitness that would be required for exact reproduction is far higher than the norm of what is typically published. This is true even for the most influential studies. I come across this problem regularly in my role as “Registered Replication Report” co-editor for the journal Perspectives on Psychological Science. As editor, I supervise the replication of important experiments, and this often requires extensive work in re-developing experiment software and analysis code on the basis of the summaries typical in journal articles.

In experimental psychology, the main steps of doing an experiment are presenting the stimuli to the participant, collecting the responses, doing statistical analysis of the responses, and creating the plots. If one were to write out every step involved in these, it would take a very long time.

It would certainly take more time than most academics have. Academics today are under tremendous pressure to generate new findings for their next paper, and under little or no pressure to document their processes meticulously.

Funder mandates and incentives have been pushing researchers towards making their science more reproducible. This, accompanied by cultural change at the grassroots and at the level of journal editors, is making substantial headway. Efforts at each of these levels reinforce each other in a virtuous circle.


Figure 2 A virtuous circle of action at multiple levels is needed to achieve full reproducibility. “Open science” is a closely related concept


One facet of the virtuous circle that often goes unappreciated is automation. Automation requires advances in technology, and in science these advances are often achieved by researchers and programmers contributing open source code. Automation has many benefits beyond reproducibility, which allows it to progress independently of reproducibility incentives and culture.

Automation has of course transformed many industries previously, from the making of telephone calls (no more switchboard operators) to the making of cars (robots do much of the assembly).

Figure 3 An early printing press. While the actual printing is here done by machine, humans must guide the machine through hundreds of steps. Unfortunately, this is reminiscent of how much of laboratory science is done today.


But rather than being a large-scale production system, science is more like a craft. In experimental psychology, each laboratory is doing its own little study, and often doing experiments significantly different from those the same lab did a year ago. From one project to the next, the steps can change. If a researcher is doing an experiment or data analysis procedure that they may never have to repeat, there is little incentive to automate it.

It is almost always true, however, that aspects of one’s data analysis will need to be repeated. I refer not only to the need to repeat the analysis for future projects, but also to what one must do to satisfy reviewers. After submission of one’s manuscript to a journal, it tends to come back with peer reviewer complaints about the amount of data (not enough) or the way the data was analysed (not quite right).

This is where I was first truly pleased by having automated my processes – when to satisfy the reviewers, I only needed to change a few lines in my analysis code. Following that, simply clicking “run” in my analysis program resulted in all the relevant numbers and statistics being calculated and all the plots of results being re-done and saved in a tidy folder.
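
A minimal sketch of that kind of scripted analysis – with invented file and column names, and toy statistics standing in for a real pipeline – shows the principle: one entry point regenerates every number and writes the results to a tidy folder.

```python
import csv
import statistics
from pathlib import Path

def run_analysis(data_rows, out_dir="results"):
    """Recompute all summary statistics and write them to out_dir.

    Re-running this after a change to the data or the analysis
    regenerates everything; nothing is updated by hand.
    """
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    scores = [float(r["score"]) for r in data_rows]
    stats = {
        "n": len(scores),
        "mean": statistics.mean(scores),
        "sd": statistics.stdev(scores),
    }
    with open(out / "summary.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=stats)
        writer.writeheader()
        writer.writerow(stats)
    return stats

stats = run_analysis([{"score": 4}, {"score": 6}, {"score": 8}])
print(stats)  # {'n': 3, 'mean': 6.0, 'sd': 2.0}
```

A real project would also regenerate plots in the same call, but the payoff is the same: a reviewer’s request becomes a few changed lines and one click of “run”.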

Unfortunately, most researchers, at least in psychology, never learn the skills needed to automate their data analysis or any other process. Usually automation involves programming. For data analysis, the best languages to learn are R and Python.

R has gradually become easier and easier to use, but for those without much programming experience, an intensive effort is still required to become comfortable with it. Python is more elegant, but doesn’t have as much easily-usable statistical code available.

I have begun organising R programming classes for postgraduates at the University of Sydney – here is a description of the first one. I have two main reasons for doing this. Foremost is to empower students, both with the ability to automate their data analyses and with programming skills – useful for a range of endeavours. Second is to make research reproducible, which can only happen if the new generation of scientists are able to automate their processes.

A fantastic organisation of volunteers called Software Carpentry teaches researchers to program. Two junior researchers at the University of Sydney completed the Software Carpentry instructor training program this year – Fabian Held and Westa Domenova. With Fabian and Westa as instructors, a two-day Software Carpentry bootcamp is being planned for February 2016 as part of the ResBaz multi-site research training conference.

Unfortunately, formal postgraduate classes have been sorely lacking at nearly every university I have been associated with, and at the University of Sydney, too, we don’t have the financial resources to set up a class with a properly paid instructor. Fortunately, Software Carpentry provides a fantastic low-cost, volunteer-based way to disseminate programming skills. While it would be hard to find volunteer instructors for most types of classes, something about programming seems to bring out the best in people – just have a look at the amazing range of software resources created by the open-source community.

I like to think I am helping create a future where reproducing the research in published psychology articles will not require extensive software development or many manual steps that must be reverse-engineered from a few paragraphs of description in the article. For the data analysis component of projects, if not the actual experiments, one ought to be able to download and run code that is published along with the paper. Aside from being good for the world, that would make my job editing Registered Replication Reports a lot easier.

About the Author 

Alex Holcombe is Associate Professor of Psychology and Co-director, Centre for Time, University of Sydney, and Associate Editor, Perspectives on Psychological Science.

In an open world, what value do publishers add to research?

Jack Nunn, a consultant in public involvement in research, who works as a researcher in the Public Health Department at La Trobe University, reflects on how publishing could be different.

Contact: Jack.Nunn@latrobe.edu.au or Twitter: @jacknunn

Using only camembert, smoked salmon and controlled laboratory conditions, I had a revelation about the relationship between researchers, publishers and the public. This is the story.

I was in one of the world’s leading laboratories being given a tour of a potentially hazardous area, when suddenly the PA barked ‘ATTENTION ALL STAFF, ATTENTION ALL STAFF’. I was ready for the worst, to evacuate or suit up. But why was I there at all?

I’d spoken earlier that day about public involvement in research and publishing at an event at the Walter and Eliza Hall Research Institute.

It was organised and paid for by the open access publisher BioMed Central, in order to raise awareness about their work.

The publisher recently asked if I would volunteer my time to be a member of the editorial board of the new journal ‘Research Involvement and Engagement’ and also speak at their events in Australia. It is a new journal being run on a not-for-profit model and BioMed Central are world-leaders in open access publishing, so it was exciting to accept.

I found myself plunged into the mysterious, intriguing and often self-perpetuating world of publishing.

The speech I made essentially asked the question ‘What value do publishers add to research, and therefore to the public good?’ This is a different question from how valuable publishing is – to which the answer is ‘very’.

Publishers make lots of money from publishing research, including open access research. In other words, I sought an answer to the question: ‘What are publishers giving back to the research process, in return for the money they take?’

I also asked how the public could be supported to be more involved in every stage of the research cycle, including publishing and dissemination. I ended with my usual plug for Tim Berners-Lee’s eye-opening TED talk about open and linked data, which describes how everyone can access and interpret data – the very embodiment of public involvement in research.

In conclusion, I said that I think publishers have a crucial role in science, and posed a series of questions reflecting on why publishers exist as they do – much as one may ponder ‘Why do we have a Royal Family?’ in a neutral and balanced way.

After I spoke, I met interesting people around a delicious buffet of cheeses and smoked salmon and then was fortunate enough to be given a tour of the research institute.

Within half an hour I’d met world-leading cancer researchers, people developing potential malaria vaccines and seen other labs full of people working late, missing out on time with friends and family in order to do countless wonderful things in the name of research.

As it was a working lab, naturally there were exciting things like negative pressure rooms and gene-sequencers – but also the reminders you were somewhere potentially dangerous, with ‘biohazard’ signs and emergency eyewash and showers at every corner.

Suddenly the PA system barked out ‘ATTENTION ALL STAFF, ATTENTION ALL STAFF’.

They had my attention too.  I was ready to evacuate, or go on a three-day lock-down to hunt for an escaped malaria-carrying mosquito.

The announcement continued:

‘THERE IS LEFT-OVER FOOD UPSTAIRS. Repeat, THERE IS LEFT-OVER FOOD UPSTAIRS ’.

I laughed, half in relief – but on reflection, there was nothing that funny about it. The food was from the BioMed Central event I had spoken at.

Naturally, no one wants food to go to waste – but the funny side wore off when I saw researchers head upstairs to eat leftovers from an event which, like many awareness-raising events, was partly funded by open access fees. These are often paid by research institutions (and thus, indirectly, by taxes or charitable donations) to publishers to cover the costs of making research available without a ‘paywall’.

However, many publishers also spend significant amounts of money to attract researchers to publish with them. Naturally it’s more complicated than this, but a simple thought struck me and I daydreamed…

I day-dreamed of a world where researchers doing life-saving work had publishers eating their leftovers, at events hosted by researchers – events where potential publishers applied for the privilege of publishing them, and researchers decided who they would allow to publish their important research.

I imagined what would happen if all researchers collectively and suddenly decided they didn’t want to submit to ‘for-profit’ publishers because they felt reputations and impact factors were suddenly irrelevant in a digital age, thus disrupting any business model based on prestige.

Would less money go to publishers and more stay within research institutions for research? Would a sea of poor quality research drown good research with no one paid to check it, or would publishing just happen faster, like publishing this blog – the reviewing stage happening afterwards, in the open, in public?

It was a wild day-dream and I blame the cheese.

So, if you ever feel you are not worthy to eat the leftovers and crumbs of others, always ask ‘whose table is it?’

In research, the table is for everyone, and we should all be invited to sit at it as equals.

We just need to figure out who is bringing the cheese and smoked salmon.

About the Author 

Jack has led the development and implementation of an internationally recognised model for building partnerships between the public and researchers. He has worked for Government, leading charities and universities, including the UK’s National Institute for Health Research and Macmillan Cancer Support. He has partnered with the World Health Organisation, the Cochrane Collaboration and community organisations across the UK, Europe, Australia and Asia.

Full disclosure: I receive no money for the time I volunteer with BioMed Central. I did, however, eat more than my fair share of cheese at one of their events. 

This is the first in a series of blogs which we hope will provoke and inform debate on issues in Open Scholarship across Australasia. If you’d like to write for us, please get in touch: eo@aoasg.org.au

What to believe in the new world of open access publishing

Virginia Barbour, Executive Officer, AOASG, Australian National University

It’s never been easy for readers to know what to believe in academic research. The entire history of science publishing has been riddled with controversy and debate from its very beginning, when Thomas Hobbes and Robert Boyle, in the early days of the Royal Society in London, argued over the scientific method itself.

Even a cursory glance at academic publishing since then shows articles contradicting each other’s findings, papers subsequently shown to contain half-truths (even in the serious matter of clinical trials) and yet more that are simply fabricated. Shaky and controversial results have been a part of science since it began to be documented.

Enter a new apparent villain – “predatory open access” publishing, now claimed by some to be overwhelming the literature with questionable research. As highlighted in the recent documentary on Radio National, and subsequently discussed in The Conversation, there has been a proliferation of dodgy new journals and publishers who call themselves “open access” and who eagerly court academics to be editorial board members, to submit their articles and to attend and speak at conferences.

These activities have led to concern over whether any open access publications can be trusted. Librarians in institutions in Australia and elsewhere attempt to keep abreast of all these “predatory” journals and publishers.

In a more positive endeavour, legitimate open access publishers have come together in the Open Access Scholarly Publishers Association (OASPA), and they, other journal associations and the Directory of Open Access Journals have produced ways to assess journals.

Academic publishing has changed since the advent of the internet.

Although the extent of the problem is not known (and may even be exaggerated by ever-expanding blacklists), some academics still submit to questionable journals, newspapers give publicity to bizarre articles from them, and non-academic readers rightly wonder what on earth is going on.

It’s worth remembering how new this all is. Whereas scholarly publishing is 350 years old, it is only 25 years since the web began; academic online publishing followed about 20 years ago. Open access – a part of the wider open scholarship movement (which seeks to enhance integrity and good scholarship) – is barely 15 years old.

What we are witnessing is the oft-repeated story of what happens when any new technology appears. Alongside an explosion of opportunities for good, there will always be those that seek to exploit, such as these predatory publishers.

But just as no one ever assumed that everything in print was trustworthy, neither should that be the case for open access content. And in the end the content is what matters – whether delivered by open access, subscription publishing, or a printed document.

To complicate matters further, alongside this revolution in access, the academic literature itself is evolving apace with papers being put online before review and revisions of papers made available with peer review histories alongside.

Even the format of the academic paper is changing. Datasets or single figures with little explanation attached to them can now be published. The concept of an academic paper that is a definitive statement of “truth” is finally being laid to rest.

It was never a realistic concept and arguably has led to much confusion about the nature of truth, especially in science. Science evolves incrementally. Each finding builds on evidence from before, some of which will stand up to scrutiny via replication, and some not.

As the amount of information available increases exponentially, the challenge for everyone is to learn how to filter and assess the information presented, wherever it is published.

For scientists, one way of deciding how important an article is has traditionally been which journal it has been published in. However, even prestigious journals publish work that is unreliable. Hence there are initiatives such as the San Francisco Declaration on Research Assessment which discourages judging papers only by where they are published.

For non-academic readers, understanding what to trust is even more challenging. Whether the article has been peer-reviewed is a good starting point.

Most important of all perhaps is the need for a modicum of common sense – the type of judgements we apply every day to claims about items in our daily lives: can I see the whole paper or am I just seeing an excerpt? How big was the study being reported? Do the claims seem sensible? Is the result backed up by other things I have read? And what do other experts in this area think of the research?

Virginia Barbour is Executive Officer, Australasian Open Access Support Group at Australian National University.

This article was originally published on The Conversation. Read the original article.

AOASG Response to Australian Government Paper “Vision for a Science Nation”

Earlier this year the Australian Government responded to the Chief Scientist’s paper, Science, Technology, Engineering and Mathematics: Australia’s Future, which was published in September 2014. The Australian Government’s response was entitled Vision for a Science Nation and responses were invited to it.

The AOASG prepared a response, which specifically focuses on discussions around Open Access to the research literature. The response is available below. If you would like a copy of the response or have feedback, please contact us at eo@aoasg.org.au

Australasian Open Access Support Group Response to:

Vision for a Science Nation – Responding to Science, Technology, Engineering and Mathematics: Australia’s Future

July 2015

The AOASG is encouraged that both this paper and the Chief Scientist’s recommendations include reference to the importance of access to research. Professor Chubb divided his report into four sections:

  • Building competitiveness
  • Supporting high quality education and training
  • Maximising research potential
  • Strengthening international engagement

He made one specific recommendation with respect to open access, under the Maximising research potential section where he recommended the government should:

“Enhance dissemination of Australian STEM research by expanding open access policies and improving the supporting infrastructure.”

In addition, he referenced the need for IP regimes to support open access under the Building competitiveness section, where he noted the need to:

“Support the translation and commercialisation of STEM discoveries through: ‘a modern and flexible IP framework that embraces a range of capabilities from open access regimes to smart and agile use of patent and technology transfer strategies.’”

In its response, the Government indicated two areas where it would increase access to research:

Australian competitiveness

“The Government is implementing a strategy to improve the translation of research into commercial outcomes by…

developing a plan to provide business with greater online access to publicly funded research and researchers’ expertise;”

Research

“Enhancing dissemination of Australian research

Australia’s research councils and some Government science agencies have arrangements in place to ensure wide access to research publications arising from the research they fund or conduct. There is no comprehensive policy covering all publicly funded research.

The Government will develop a policy to ensure that more publicly-funded research findings are shared openly and available to be used commercially or in other ways that will bring the greatest benefit to Australians.”

AOASG comments

These recommendations and the responses come at a crucial time for developments in research publishing and access policies globally, amid a vigorous ongoing international debate.

Scholarship is at a crossroads. The outputs of both publicly and privately funded research are often locked behind paywalls, preventing new research opportunities for those without access to libraries with large budgets and excluding those in developing countries from the publicly funded knowledge produced as a result of government research funding.

The UK model of Gold Open Access is unfundable and unsustainable. Studies by the Wellcome Trust [1] and RCUK [2] show that more than £15 million was spent by RCUK in 2013/14 on the costs of Gold Open Access publishing, with a large proportion (and the highest article processing charges) going to “hybrid” Open Access – i.e. payment to traditional publishers for single articles within a subscription journal. Despite such models, most of the world’s research remains inaccessible, as current models reward publishers for limiting access to research.

There are models that Australia should use to increase access to research. Science Europe’s Social Sciences Committee Opinion Paper “The Need for Diamond Engagement around Open Access to High Quality Research Output” [3] highlights the need for partnership between policy makers and publishers to facilitate deposit in repositories; standardisation and interoperability of research information metadata; and the need to build on infrastructures and networks already in place. Other models are possible and are being tried. For example, Knowledge Unlatched [4] is a completely different open access book publishing model, which uses library purchases to pay for the first copy to be published, with the book subsequently made open access to all. This model has been developed to ensure that valuable scholarly works continue to be published in an environment where commercial publishers’ sales targets, and not academic merit alone, can be a significant factor in the decision as to whether a scholarly monograph is published.

Increasing access to research has benefits across all of Australian society and potentially can provide value in all of the areas highlighted by the Chief Scientist – competitiveness, high quality education and training, research and international engagement.

In order to have the maximum effect on all these areas, the Government needs to adopt principles as it seeks to develop a policy on Open Access for Australia.

  1. Open Access must be implemented flexibly. It is becoming clear that there will be no single solution for Open Access; rather, a number of different models will be needed within an environment where the default is “Open”. What is currently lacking, however, is sufficient funding to develop new experiments and support innovative solutions. The Government should encourage and make available financial support for the development of multiple solutions, through funded experiments where needed and support for functioning, already established solutions. Examples of experiments include Knowledge Unlatched [4] for the publication of books in the humanities and SCOAP3 for particle physics [5].
  2. Green Open Access, providing access via university repositories, is currently the most well established mechanism for providing access to the diverse outputs of Australian universities. Investment in repositories is currently made through individual universities, delivering a fragmented landscape without a cohesive infrastructure and resulting in delays in implementing, for example, technologies for different metrics to provide information on impact. Repositories need to be able to innovate and develop within an international and national environment. They should be part of the research infrastructure roadmap, and a national project and program is required. It needs to link to international work such as that of COAR [6].
  3. There may also be a case for support of Gold Open Access publishing via article processing charges (APCs) in some circumstances, especially from innovative, not-for-profit or society publishers. However, universities currently have little ability to support APCs, given their existing commitment to the payment of journal subscriptions.
  4. Any policy on Open Access should not be aimed at providing access to just one sector (e.g. science or business). Open Access to Australian research outputs, including older research material in collections, is also a key component of improving education and engagement in science in Australia, and any policy therefore should aim to increase access across all of Australian society. In addition, increasing global access to research from Australia plays a role in international engagement.
  5. Reuse and machine readability of Open Access work are critical issues in maximising its usefulness. Currently, many works that are labelled “Open Access” are in fact only free to read, in that they do not have an associated licence that enshrines the right to reuse, mine and build on the work – and may only be free after an embargo period. The Government should build on the work of its existing licensing framework, AusGOAL [7], and encourage the development of policies across the university sector that require all work to be licensed, under Creative Commons licensing [8], in a way that enables reuse. We believe this fits the Chief Scientist’s recommendation for “a modern and flexible IP framework”.
  6. Lack of interoperability and as yet patchy uptake of some infrastructure initiatives are holding back Open Access development. The Government should support the development and implementation of standards and interoperability initiatives in key areas such as the exchange of data within and between national and international repository networks (as currently being led internationally by COAR [6]) and the facilitation of deposit of articles in repositories, as well as essential infrastructure, such as the uptake of ORCiD [9] identifiers for researchers. It is also important that any Open Access policy is developed in conjunction with current initiatives on open data publishing.
  7. The role of supporting researchers, particularly early career researchers, needs consideration and development. Mechanisms to support these researchers are required to enable maximum benefit for the future of Australian research.
  8. Any development in Open Access should also be considered in parallel with ongoing developments such as those on metrics and incentives within research. There has been much anxiety among scientists that new ways of publishing and dissemination are not adequately rewarded by their institutions and funders, and the Government should encourage a culture whereby being “Open” is supported and rewarded. The UK’s HEFCE has recently published a report with recommendations for the use of metrics in the UK’s higher education sector [10].

References

  1. Wellcome Trust The Reckoning: An Analysis of Wellcome Trust Open Access Spend 2013-14
  2. Research Councils UK 2014 Independent Review of Implementation
  3. Science Europe’s Social Sciences Committee Opinion Paper The Need for Diamond Engagement around Open Access to High Quality Research Output
  4. Knowledge Unlatched http://www.knowledgeunlatched.org/
  5. SCOAP3 – Sponsoring Consortium for Open Access Publishing in Particle Physics http://scoap3.org/
  6. Confederation of Open Access Repositories (COAR) https://www.coar-repositories.org/
  7. AusGOAL http://www.ausgoal.gov.au/
  8. Creative Commons Australia http://creativecommons.org.au/
  9. Open Researcher and Contributor ID (ORCiD) http://orcid.org/
  10. HEFCE The Metric Tide

How researchers can protect themselves from publishing and conference scams

Roxanne Missingham, University Librarian at ANU and AOASG’s Deputy Chair, provides practical advice to researchers on how to prevent exploitation through being published in a journal, or participating in a conference, that could be considered “predatory” or “vanity”.

With the evolution of open access, enterprises have emerged that run conferences and journals with low or no peer review or other quality mechanisms. They approach academics, particularly early career academics, soliciting contributions for reputable sounding journals and conferences.

On 2 August, the ABC’s Background Briefing highlighted the operation of this industry in “Predatory publishers criticised for ‘unethical, unprincipled’ tactics”, focusing in particular on one organisation, OMICS. There is little doubt that the industry has burgeoned. The standard of review in such unethical journals is best illustrated by the article written by David Mazières and Eddie Kohler, which consists essentially of the words of the title repeated over and over. The article was accepted by the International Journal of Advanced Computer Technology, and the review process included a peer review report that described it as “excellent”. You can see the documentation here. Not only will these publishers take your publications, they will charge you for the pleasure (or lack thereof).

Jeffrey Beall, librarian at Auraria Library, University of Colorado, Denver, coined the term “predatory publisher” after noticing a large number of emails requesting he submit articles to or join editorial boards of journals he had not heard of.  His research has resulted in lists – “Potential, possible, or probable predatory scholarly open-access publishers” and “Potential, possible, or probable predatory scholarly open-access journals”.

While Beall’s lists have been the subject of some debate, identifying low-quality publishers is important in assisting researchers. The debate on predatory publishing does not mean that open access publishing is poor per se. There are many high quality open access publishers, including well established university presses at the University of Adelaide, the Australian National University and the University of Technology, Sydney.

Ensuring the quality of the journals you submit to and the conferences you propose papers for is important in developing your research profile and building your career.

And don’t forget, traditional publishers can also have problems of quality. For example, in early 2014 Springer and IEEE removed more than 120 papers after Cyril Labbé of Joseph Fourier University in Grenoble, France, discovered computer-generated papers published in their journals.

How can you prevent this happening to you?

Three major tips are:

  • If you haven’t heard of the journal or conference check Beall’s list or ask your local librarian
  • Don’t believe the web site – ask your colleagues and look at indicators of journal impact. Your library’s guide to increasing research impact, with information on journal measures and tools, can help you
  • Don’t respond to unsolicited emails – choose the journals you wish to submit to.

If in doubt contact your local Library or Research Office.

The Australasian Open Access Support Group is committed to supporting quality open access publishing and will continue to provide information through this web site and in our twitter, newsletters and discussion list.

Roxanne Missingham, University Librarian (Chief Scholarly Information Services), The Australian National University, Canberra.

UK Open access: review of implementation of the RCUK policy

Opening up access to research outputs is undoubtedly vital to achieving optimal return on public investment in research, increasing national benefit from research, and increasing international visibility and future research collaboration.

The UK has taken a different approach to open access from that of most other nations. The April 2013 policy of the seven UK research councils, under the umbrella body Research Councils UK (RCUK), follows the recommendations of the 2012 Finch report to the UK Government. The policy has been the subject of much scrutiny because it is a policy position based on the gold model, which requires funding – it is funded primarily by block grants to UK institutions to pay article processing charges (APCs). As Richard Poynder has noted, the Finch Report “ignited a firestorm of protest, not least because it estimated that its recommendations would cost the UK research community an additional £50-60 million a year”.

The Research Councils UK have now published an independent report on the first 16 months of the implementation of their OA policy – one that adds a remarkable amount of data.

Care is taken by the authors of the report to state that it is not a review of the policy, nor an entry into the debate about green versus gold access. The scene is set by the existing policy, although the figures do, in themselves, raise issues about the approach the UK has taken to open access. On a positive note, the report says: “One common factor amongst all stakeholder groups was a general acceptance and welcome given to the concept of open access.”

The headline figure is significant – £16.9 million was spent from the UK Open Access fund in 2013/14. While spending has not reached the levels feared early on, it is a very significant expenditure. It should be noted that costs are affected by a wide variety of factors, including organisational capabilities.

Studying slightly over half of the institutions funded through the Fund (55 of 107), the report identifies that implementation of the policy occurred without a streamlined cost-effective monitoring or data collection process. Parallel systems for gold and green access appear to have caused complexities and confusion. The report notes “it is apparent that larger, more distributed organisations have been unable to fully track publications that have been made open access through the deposit of author final manuscripts in repositories”.

The data on the actual APC costs is revealing:

  • Maximum average publisher APC: £3,710
  • Minimum average publisher APC: £72
  • Median average publisher APC: £1,393

Perhaps the most interesting figure is the number of publishers who received revenue from the fund:

  • 80% of papers were from 14 publishers
  • 90% were from 24 publishers
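Concentration figures like these are straightforward to compute whenever an institution or funder keeps per-paper payment records. The sketch below uses invented figures purely for illustration (the real distribution is in the report’s appendix); it shows how few publishers account for a given share of funded papers.

```python
# Hypothetical sketch of measuring publisher concentration in APC-funded
# papers. The counts below are invented for illustration only.
from collections import Counter

# Papers paid for from an OA fund, keyed by publisher (invented data).
papers_by_publisher = Counter({
    "Elsevier": 320, "Wiley": 250, "Springer": 120, "PLOS": 90,
    "BMC": 80, "OUP": 50, "Other": 50, "Taylor & Francis": 40,
})

def publishers_covering(counts, share):
    """Smallest number of publishers accounting for `share` of papers."""
    total = sum(counts.values())
    running, n = 0, 0
    for _, c in counts.most_common():  # publishers, largest first
        running += c
        n += 1
        if running / total >= share:
            return n
    return n

print(publishers_covering(papers_by_publisher, 0.8))  # → 5
```

With real payment data, the same cumulative-share calculation would reproduce statements of the form “80% of papers were from 14 publishers”.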

Who are the largest recipients of APCs (i.e. publishers)? The report lists the largest recipients in a very useful appendix. Elsevier and Wiley are by far the largest, and only two fully open access publishers, PLOS and BMC, are in the top 10.


Clearly, with publishing concentrated in the hands of such a small number of publishers, any change in the payment policy and practice of even fewer than 10 of them would have an extremely strong impact on the system.

Written and oral evidence to the review panel found that academics were “confused” and “disengaged” in relation to the policy. There was particular confusion over the licences required. This raises questions about the role of RCUK, other research bodies and institutions in communicating the policy, and about the transparency of OA’s impact. Danny Kingsley, in her blog post on the report, notes that the report’s comment that the RCUK preference for gold is a barrier to implementation is borne out anecdotally at Cambridge University.

Four major areas raised by the report are very important for future developments in OA.

The first is undoubtedly the cost, and who receives the funds. The £16.9 million has been paid to publishers – for many of them, in addition to the revenue received through traditional channels such as library subscriptions and author payments. This “double dipping” has been the subject of considerable debate by RLUK and others, and it is notable that (as in Wellcome’s report) the highest APCs were for hybrid journal articles. The fundamental question raised by librarians has been whether it is sustainable to increase revenue to such a small number of publishers.

Second, the sustainability of the existing model. There are signs that publishers may be open to approaching funding of scholarly publishing differently. The Jisc project on the total cost of ownership seeks to develop models where payments to publishers are negotiated on the basis of reducing subscriptions to balance the open access payments. Springer and Jisc have announced a new arrangement to implement a model that takes account of the open access payments.

Third, the issue of embargoes is central to future developments of green and gold access. The report notes the continuing concerns of humanities and social sciences researchers about short (i.e. 6 or 12 month) embargoes. A discussion of continued long embargoes is included in the report, as well as in the HEFCE-commissioned report on monographs and open access. The argument from scholarly societies is about ensuring continued revenue from journal publishing – based on the assumption that the primary revenue model in some areas will continue to be based on subscriptions. The report notes that the “panel feels that there is not enough information available at this early stage to come to an evidence-based conclusion on the issue of embargoes and, therefore, its recommendation is to ensure that continued attention is given to the matter in subsequent reviews”.

Fourth, it is clear that there is a substantial administrative burden associated with policy implementation and compliance monitoring – for researchers, institutions and the funders involved. The report recommends further thinking in this area, but specifically suggests that RCUK should mandate the use of ORCiD identifiers (something that has just been supported by Australia’s NHMRC and ARC).

This is a must read for all interested in OA and its costs. As the report says, “the conversation on the need for an accelerated transition to open access is no longer one reserved to librarians and open science advocates, but has matured into an international collaboration”. Whether the policy is a success or failure will depend upon your views – the costs are significant, but achieving access to 10,066 publications via fully gold open access in the first year (as well as 4,410 publications via the green route, also reported in the review) is an important step forward in open access.

Roxanne Missingham, University Librarian (Chief Scholarly Information Services), The Australian National University, Canberra

Virginia Barbour, Executive Officer, AOASG

Published April 21, 2015

Open Access Scholarly Publishing

What is open access?

When used in relation to the dissemination of research findings, the phrase ‘Open Access’ refers to the practice of making the information freely available to anyone with an internet connection rather than leaving it hidden behind a subscription paywall.

Why is open access important?

Researchers formally share the results of their work by publishing it in the academic literature; primarily in the form of peer reviewed journal articles.  The research behind most of the articles produced in Australia is publicly funded but the vast majority of the articles are published in subscription journals which means that the information is only being shared with those who have a personal or institutional subscription.  By restricting access to only those who can afford to pay for access, the reach and impact of the research is severely constrained.   Practitioners such as pharmacists, teachers, nurses and business people are unable to see the latest developments in their field. Researchers in developing countries are unable to join the conversation.  Open access uses digital technology to maximise the visibility, accessibility and impact of research.

How is open access delivered?

The two main options for delivering open access are:

  • ‘Gold Open Access’ is where the published version of the article is freely available to anyone via the journal website.   If the journal is an open access journal, the entire contents of the journal will be freely available to all.  If the journal is a ‘Hybrid’ journal,  then only some articles will be freely available and a subscription will be required to read the full journal issue.  Some open access journals and all ‘Hybrid’ journals charge authors a fee to make their article open access.
  • ‘Green Open Access’ is where the author uploads, to an institutional or discipline-themed repository, an open access copy of an article published in a subscription journal.  In most cases, the version uploaded will be the ‘author’s accepted manuscript’ (AAM) version (which includes revisions made as a result of peer review but not the formatting, branding and ‘value-adds’ contributed by the publisher). No payment is required but many publishers require an embargo period (commonly 12 months) before the AAM is made open access.

Open Access Mandates

Around the world, 90 research funding bodies, including the Australian Research Council (ARC) and the National Health & Medical Research Council (NHMRC) have made it a ‘condition of grant’ that articles arising from their funding are made open access.   In most cases, the obligation applies only to peer reviewed journal articles, but, in the case of the ARC in Australia, the obligation applies to all formats including books. Most funders will accept embargoes of up to 12 months, so researchers are free to choose between ‘Gold open access’ (using part of their grant to pay any article processing charges) or ‘Green open access’ which does not involve paying a fee (but researchers must upload the appropriate version to a repository).

Predatory Publishers: the ‘dark side’ of open access publishing

The availability of  open source journal publishing software, such as OJS (Open Journal Systems), has lowered the cost of establishing a new journal.  Most of the new journals that have been launched using this type of software are managed by groups of academics or scholarly societies.  Generally, they receive subsidies from the host institution which allows the journal to be fully Open Access;   i.e. free to readers AND authors.

Unfortunately, a number of opportunistic entrepreneurs are exploiting the willingness of some research funders and universities to pay for ‘Gold Open Access’ and launching new journals that are money-making ventures disguised as scholarly journals. These journals claim to be peer reviewed, but articles are generally all accepted without revision provided the author pays the, generally modest, article processing charge. Articles containing serious flaws and plagiarised content have been linked to these so-called ‘predatory publishers’ as a consequence of the absence of any quality control mechanisms. While these journals represent less than 3% of all the Open Access journals currently available, it is essential that researchers (especially early career researchers) learn how to identify potentially bogus journals. Clues that a journal may not be truly scholarly include:

  • Journal is not listed in standard periodical directories (eg Ulrichs) and not indexed by the major indexes (eg ProQuest, EBSCO, Scopus, Web of Science).
  • Journal does not identify a formal editorial / review board.
  • Journal claims to publish articles within an improbably short timeframe (eg 21 days).
  • Journal claims to have an ‘impact factor’ based on metrics with no international standing (eg Global Impact Factor, Index Copernicus, View Factor).
  • Journal falsely claims to be indexed in legitimate abstracting and indexing services, or claims that its content is indexed in resources that are not abstracting and indexing services.
  • Journal/publisher sends email requests for manuscripts, peer reviewers and editorial board members to scholars in unrelated disciplines.
  • Journal publishes papers already published in other venues/outlets without providing appropriate credits.
  • Publisher claims to be a “leading publisher” even though it is a novice organization.
  • Journal has a ‘shop front’ in a Western country for the purpose of functioning as a vanity press for scholars in a developing country.
  • Publisher does minimal or no copyediting.
  • Journal’s “contact us” page does not reveal its location.
  • The journal/publisher website includes spelling and grammatical errors.

For more information about predatory publishers (including a list of ‘suspect’ companies), refer to the website maintained by Jeffrey Beall, a librarian in Colorado. http://scholarlyoa.com/publishers/

This work by Paula Callan is licensed under a Creative Commons Attribution 4.0 International License.

This post is also available as a downloadable WORD document: Open Access_Briefing_Paper