An OA publisher’s perspective on CC-BY

As the Director of the University of Adelaide Press, I am participating in the Humanities and Social Sciences session at the OASPA (Open Access Scholarly Publishers Association) conference in Riga, Latvia, later this month.

OASPA was initially set up to bring together Open Access journal publishers, and is now keen to include book publishers in all disciplines.

At present (which is why I am speaking), OASPA requires that its members not only have published at least one Open Access book, but also that it be published under a licence that allows the “broadest re-use of published material possible”.

Their preference is the ‘CC-BY’ licence, now required for funded research by the United Kingdom funding bodies and the European Union, and increasingly by other funding bodies around the world.

I do not believe this licence is automatically appropriate for the Humanities and Social Sciences, which generally publish in books.

‘Open Access’ as a term was formally adopted in the Budapest Open Access Initiative, drafted in December 2001 and released in February 2002, with the aim of assisting faster advances in Sciences, Medicine and Health.

Subsequently, the six Creative Commons licences were created to provide globally coherent copyright licences.

I have no quibble with the most open licence of all, the CC-BY licence, when it is used in Sciences, Medicine and Health. Nor do I object if any author in the Humanities and Social Sciences wishes to use it.

My quibble is with it being mandated for all of us. When there are six different Creative Commons licences, I disagree flatly and categorically that only one must be used.

The CC-BY licence not only allows all readers free, open access to the text, and the right to share and quote it, but also to adapt it and create what are called, mysteriously, “derivative works”.

There is no requirement for these derivative works to undergo the same rigorous peer review before publication that the original work had to pass.

It also allows them to commercialise their derivative work without sharing the profits with the original author; the only condition is that they attribute the original work.

No one has yet explained to me what a derivative work is, and even in the legal language of the licence itself it remains a vague term.

This licence is undoubtedly perfect when applied to the results of fast-moving medical research, for example in genetics.

But it could equally allow an unscrupulous publisher to patch together a very good textbook and make a killing, probably selling it back to the same institutions that produced the original scholarly texts.

We are already more than aware of the way institutions are forced to buy back their own research in journal packages – packages whose publishers did not pay for the content, indeed charged a fee to publish it, then took ownership of the copyright and continue to receive copyright use payments, for example through the Copyright Agency Limited (CAL).

As an author of books myself, I am concerned about a licence allowing someone to take the results of years of original research, largely written in donated time, reword it, change it, and then turn a profit from it.

Of the people I have talked to who insist on exclusive use of the CC-BY licence, I wonder how many have published a book.

On the other hand, I believe that the Creative Commons licences are essential for Open Access publishing to work efficiently and effectively. The University of Adelaide Press will be introducing them in future titles, but will allow authors to choose which one suits their work.

John Emerson
Director, University of Adelaide Press

Four issues restricting widespread green OA in Australia

Australia is a world leader in many aspects of open access. We have institutional repositories in all universities, funding mandates with the two main funding bodies, statements on or mandates for open access at a large number of institutions and a large research output available in many open access avenues. A summary of centrally supported initiatives in this area is here.

However, we can do more. This blog outlines four impediments to the widespread uptake of open access in Australia: a lack of data about what Australian research is openly available, copyright transfer agreements, the academic reward system, and the need for improved national discovery services. We suggest some solutions for each of these issues.

Issue 1 – Lack of data about what Australian research is available OA

We collect good data in Australia about the amount of research being created and published annually. Equally, a considerable amount of Australian research is being made available to the wider community through deposit in institutional repositories, in subject-based repositories (PubMed Central, arXiv, SSRN and the like), and through publication in open access journals.

However, this information is not compiled in a way to ascertain:
1. What percentage of current Australian research is available open access.
2. Where Australian research is being made available (institutional or subject-based repositories and open access journals).
3. The disciplinary spread of open access materials – an important indicator of areas needing attention.

Without this information it will be difficult to ascertain the level of impact the ARC and NHMRC policies are having on the availability of open access material from current Australian research. There are three actions that could help inform this area.

Solution 1

First it would be enormously helpful to know the percentage of Australian publications that are available open access.

There have been two definitive studies published on worldwide open access availability. Björk et al’s 2010 study concluded that 21% of research published in 2008 was openly accessible in 2009. Gargouri et al’s 2012 study found 24% of research was openly accessible.

But in these studies the method used to determine which work was available was to search for the items manually across several search platforms. This is clearly very time consuming, and a study like this in Australia would require funding.

Solution 2

Second we need an easily accessible summary of the number of full text open access items in institutional repositories across the country. In an attempt to address this, the National Library of Australia aggregates research outputs from all Australian university repositories into Trove, and is working with the sector to improve discoverability and metrics around this collection. One challenge is that some repositories do not specify whether records have an open access full text item attached.

This issue was raised during a poll of repository managers in 2012. The poll found that as at June that year there were about 200,000 open access articles, theses and archive material (which includes images) in Australian university institutional repositories. Currently there is no automated way of obtaining an updated figure.

Solution 3

Third, a compliance monitoring tool needs to be developed to assist the ARC and NHMRC in managing their open access policies. All institutional repositories in Australia are currently implementing a standardised field to indicate that an item results from funding, but to date there is no indication of how this might be harvested and reported on.
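To make this concrete, here is a minimal sketch of the parsing step such a tool might perform, assuming a hypothetical OAI-PMH-style harvest in which grant links are recorded in a `dc:relation` field using `purl.org/au-research/grants` identifiers (a convention some Australian repositories have adopted, but shown here as an assumption rather than a confirmed national standard). It simply counts the records in a sample response that carry such a field:

```python
import xml.etree.ElementTree as ET

# A tiny OAI-PMH-style sample; a real harvest would come from each
# repository's OAI endpoint. The grant-identifier convention shown
# here is an assumption for illustration.
SAMPLE = """<?xml version="1.0"?>
<ListRecords xmlns:dc="http://purl.org/dc/elements/1.1/">
  <record>
    <dc:title>Funded paper</dc:title>
    <dc:relation>http://purl.org/au-research/grants/arc/DP12345</dc:relation>
  </record>
  <record>
    <dc:title>Unfunded paper</dc:title>
  </record>
</ListRecords>"""

GRANT_PREFIX = "http://purl.org/au-research/grants/"

def count_funded(xml_text: str) -> int:
    """Count records carrying a grant identifier in a dc:relation field."""
    ns = {"dc": "http://purl.org/dc/elements/1.1/"}
    root = ET.fromstring(xml_text)
    funded = 0
    for record in root.findall("record"):
        relations = record.findall("dc:relation", ns)
        if any((r.text or "").startswith(GRANT_PREFIX) for r in relations):
            funded += 1
    return funded

print(count_funded(SAMPLE))  # 1 of the 2 sample records is grant-linked
```

A real compliance tool would page through each repository's OAI-PMH endpoint and report per-funder totals, but the core harvesting step would look much like this.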

Issue 2 – Copyright transfer agreements

As AOASG has already noted, there is a serious challenge in keeping up with copyright agreements as they change. In reality, it is extremely difficult for an individual researcher to remain across all of the nuances of these agreements. Studies have demonstrated that doing the copyright checking on behalf of the researcher increases deposits into repositories.

But the broader problem is actually twofold. First, researchers often have little understanding of the copyright status of their published work. Many do not read the copyright transfer agreements they sign before publication, and most do not keep a copy of these legal documents. While there is currently some advice for researchers about copyright management, such as this page from the University of Sydney, awareness of copyright generally remains poor amongst the research community.

But before we start wagging our fingers at researchers, let’s consider the second, related issue. The copyright transfer agreements presented to researchers by publishers are often many pages long, written in small font and hard to understand. In addition, these agreements are not consistent – they differ between publishers, and titles from the same publisher often have different agreements.

Generally publishers ask researchers to assign exclusive copyright to them. But in most cases publishers only need the right of first publication of work, and normally do not need to control how research is used and distributed beyond this. There are options for researchers to negotiate different arrangements with their publishers, but the level of uptake of these in Australia is anecdotally very low.

It is highly unlikely that any specific action can force publishers to simplify their copyright transfer agreements. But there are a couple of actions the research community can take to improve the current situation.

Solution 4

It would help to have an Australian version of the SPARC Author Addendum tool which can be attached to copyright transfer agreements. This would need to be supported by a concerted education campaign about what rights researchers have, including training materials.

Solution 5

In addition, the many researchers in Australia who work as editors for scholarly journals are well placed to negotiate these arrangements with their publishers on behalf of their authors. An education campaign aimed at journal editors would assist them in this.

Issue 3 – The academic reward system

The academic reward system supports the current publishing status quo. Widespread uptake of open access will continue to be a challenge while this is the case. A reliance on a numerical analysis of the number of articles published in journals with high Journal Impact Factors as a proxy for quality assessment is a narrow and limiting system.

There are many issues with the Journal Impact Factor. It also causes challenges for open access, as it retains emphasis on a small number of specific journals which are, in the vast majority, subscription based. Yet there is evidence to show that open access and subscription journals of the same age have the same impact, indicating that it is time to look at other methods of assessing quality.

Currently the markers used to assess promotion do not differ much from those used for grant allocation. However, the contribution made by researchers to their academic community reaches far beyond simply their publication output. This includes editing journals and the peer review of papers. As there is currently no quantification of this work, the extent of the problem is unknown, although concerns about work overload have been expressed by the academic community. There are serious implications for the sustainability of scholarly publication in terms of human capital.

Solution 6

We need to move to assessment based on article-level metrics rather than on the organ of publication. It would be helpful if assessments such as ERA and funding allocation were to embrace existing alternative metrics. Examples include Impact Story, Plum Analytics, PLOS ALM, Altmetrics and Google Scholar.

Solution 7

Institutions could consider recognising, in their promotion rounds, the hidden work academics undertake to support the publication process. Recognition of peer review and editing roles, as well as of researchers who publish open access journals using OJS or the like, would add value to these activities and make the scholarly publication system more sustainable.

Issue 4 – Improved national discovery services

This last issue is, in some ways, related to the first – knowing more about where the research we are producing is ending up. But it has a broader remit, for example incorporating data as a research outcome. Currently researchers can register their data with Research Data Australia, which lists over 87,000 collections from over 23,000 contributing research teams.

We need to move beyond simply collecting research, and start working on ways to link data as research outcomes to reports on research publications.

Between 2004 and 2008 the Australian Partnership for Sustainable Repositories (APSR) provided assistance and support for the repository community and developed technical solutions relating to interoperability and other repository issues.

APSR was supported by Systemic Infrastructure Initiative (SII) funding [note: the original post said NCRIS funding – thanks to David Groenewegen for pointing out the error. Amended 16 August]. When this ended, repository manager support was taken over by CAIRSS, financed in 2009–2010 by remainder money from another SII-funded project, Australian Research Repositories Online to the World (ARROW). The university library community, through CAUL, continued to support this project in 2011–2012, and the work has now been folded into the responsibility of another CAUL committee.

But the work APSR did developing country-wide technical solutions has not continued. Currently repositories around the country are being developed and maintained in isolation from one another.

Solution 8

An investment in current institutional repositories to increase functionality and interoperability will assist compliance with mandates (both Australian and international) and usability into the future. It will also enable a resolution of the metadata issue for country-wide harvesting by Trove.

Solution 9

We suggest revisiting support for country-wide technical development of solutions to common problems facing repositories throughout Australia. An example of a project that could be undertaken is the Funders and Authors Compliance Tool developed in the UK, SHERPA/FACT, which assists researchers to comply with open access mandates.

Dr Danny Kingsley
Executive Officer
Australian Open Access Support Group

Accessibility is more than making the paper OA


Proponents of open access generally agree that it brings many benefits, but discussions about the processes involved in achieving open access often stop at making the published research available. But what happens when issues of accessibility are considered?

A remarkable project is underway in Australia, spearheaded by the Australian chapter of a not-for-profit international development organisation, Engineers Without Borders (EWB). The Open Journal Project aims to explore and promote techniques to make academic information genuinely open and accessible – with a focus on groups that are often excluded from access to this type of information.

EWB is a volunteer organisation that sends volunteers overseas to work on the ground on projects with local non-government organisations. The Open Journal Project considers the needs of individuals and practitioners in these countries.

“The Project doesn’t finish the day you press publish – that’s when it starts,” explained Julian O’Shea, who is the Director of the EWB Institute, the education, research and training section of EWB, and is heading up the Project.

“We are thinking about what we can do to make the work more accessible.”

The EWB Institute is based in Melbourne, and is publishing a peer-reviewed journal as a pilot and case study in their work. The Journal of Humanitarian Engineering (JHE) is piloting innovations in open access, including multi-language access, developing country access, low-bandwidth websites, and disability-accessible content.

“We want to pilot innovations and share our experiences with doing this,” explained Julian. “We want to work out what is world’s best practice, do it and live it and show it is not too hard.”

The problem

EWB has noticed that practitioners overseas are under-served by the current publishing process. As an example, Julian said that the leading university in Cambodia does not have access to the largest database in the field of engineering.

The idea for the journal’s focus arose because the group saw there were very few journals that focused on experience, drawing on outcomes from development projects and disseminating that information.

“The aim of the journal is not to be published or cited, but to provide outcomes in communities,” explained Julian. “This is different to other research organisations as a metric of success. It gives us a different angle or lens.”

The group wanted to encourage this as a field of research in academia. They were not sure what level of interest there would be in the journal because from a purely technical point of view they are not publishing innovative technologies. Rather, the focus is on new ways of applying this technology.

“We have been surprised and pleased that the journal has been really positively responded to,” said Julian.

Open access

The journal is published open access, with no cost to the author or to the reader. It uses an open source program called Open Journal Systems to run the administration of the peer review and publication. All papers in the journal are available under a CC-BY 3.0 license.

“We have had no negative feedback at all from people wanting to publish in the journal,” said Julian. “People doing this kind of research don’t have any issue with making their work freely available.”

Accessibility – language issues

Academic papers can be difficult to read even for people within a field. They can become impenetrable to researchers in parallel fields. This problem is further exacerbated in an international environment, working with practitioners on the ground who may not have any tertiary education.

There are several issues with language. The first is the problem of making the technical reports understandable to the lay person. Often the papers in this area are very technical, including many equations, and can reach 300 pages.

To solve this problem, authors in the JHE are required to submit a two-page plain-language summary along with the formal paper. This means a project manager on the ground can make a decision about applying the technology or approach, and then pass the full paper on to the technical manager.

But many of these projects are in countries where English is not the primary language. The Project addresses this by making reports available in the language of the country they are targeted towards, translating the plain language guides both into the local language of most importance and into other widely spoken languages.

The Project relied on goodwill to obtain the translations, sending articles out to the world and asking for volunteers to translate them. The response through the website was good, with universities, companies and individuals offering to help.

The Project now has an approved translator list. The first time an article is translated it is sent to a native speaker for approval; once approved, the translator goes onto the list. The quality of translations has been very high, said Julian, with only one having to be sent back.

To date the plain language summaries have been translated into Indonesian, Italian, Portuguese, Russian, Hindi, Chinese, Spanish, Danish, Khmer and French. The number of languages is growing.

Accessibility – distribution

Another consideration is bandwidth. In many countries the internet connection is through a mobile telephone, which prevents the download of large documents. The Project decided to produce the journal in a low-bandwidth format, and this opened up new issues.

“Generally the journal system distributes through PDFs,” explained Julian. “The problem is it is all or nothing – if the download cuts out at 90% you get nothing.” So the Project looked at releasing HTML versions of the papers. This has reduced the size of the website to 4.3KB, with journal articles about 18KB each. “We can put about 80 journals onto a floppy disk,” he said.

The Project also has plans to further improve the distribution to remote areas. “We haven’t done it yet but we will have a system that says ‘here’s a postcard, send it to us and we will send the paper to you by donkey’”, said Julian.

Accessibility – inclusion

With a philosophy of sharing research, it was important to the project to provide versions of the papers in an accessible format for people with disabilities.

The choice of publishing HTML versions of papers assists people with vision impairment, as HTML works better with text-to-speech programs than PDFs do. In addition, the Project is being proactive about embedding helpful metadata within the document, such as descriptions of images.
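As a small illustration of the kind of accessibility check this implies (a generic sketch, not the Project’s actual tooling), the following scans an HTML article body for images that lack an `alt` description, using only the Python standard library:

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Collect <img> tags that lack a non-empty alt description."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        if tag == "img":
            attr = dict(attrs)
            if not attr.get("alt", "").strip():
                self.missing.append(attr.get("src", "(no src)"))

# A toy article body; in practice this would be the journal's HTML output.
page = (
    '<p>Results</p>'
    '<img src="pump-diagram.png" alt="Cross-section of a treadle pump">'
    '<img src="fig2.png">'
)

audit = AltTextAudit()
audit.feed(page)
print(audit.missing)  # ['fig2.png']
```

Running a check like this over each article before release is a cheap way to catch images that a text-to-speech reader would otherwise skip over silently.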

The Project has used guidelines from Vision Australia to release a large print edition of the papers. “The first one took about a couple of minutes – after that it was very simple,” said Julian. “That is what we are trying to show in this project: meeting a need for some people can be solved in literally two minutes.” The team has also produced Braille editions of the plain language guides.

Future plans

The project hopes to share its experience and inspire others. “We are doing this through the case study approach,” said Julian. “This is my goal – to be able to communicate better. I am an author – what can I do? I am a publisher – what can I do?”

The Open Journal Project is hoping to formally launch later this year. Meanwhile, Volume 2 issue 1 is about to be released.

Twitter handle – @OpenJournal

Dr Danny Kingsley
Executive Officer
Australian Open Access Support Group

Lost & found: challenges accessing government research

While there’s been much angst about the locking away of academic literature and sky-high fees for libraries to access academic journals, what about all the other sources of publicly-funded material? Why are they also not included in the brave new world of open access?

As a PhD student working in a reasonably cutting-edge area, grey literature* is my life-blood. And yet when it comes to some key sources who take money from public coffers for their work, getting access to material that should be public domain is tricky at best.

My area of interest – not-for-profit, non-government hospitals and large scale clinics in developing countries – has not generally been the focus of briefing papers and articles. But often these health facilities are included in documents for various reasons without being the focus. And given the dearth of directly relevant data, I’m prepared to take what I can get – or at least what I can find.

Government Double Standards?

While recipients of Australian Government funds for research now have an obligation to allow open access, the same can’t be said for government departments, which are encouraged, but not required, to make their work open access. 

Try checking AusAID’s website for their list of advertising projects or FOI procedures and requests or this page on consultation arrangements. The links lead you either to a blank page or an announcement that the information will be added when it becomes available.

And that’s just scratching the surface of the problem. A significant amount of research is now outsourced to specialist consulting firms or hubs at academic institutions. What that means in practice is we have no idea how much information isn’t making it onto indexes on government websites.

As part of my research I went to AusAID looking for any information they might be able to contribute. I should stress the staff I dealt with were professional and went out of their way to check for me. But the end result was a direction to an outside body, the Nossal Institute,  a health knowledge hub for AusAID. After I found some useful reports on Nossal’s website, I went back to the AusAID publications area and searched for them using keywords from the title. Nothing. I searched under health. Nothing. The document register similarly yielded nothing.

So what happens to members of the public who don’t know AusAID has a librarian to ring and ask for advice? Or who don’t make the connection between AusAID and Nossal, or any other body contracting to AusAID for that matter?

Your ability to track down information funded by the Australian taxpayer shouldn’t be dependent on how ‘in the know’ you are. Whether you’re a researcher or a tradie, these documents should be easy to access.

It’s in the Report

The sad reality is that even when you finally find the document you’re after, you probably won’t be getting the full picture. As anyone who has ever done research will tell you, there’s a lot that misses the final cut. What happens to that uncaptured knowledge?

When all the researchers were in-house, that institutional knowledge collected along the way stayed within the institution. But now, it dissipates out to a complex web of contractors and partner organisations. So what hope does anyone outside the organisation have of tracing detail that didn’t fit the word limit?

Make an Appointment

I imagined a world where I could ring the librarian, put in a formal request to get access to the library and come and thumb the physical pages, letting the Dewey decimal system lead me from one title to another and maybe even hit the jackpot with a title I would never have thought to search for. Or better still, in a face-to-face conversation with that gatekeeper of knowledge, the librarian might plant a thought that led me to the holy grail. Apparently not.

Along with the outsourcing of much research capacity, the AusAID library now resides off site, so even staff put in requests for books to be retrieved and brought in. While this makes sense for archival or rarely accessed material, there are some titles that could and should be read often. And yes, there are electronic books, but not everything comes in e-book format, not to mention the costs if every individual in an organisation paid for an e-book every time they wanted to read a few pertinent pages.

While I’ve focussed on AusAID here, I gather from anecdotal conversations with departmental staff and fellow researchers that this experience is far from rare. I’ve singled out AusAID purely because of my recent interaction with them as a source.

And now the good news…

I was preparing to be less than glowing about the World Bank’s open access. I started by writing that the World Bank had an obligation, given their highly specialised research, to make all their reports accessible for free.

As a frequent user of the site in the past, when I started searching the site again I went straight to the publications catalogue. I was appalled that it still cost $100 to get a report as crucial as African Development Indicators. The best they seemed to offer on the online bookshop was a ‘geographic discount’ for developing country purchasers.

What I missed in the catalogue was the announcement on the inside cover page that ‘most publications are now available for free online’. I ended up stumbling on to the Open Knowledge Repository area of the website which is well designed, easy to search and remarkably had the vast majority of reports published by the World Bank available to download free.

There are some exceptions in the open access policy. Open access applies to externally commissioned research only when that research was commissioned on or after July 1, 2012, which presumably leaves some research still being undertaken now exempt from the rules. However, given the volume of current and historical material available free, it seems the Bank has worked hard with its authors to get their consent to publish full reports online.

My one criticism is that this needs to be better flagged on the site, and particularly in the online bookshop. Over-familiarity with the old site led me to miss these changes – like many researchers I can be guilty of being a ‘mongrel reader’ and skipping straight ahead if I think I know a website well. The ‘read and share this’ button looked to me like a clunky piece of advertising rather than an invitation to download the research.

So the upshot is that global organisations like the World Bank, with their multitude of stakeholders, are making huge gains rapidly, while Australian government departments are still lagging behind. It’s time government departments similarly made significant inroads into genuine open access.

* Grey literature is defined as ‘ … document types produced on all levels of government, academics, business and industry in print and electronic formats that are protected by intellectual property rights … but not controlled by commercial publishers i.e., where publishing is not the primary activity of the producing body.’ –  12th International Conference on Grey Literature at Prague, December 2010

Belinda Thompson
PhD Scholar
Menzies Centre for Health Policy
Australian National University

Shall we sing in CHORUS or just SHARE? Responses to the US OA policy

Well things certainly have been moving in the land of the free since the Obama administration announced its Increasing Access to the Results of Federally Funded Scientific Research policy  in February.

In short, the policy requires that within 12 months US Federal agencies that spend over $100 million in research and development have to have a plan to “support increased public access to the results of research funded by the Federal Government”. (For a more detailed analysis of that policy see this previous blog.)

In the last couple of weeks two opposing ‘solutions’ have been proposed for the implementation of the policy.

In the publishing corner…

A coalition of subscription-based journal publishers has suggested a system called CHORUS – the Clearinghouse for the Open Research of the United States. The proposal is for a “framework for a possible public-private partnership to increase public access to peer-reviewed publications that report on federally-funded research”.

The plan is to create a domain called CHORUS.gov where publishers can deposit metadata about papers arising from relevant funding. When users want to find research they can look via CHORUS or through the funding agency’s site, and then view the paper through a link back to the publisher’s site.

While this sounds reasonable, the immediate questions that leap out are: why would this not be searchable through search engines, and what embargo periods would be held on the full text of publications? The limited amount of information available on the proposal does not seem to address these questions.

The Association of American Publishers released their explanation of the proposal ‘Understanding CHORUS’ on 5 June. There is not a great deal of other information available, although The Chronicle published a news story about it.

The Scholarly Kitchen blog – run by the Society for Scholarly Publishing – put up a post on 4 June 2013 with some further details. According to the post, the CHORUS group represents a broad-based group of scholarly publishers, both commercial and not-for-profit. There are 11 members on the steering group and many signatory organisations. The blog states the group collectively publishes the vast majority of the articles reporting on federally-funded research.

The time frame is fast, with plans including:

  • High-level System Architecture — Friday, June 14
  • Technical Specifications — Friday, July 26
  • Initial Proof-of-Concept — Friday, August 30

On this blog there is the comment that CHORUS is:

a much more modern and sensible response to the demand for access to published papers after a reasonable embargo period, as it doesn’t require an expensive and duplicative secondary repository like PubMed Central. Instead, it uses networked technologies in the way they were intended to be used, leveraging the Internet and the infrastructure of scientific publishing without diverting taxpayer dollars from research budgets.

Not surprisingly, the comment coming from commercial publishers about diverting taxpayer dollars from research budgets has attracted some criticism, not least from Stevan Harnad in his commentary “Yet another Trojan Horse from the publishing industry”:

And, without any sense of the irony, the publisher lobby (which already consumes so much of the scarce funds available for research) is attempting to do this under the pretext of saving “precious research funds” for research!

Harnad’s main argument against this proposal is that it represents an attempt to take the power to provide open access out of the hands of researchers so that publishers gain control over both the timetable and the infrastructure for providing open access.

Mike Eisen in his blog on the topic points out that taxpayers will end up paying for the service anyway:

publishers will without a doubt try to fold the costs of creating and maintaining the system into their subscription/site license charges – they routinely ask libraries to pay for all of their “value added” services. Thus not only would potential savings never materialize, the government would end up paying the costs of CHORUS indirectly.

Harnad notes that this is a continuation of previous activities by publishers to counter the open access movement, not least the 2007 creation of PRISM (the Partnership for Research Integrity in Science and Medicine), which grew from the Association of American Publishers employing a public relations expert to “counter messages from groups such as the Public Library of Science (PLoS)”.

In the university corner….

Three days after the Scholarly Kitchen blog, the development paper for a proposal called SHARE was released from a group of university and library organisations.

The paper for SHARE (the SHared Access Research Ecosystem) states the White House directive ‘provides a compelling reason to integrate higher education’s investments to date into a system of cross-institutional digital repositories’. The plan is to federate existing university-based digital repositories, obviating the need for central repositories.

The Chronicle published a story on the proposal on the same day.

The SHARE system would draw on the metadata and repository knowledge already in place in the institutional community, such as using ORCID numbers to identify researchers. There would be a requirement that all items added to the system include the correct metadata, such as the award identifier, the PI number and the repository in which the item sits.

This type of normalisation of metadata is something repository managers have already addressed in Australia, in response to the development of Trove at the National Library of Australia, which pulls information in from all Australian institutional repositories. More recently, there has also been agreement in Australia on the metadata field to be used to identify research arising from a grant, to comply with the NHMRC and ARC policies.
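To make this concrete, a repository might run a simple completeness check over incoming records before they join a federated system. This is only an illustrative sketch – the field names below are invented for the example, not part of any actual SHARE or Trove schema:

```python
# Hypothetical check that a repository record carries the grant-related
# metadata discussed above. Field names are illustrative only.
REQUIRED_FIELDS = {"award_identifier", "pi_number", "repository"}

def missing_fields(record: dict) -> set:
    """Return the set of required metadata fields absent from a record."""
    return REQUIRED_FIELDS - record.keys()

record = {
    "title": "Sample paper",
    "award_identifier": "ARC-DP130100000",   # made-up grant ID
    "repository": "Example University Digital Repository",
}
print(missing_fields(record))  # → {'pi_number'}
```

A check like this is the kind of normalisation step that lets harvesters such as Trove aggregate records from many repositories without manual clean-up.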

In the SHARE proposal, existing repositories, including subject based repositories, would work together to ensure metadata matching to become a ‘linked node’ in the system. The US has a different university system to Australia with a mixture of private and state-funded institutions. But every state has one or more state-funded universities and most of these already have repositories in place. Other universities without repositories would use the repository of their relevant state university.

A significant challenge in the proposal, as it reads, is the affirmation that for the White House policy to succeed, federal agencies will need universities to require of their Principal Investigators “sufficient copyright licensing to enable permanent archiving, access, and reuse of publication”. While this sounds simple, in practice it means altering university open access and intellectual property policies, and running a substantial educational campaign amongst researchers. This is no small feat.

The timeframe the SHARE proposal puts forward is in phases, with requirements and capabilities developed within 12-18 months, and the supporting software completed within another six months. So there is a two-year minimum period after implementation begins before the system would be operational. Given the policy issues involved, it could well take longer.

There has been less discussion about the SHARE proposal on open access lists, but this is hardly surprising, as more energy on these lists is being directed towards criticism of the publishers’ proposal.

So which one will win?

Despite the two proposals emerging within days of one another, the sophistication of both indicates that they have been in development for some time.

Indeed, the CHORUS proposal would have required lead-time to negotiate ‘buy-in’ from the different publishers. On the other hand, the SHARE proposal includes a complex flow chart on page 4 which appears to be the equivalent of the ‘High-level System Architecture’ the CHORUS proposal states would be ready on Friday 14 June. According to a post on the LibLicense discussion list, SHARE was developed without awareness of CHORUS, so it is not an intentional ‘counterattack’.

There are glaring differences between the two proposals. SHARE envisions text and data mining as part of the system, two capabilities missing from the CHORUS proposal. SHARE also provides searching through Google rather than requiring the user to go to the system to find materials as CHORUS seems to be proposing. But as Peter Suber points out: “CHORUS sweetens the deal by proposing OA to the published versions of articles, rather than to the final versions of the author’s peer-reviewed manuscripts”.

So which will be adopted? As one commentator said, CHORUS will work because publishers have experience setting up this kind of system, whereas SHARE does not have a good track record in this area. They suggest that:

A cynical publisher might say: Let’s fight for CHORUS, but let’s make sure SHARE wins. Then we (the publishers) have the best of all worlds: the costs of the service will not be ours to bear, the system will work haphazardly and pose little threat to library subscriptions, and the blame will lie with others.

This is an area to watch.

Dr Danny Kingsley
Executive Officer
Australian Open Access Support Group

Altmetrics and open access – a measure of public interest

Researchers, research managers and publishers are increasingly required to factor into their policies and practices the conditions by which publicly funded research must be made publicly available. But in the struggle for competitive funding, how can researchers provide tangible evidence that their outputs have not only been made publicly available, but that the public is using them? Or how can they demonstrate that their research outputs have reached and influenced those whose tax dollars have helped fund the research?

Traditional impact metrics

The number of raw citations per paper or an aggregate number, such as the h-index, are indicators of scholarly impact, in that they reveal the attribution of credit in scholarly works to prior scholarship. This attribution is normally given by scholars in peer-reviewed journals, and harvested by citation databases. But they do not provide an indication of public reach and influence. Traditional metrics also do not provide an indication of impact for non-traditional research outputs, such as datasets or creative productions, or of non-journal publications, such as books and media coverage.

Public impact for all types of research outputs could always be communicated as narrative or case studies. These forms of evidence can be extremely useful, perhaps even necessary, in building a case of past impact as an argument for future funding. However, impact narratives and case studies require sources of evidence to support their impact claims. An example of how this can be achieved is in the guidelines for completion of case studies in the recent Australian Technology Network of universities (ATN) / Group of Eight (Go8) Excellence in Innovation in Australia impact assessment trial.

One promising source of evidence is the new suite of alternative metrics or altmetrics that have been developed to gauge the academic and public impact of digital scholarship, that is, any scholarly output that has a digital identifier or online location and that is accessible by the web-public.

The advent of altmetrics

Altmetrics (or alternative metrics) is a term aptly coined in a tweet by Jason Priem (co-founder of ImpactStory). Altmetrics measure the number of times a research output gets cited, tweeted about, liked, shared, bookmarked, viewed, downloaded, mentioned, favourited, reviewed, or discussed. These numbers are harvested from a wide variety of open web services that count such instances, including open access journal platforms, scholarly citation databases, web-based research sharing services, and social media.

The numbers are harvested almost in real time, providing researchers with fast evidence that their research has made an impact or generated a conversation in the public forum. Altmetrics are quantitative indicators of public reach and influence.

The monitoring of one’s impact on the social web is not an exercise in narcissism. Altmetrics enable the creation of data-driven stories for funding providers and administrators. Being web-native, they also facilitate the fleshing out of those stories by providing links to the sources of the metrics. Researchers can see who is talking about their research, what they are saying about it, and even how they intend to use it for various scholarly, industry, policy and public purposes. In this way, researchers can find potential collaborators and partners, and gain constructive feedback from those interacting with the research.

Altmetrics also provide a democratic process of public review, in which outputs are analysed and assessed by as many students, researchers, policy makers, industry representatives, and members of the public as wish to participate in the discussion. Altmetrics provide a more comprehensive understanding of impact across sectors, including the public impact of publicly funded research.

Altmetrics and open access

There is an interesting relationship between altmetrics and open access. One could even refer to altmetrics as open metrics. This is firstly because altmetrics draw on open data sources. Altmetrics services access and aggregate the impact data for a research artefact, normally via an application programming interface (API) made available by the source. Altmetrics services in turn provide APIs for embedding altmetrics into institutional repositories or third-party systems. Secondly, open access research outputs that are themselves promoted via social web applications enjoy higher visibility and accessibility than those published within the commercial scholarly communication model, increasing the prospect of public consumption and engagement.
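As a rough illustration of how these APIs are used, the sketch below looks up a DOI against an altmetrics service. The URL pattern follows the style of Altmetric’s public v1 API, but treat it as an assumption and check the provider’s current documentation before relying on it:

```python
# Illustrative sketch of querying an altmetrics service by DOI.
# The endpoint pattern is assumed from Altmetric's public v1 API docs;
# verify against current documentation before use.
import json
import urllib.request

def altmetric_url(doi: str) -> str:
    """Build the lookup URL for a given DOI."""
    return "https://api.altmetric.com/v1/doi/" + doi

def fetch_record(doi: str) -> dict:
    """Fetch and parse the JSON altmetrics record for a DOI."""
    with urllib.request.urlopen(altmetric_url(doi)) as resp:
        return json.load(resp)

# Building the URL involves no network access:
print(altmetric_url("10.1371/journal.pone.0000000"))
```

A repository could call something like `fetch_record` for each deposited item and display the returned counts alongside views and downloads.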

Altmetrics (also known as article level metrics or ALMs) are seen as complementary to open access. The PLOS Article Level Metrics for Researchers page lists some of these complementarities:

  • Researchers can view and collect real-time indicators of the reach and influence of outputs, and share that data with collaborators, administrators and funders
  • Altmetrics empowers researchers to discover impact-weighted trends and innovations
  • Researchers can discover potential collaborators based on the level of interest in their work
  • High impact datasets, methods, results and alternative interpretations are discoverable
  • Dissemination strategies and outlets can be tracked, evaluated and reported on
  • Evaluation of research is based on the content, as opposed to the container (or journal)
  • Research recommendations are based on collective intelligence indicators

The April/May issue of the ASIS&T Bulletin contains a special section on altmetrics, in which several articles touch on the complementarity between altmetrics and open access. These articles show altmetrics:

  • Provide open source social impact indicators that can be embedded into CVs
  • Enable a public filtering system and track social conversations around research
  • Provide evidence of access by countries that cannot afford expensive journals
  • Provide authors with a more comprehensive understanding of their readership
  • Offer repository managers additional metrics for demonstrating the impact of open access
  • Provide additional usage data for collection development and resource planning exercises
  • Provide supplementary impact indicators for internal reviews and funding applications
  • May be used as quantitative evidence of public impact for research evaluation exercises
  • Provide a better reflection of the usage and impact of web-native outputs

The last point is particularly salient. The new web-based scholarly communication model is one of sharing findings as they occur, interaction and evaluation by interested parties, and subsequent conversations leading to future collaborations and revised or new findings. And altmetrics provide us with an understanding of the impact received at each point in the cycle.

Providers of altmetrics

The following services are good places to start to monitor your altmetrics:

Altmetric and ImpactStory both offer free widgets that can be embedded into repositories, and ImpactStory has the further advantage that impact “badges” can be embedded into CVs. Altmetric also offers a free bookmarklet that can be added to your bookmarks and used to get altmetrics on articles with Digital Object Identifiers (DOIs) or identifiers in open databases such as PubMed Central or arXiv. The bookmarklet only works in Chrome, Firefox or Safari. Plum Analytics probably has the widest coverage of altmetrics sources, but is a paid service. Both Altmetric and Plum Analytics offer commercial tools that provide comparative and group reports.

The best way to engage with altmetrics is to jump right in and have a play. You will be amazed at how quick and easy it is to use the tools and start generating metrics for your research outputs.

Repository administrators can embed altmetrics at the article level within institutional repositories to complement traditional metrics, views and downloads. Some research information management systems, such as Symplectic Elements, that are capable of generating reports on publication activity and impact, also include article-level altmetrics alongside traditional citation metrics.

Pat Loria is the Research Librarian at the University of Southern Queensland. His Twitter handle is @pat_loria.

Walking in quicksand – keeping up with copyright agreements

As any repository manager will tell you, one of the biggest headaches for providing open access to research materials is complying with publisher agreements.


Most publishers will allow some form of an article published in their journals to be made open access. There is a very useful site that helps people work out what the conditions are for a given journal or publisher, called Sherpa RoMEO*.

In many institutions the responsibility for copyright checking is taken by the repository manager (rather than requiring the author to do it), and usually the workflow includes some or all of:

  • Checking Sherpa RoMEO for local journals
  • Consulting (and adding to) an internal database
  • Looking at the journal/conference/publisher webpages
  • Locating and consulting the Copyright Transfer Agreement the author signed
  • Contacting the publisher directly for permission if the OA position is not able to be determined using any of these resources.
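The workflow above is essentially a lookup cascade: consult each source in order and stop at the first one that gives a definite answer. As a purely hypothetical sketch (source names and return values are invented for illustration):

```python
# Hypothetical sketch of the copyright-checking cascade described above.
# Each source is a (name, lookup) pair; lookup returns a position string
# or None if the source has no answer for that journal.
def check_oa_position(journal, sources):
    """Return (source_name, position) from the first source with an answer."""
    for name, lookup in sources:
        position = lookup(journal)
        if position is not None:
            return name, position
    # No source could determine the position: fall back to the publisher.
    return None, "contact publisher directly"

sources = [
    ("Sherpa RoMEO", lambda j: {"J. Example": "green"}.get(j)),
    ("internal database", lambda j: {"J. Local": "12-month embargo"}.get(j)),
    ("publisher website", lambda j: None),
]
print(check_oa_position("J. Local", sources))
# → ('internal database', '12-month embargo')
```

Real workflows are messier, of course, because each "lookup" is a manual step against a website or a signed agreement, but the fall-through logic is the same.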

One problem repository managers face is that publishers sometimes change their position on open access. Often there is no public announcement from the publisher, especially when the change imposes more restrictions on ‘green’ open access. This is where the blogosphere and discussion lists (such as the CAIRSS List in Australia) are invaluable in keeping practitioners on top of new issues in the area.

Some recent cases where publishers set more restrictions on ‘green’ open access include Springer and IEEE.

[*SHERPA stands for Securing a Hybrid Environment for Research Preservation and Access, and RoMEO stands for Rights Metadata for Open archiving.]

IEEE

The Institute of Electrical and Electronics Engineers (IEEE) is the biggest organisation in these fields. They run many high-status conferences and publish the proceedings. Because authors in this field have traditionally been expected to provide camera-ready copy for conference proceedings, it has long been accepted practice for authors to make copies of their work available on their own webpages or in repositories. And until December 2010 IEEE sanctioned that (as long as the repository attached a specific notice).

Then on 1 January 2011 IEEE changed the rules and said people could no longer put up the Published Version. They were still allowed to put up the Submitted Version (preprint) or the Accepted Version (postprint). The policy is on the IEEE website here. While this still allows IEEE works to be made available in compliance with the recent Australian mandates, a recent blog argues that the re-use restrictions IEEE imposes on the Accepted Version of its publications mean that the works are not open access in compliance with many overseas mandate requirements.

Springer

Springer also recently changed their rules. They were previously a fully ‘green’ publisher which meant authors were allowed to make their Accepted Versions available immediately on publication. But this has recently changed.

According to their Self-Archiving Policy: “Authors may self-archive the author’s accepted manuscript of their articles on their own websites. Authors may also deposit this version of the article in any repository, provided it is only made publicly available 12 months after official publication or later. …”

So now there is a 12 month embargo on making the Accepted Version available. It would seem that Springer have altered their position in response to the introduction of the RCUK mandate.

Indeed many other publishers have made announcements in response to that mandate. These range in form across videos from BioMed Central to announcements such as from Oxford University Press to a blog post from SAGE.

There is some argument in discussion lists that the new Springer position is contradictory – that institutional webpages are effectively the author’s website, given the way many repositories are embedded with the staff pages for institutions. This simply indicates the complexity of these agreements and how challenging the interpretation of them can be even for people whose work centres in this area.

And this opens up a new, emerging issue.

Separate publisher agreements

So far this blog has been talking about publisher agreements with authors. But some publishers’ agreements state that if authors are publishing research that results from a funder that has an open access mandate, there are different rules. Two very prominent examples have been Elsevier and Wiley. Generally these different rules require a ‘separate agreement’ between the funder and the publisher. There is more information about separate agreements here.

Follow these links to see the arrangements Wiley and Elsevier have made to manage the RCUK mandate.

Emerald

Emerald is another publisher which has recently changed its position on open access, in this case only for deposits which are mandated. For these publications Emerald have recently adopted a 24 month embargo. The text on their site says: “if a mandate is in place but funding is not available to pay an APC [article processing charge], you may deposit the post-print of your article into a subject or institutional repository and your funder’s research catalogue 24 months after official publication”.

Emerald say they are prepared to “work in partnership” to “establish Open Access agreements that support mutual interests”. One such agreement is with the International Federation of Library Associations and Institutions (IFLA) which permits the deposit of an Accepted Version (post print) with only a nine month embargo.

So if an author publishes through Emerald they are subject to one of three possible copyright agreements, depending on whether their research uses funds with an associated mandate and whether they are publishing in an IFLA journal.

Library agreements?

To add to the confusion, it appears there is a third form of agreement relating to copyright permissions beyond the copyright transfer agreement the author has signed and any separate agreement that may be in place as a result of a mandate.

It seems that publishers are now approaching libraries directly over the issue of access to publications. That is, they are seeking to sign an agreement directly with the library.

According to discussions online, it seems there are two types of clauses attached to institutional licence agreements – either a new clause added to existing contracts at renewal time, or a separate agreement that serves as an addendum to the contract between renewals. It is unclear whether these agreements would override the copyright transfer agreement the author signed. Having two agreements adds to the confusion and raises the question: which one is binding?

I am not privy to what is potentially being agreed to in these new clauses. It is almost a moot point. The issue is that if institutions sign these agreements then the waters are further muddied.

Repository managers then potentially have three processes they have to check:

  1. The author’s copyright transfer agreement – using the workflows mentioned above
  2. They need to know if a particular work is the result of a mandate, and if so determine if it is published with a publisher that requires a separate agreement, and establish whether an agreement is in place
  3. They might also need to be on top of the license agreements or extra clauses their library has with individual publishers.

It is complicated and time consuming.

Implications

These changing rules have a potentially profound effect on the rate of the uptake of repositories in some institutions. Repository structures and associated workflows vary dramatically. In some cases the institution maintains one repository serving both as an open access collection and a publications reporting database; others have separate repositories for different purposes.

And there can be big variations in the way publications are recruited for the repository.

In the majority of cases there is an allocated repository manager who takes responsibility for checking copyright compliance of deposited items. But some institutions expect their researchers to do this and to indicate that they have done so when they deposit their papers to the repository. This adds a level of almost insurmountable complexity to what some have argued is a simple matter of a ‘few keystrokes’.

While researchers *should* be aware of the conditions of the copyright transfer agreement they have signed with their publisher, in reality many are not. Often they do not even have a copy of what they have signed. While this oversight can be managed through the use of Sherpa RoMEO (if the researcher is, indeed, aware of the service), it is unrealistic to expect an individual researcher to also:

  • know whether their institutional library has signed an external agreement,
  • know whether their work is the result of funding that has a mandate associated with it, and
  • know whether their publisher has a special agreement in relation to that mandate.

These changing copyright arrangements mean that the process of making research openly accessible through a repository is becoming less and less able to be undertaken by individuals. By necessity, repository deposit is becoming solely the responsibility of the institution.

Dr Danny Kingsley
Executive Officer
Australian Open Access Support Group

Journal editors take note – you have the power

Some interesting news has come across my desk today, both as an open access advocate and someone who is based in a library.

The editorial board of the Journal of Library Administration has resigned in protest at the restrictive licensing policy imposed by its publisher, Taylor & Francis (T&F). Brian Mathews includes the text of the resignation in his blog here.

They might not be aware of it, but the editorial board are following in the footsteps of other editorial boards. A webpage put together by the Open Access Directory called Journal declarations of independence lists examples of “the resignation of editors from a journal in order to launch a comparable journal with a friendlier publisher”. There are 20 journals listed on the page, with the timeline running from 1989 to 2008.

What is a licensing policy?

For those people new to open access, a quick explainer. This is referring to the restrictions the publisher is imposing on what an author can do with copies of their published work. T&F say on their author pages that authors who have published work in a T&F journal are limited in what they can do with copies of the work:

  • Authors are not allowed to deposit the Publisher’s Version

This is fine – the publisher does manage the peer review process and provide the electronic distribution platform. They have also invested in the layout and design of the page and the production of the downloadable PDF. Most publishers do not allow the Published Version to be made available.

  • Authors are allowed to put a copy of the Submitted Version (this is the version sent to the journal for peer review) into their institution’s web-based repository. In some disciplines this is called the pre-print. T&F rather confusingly call this the ‘Author’s Original Manuscript’.

So far so good – it seems quite generous. But in many disciplines, sharing the Submitted Version is inappropriate because it may contain errors which could reflect badly on the author, or even in some instances be dangerous to be made public without correction.

  • Authors are allowed to put a copy of the Accepted Version (the author’s post-peer reviewed and corrected version) into the institutional repository. T&F call this the ‘Author’s Accepted Manuscript’.

Again this seems generous. But the author can only do this “twelve (12) months after the publication of the Version of Scholarly Record in science, engineering, behavioral science, and medicine; and eighteen (18) months after first publication for arts, social science, and humanities journals, in digital or print form”.

Bear in mind the peer review and amendment process can take many months and there is often a long delay between an article’s acceptance and publication. This means the work is only able to be made open access two to five (or more) years after the original research was done.
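The arithmetic behind that "two to five years" claim is easy to sketch. The dates and delays below are made up for illustration; only the 18-month H&SS embargo figure comes from the T&F terms quoted above:

```python
# Illustration of embargo timing: acceptance, a delay to publication,
# then the embargo period before the Accepted Version can be deposited.
# All specific dates here are invented for the example.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day-of-month kept simple)."""
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

accepted = date(2011, 3, 1)              # paper accepted after peer review
published = add_months(accepted, 10)     # ten-month wait for publication
open_access = add_months(published, 18)  # 18-month embargo for H&SS titles
print(open_access)  # → 2013-07-01
```

Add a year or two of research and writing before acceptance, and the gap between doing the work and being able to share it openly stretches well past two years.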

This is what the Journal of Library Administration editors were originally protesting about, and then they took exception to the suggestion by T&F that authors could take up the open access ‘option’ for a fee of USD$2,995 per article. This amount is far beyond the reach of most H&SS scholars.

The lure of the commercial publisher

Talking to stressed, overworked editors it is easy to see why allowing a commercial publisher to take over the responsibility of publishing their journal is attractive.

But there is a catch. For a start, in the conversations I have had to date with journal editors who have ‘sold’ their title to a commercial publisher, it seems there is no exchange of money for ‘goodwill’ in the way there would be for the sale of any other business.

In addition, when a commercial publisher owns a journal title, it means they impose their own copyright transfer agreements – which determine what the authors are able to do with their work. This is often more restrictive than what the independent editorial team was allowing.

But the most dramatic difference to operations when a previously independent journal is bought by a commercial publisher is the amount they charge for subscriptions. For example, the Journal of Australian Studies has a subscription which comes as part of the membership to the International Australian Studies Association (InASA). Members receive other benefits such as discounts to conferences. It costs AUD105 each year.

But if you consult the journal’s page on the T&F website, the online subscription is USD781 and the Print & Online subscription is USD893.

It is not that T&F are the only ones, mind you. The Journal of Religious History is published by Wiley. Members of the Religious History Association can join for AUD45, and receive the print and online version of the journal. But subscriptions through Wiley range from USD593 for an institutional Print & Online subscription, to USD76 for a personal Print & Online subscription.

And when you start looking at Wiley’s permissions they are even more restrictive than T&F. Again the author can archive the Submitted Version, but for the Accepted Version there is an embargo of 0-24 months ‘depending on the journal’ and even then written permission from the publisher is required (good luck with that).

So what can journal editors do?

For a start remember that you are crucial to the success of a journal. Publishers rely on their editors absolutely to produce journals, which means you come into negotiations from a position of strength.

So if you are an editor of an independent journal and are considering ‘selling’ your journal to a commercial publisher the issues worth consideration include:

  • What are the restrictions the publisher will place on the re-use of the work published in the journal? Do they align with your current (or intended future) position? Are they prepared to negotiate these with you?
  • What will the subscription cost be to the journal? Does that mean some readers will not be able to afford subscriptions?

If you are the editor of a journal that is currently being published by a commercial publisher:

  1. Check out the restrictions imposed on your authors by looking the journal up in Sherpa RoMEO
  2. If those restrictions do not meet with the philosophy of the dissemination of your journal, consider contacting the publisher to request a less restrictive permissions policy

There is evidence that this has worked in the past. On 1 November 2011, T&F announced a two year pilot for Library and Information Science Journals, meaning that authors published in 35 library and information science journals have the right to deposit their Accepted Version into their institutional repository.

It seems that library journals have a reasonable track record on this front. In March this year, Emerald Group Publishing Limited announced a ‘special partnership’ with the International Federation of Library Associations and Institutions (IFLA). Under this agreement, papers that have their origins in an IFLA conference or project and are published in one of Emerald’s LIS journals can become open access nine months after publication.

Moving your journal to an online open access platform

If you are the editor of an independent journal and you are considering moving online, some questions to consider include:

  • Who is your readership and how do they read the journal? In some cases the journal is read in lunchrooms in hospitals, for example, so a printed version is necessary
  • Can the journal go exclusively online and assist readers by providing an emailed alert for each issue?

There are many tools to assist journal editors manage the publication process. The Open Journal System (OJS) was developed by the Public Knowledge Project, and is an open source (free to download) program to manage journals.

Australian universities host many open access journals (listed here) with a considerable portion published using OJS. Most of these journals are run with some subsidy from the institution, and do not charge authors article processing charges. From the researcher’s perspective they are ‘free to publish, free to read’.

In addition, the National Library of Australia runs the Open Publish program which hosts many open access journals.

If you have questions about this and want to know more please leave a reply to this post.

Dr Danny Kingsley
Executive Officer
Australian Open Access Support Group

Centrally supported open access initiatives in Australia

Australia has a good track record in relation to open access, from hosting one of the first country-wide thesis repositories in the world to supporting the development and management of institutional repositories. While initially much of this work was pioneered by the university libraries, the Australian Government has made significant commitments more recently.

This blog post gives a short rundown of some of the open access initiatives Australia has seen since 2000, starting with the most recent developments – open access mandates from the two main funding bodies.

Funding mandates

In 2012 the National Health and Medical Research Council (NHMRC) announced its revised policy on the dissemination of research findings, effective 1 July 2012. The Australian Research Council (ARC) released its Open Access Policy on 1 January 2013. Both policies require that any publications arising from a funded research project must be deposited into an open access institutional repository within a 12 month period from the date of publication.

There are two minor differences between the policies. The NHMRC policy relates only to journal articles, whereas the ARC policy encompasses all publication outputs. In addition, the NHMRC mandate affects all publications as of 1 July 2012, but the ARC mandate affects only the outputs of research funded in 2013. Researchers are also encouraged to make accompanying datasets available open access.

Enabling open access

Both the NHMRC and ARC mandates specifically require deposit of metadata (and ideally full text of the work) into the researchers’ institutional repository. This position takes advantage of the existing infrastructure already in place in Australian institutions.

All universities in Australia host a repository, many of them developed with funds the government provided through the Australian Scheme for Higher Education Repositories (ASHER). This scheme, which ran from 2007 to 2009, was originally intended to assist with the reporting requirements of the Research Quality Framework (RQF) research assessment exercise, which became Excellence in Research for Australia (ERA). The ASHER program had the aim of “enhancing access to research through the use of digital repositories”.

Australian repositories run on software platforms ranging from EPrints, DSpace, ARROW (a VTLS commercial front end to Fedora), to ProQuest Digital Commons (bepress). A full list of repository software platforms for Australian universities is here.

Support for open access in Australia

Repositories in Australia are generally managed by libraries and are supported by an ongoing, organised community. In 2009-2010, the Council of Australian University Librarians (CAUL) established the CAUL Australian Institutional Repository Support Service (CAIRSS), and when central government funding for the service ended, the university libraries agreed to continue it through member contributions. CAIRSS ended in December 2012; however, its email list continues as a strong community of practice.

In October 2012 the Australian Open Access Support Group launched, commencing staffed operations in January 2013. The group aims to provide advice and information to all practitioners in the area of open access.

Open theses

Historically, Australia has a strong track record of providing access to research. The Australasian Digital Theses (ADT) program began in 2000 as a system for sharing PhD theses over the internet. The ADT was a central registry and open access display of theses, which were held in self-contained repositories at each university using a shared software platform developed for the purpose. The first theses were made available in July 2000. In 2011, as all theses were by then being held in universities’ institutional repositories, the ADT was decommissioned. At that time the number of full text Australian theses available in repositories was estimated at over 30,000.

Open data

The Australian Government has made a significant commitment to the development of a successful digital economy underpinned by an open government approach, aimed at providing better access to government held information and also to the outputs of government funded research.

The Australian National Data Service (ANDS) is federally funded to the tune of tens of millions of dollars. It is responsible for supporting public access to as much publicly funded research data as can be provided within the constraints of privacy, copyright, and technology. To provide a platform for sharing information about data, ANDS has developed Research Data Australia, a discovery service for data resulting from Australian research: a national registry of searchable web pages describing Australian research data collections that supplement published research. Records in Research Data Australia link to the host institution, which may (or may not) provide a direct link to the data.

Open government

The work of ANDS reflects the broader government position in Australia of making public data publicly available. The Declaration of Open Government was announced on July 16, 2010. This policy position is in the process of practical implementation across the country, for example by providing access to information about the locations of government services. The level of engagement varies between government areas and between levels of government.

Another government initiative is the Australian Governments Open Access and Licensing Framework (AusGOAL), which emphasises open formats and open access to publicly funded information and provides a framework to facilitate open data from government agencies. In addition to providing information and fora for discussion, it has developed a licence suite that includes the Australian Creative Commons Version 3.0 licences.

Other publicly funded institutions in Australia also share their research through repositories. The Commonwealth Scientific and Industrial Research Organisation (CSIRO) has a Research Publications Repository. In addition, some government departments are making their research available, such as the Australian Institute of Family Studies and the Australian Institute of Health and Welfare.

Dr Danny Kingsley
Executive Officer
Australian Open Access Support Group

Recent US developments in open access

Welcome to the Australian Open Access Support Group blog. We hope this will be a place to explore ideas and happenings in open access in Australia. Of course, we live in a global world, so it is important to understand what is happening elsewhere and how it might affect us here.

And things certainly are happening.

US Policy – Increasing Access to the Results of Federally Funded Scientific Research

On February 22, the Obama Administration released a new policy, “Increasing Access to the Results of Federally Funded Scientific Research”, which discusses the benefit to society of having open access to government data and research. It requires that, within 12 months, Federal agencies that spend over $100 million on research and development develop a plan to “support increased public access to the results of research funded by the Federal Government”.

The policy is clear that it incorporates both scientific publications and digital scientific data, and limits embargo periods to twelve months post-publication.

The policy has had an instant effect, at least in the registering of policies. Stevan Harnad yesterday noted that 24 policies had been added to ROARMAP (which lists open access policies) within four days of the policy being announced.

Similarities with Australian mandates

The interesting thing from the Australian perspective is that this policy appears to mirror the NHMRC and ARC policies in that it requires research metadata to be placed in a repository.

The policy requires agencies to “Ensure full public access to publications’ metadata without charge upon first publication in a data format that ensures interoperability with current and future search technology. Where possible, the metadata should provide a link to the location where the full text and associated supplemental materials will be made available after the embargo period”.
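In practice, this kind of interoperable metadata is what institutional repositories already expose through the OAI-PMH harvesting protocol, typically in Dublin Core format. A minimal sketch of such a record is below; the namespaces are the standard OAI-PMH ones, but all titles, identifiers and URLs are hypothetical:

```xml
<!-- A minimal OAI-PMH Dublin Core record of the kind repositories
     already expose; the titles, identifiers and URLs are hypothetical. -->
<oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
           xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Example article title</dc:title>
  <dc:creator>Researcher, A.</dc:creator>
  <dc:date>2013-01-15</dc:date>
  <dc:type>journal article</dc:type>
  <!-- Link to where the full text will sit after any embargo -->
  <dc:identifier>http://repository.example.edu.au/handle/1234/5678</dc:identifier>
  <dc:rights>Embargoed until 2014-01-15</dc:rights>
</oai_dc:dc>
```

Because OAI-PMH is a harvesting standard, metadata in this form is interoperable with “current and future search technology” more or less by design.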

Given the policy provides a series of suggestions about where repositories ‘could’ be housed, it seems the repository infrastructure in the US is less developed than in Australia. Presumably the repositories could be a way of monitoring progress, although the policy indicates that monitoring will be through twice yearly reports the agencies will have to provide for two years after their plan becomes effective.

Differences with the Australian mandates

While the intent of the policies is similar, the US policy relates only to larger Federal agencies (which may include some universities – note that the US higher education and research funding model is very different from Australia's).

It is also a policy that asks the agencies to develop a *plan* to open up access within 12 months, so we might not see action for some time. Experience has shown that setting up open access technology and work processes can be time-consuming.

Something that strikes me as interesting is that the US policy states the material to be made open access needs to be in a form that allows users to “read, download, and analyze in digital form”. This relates to the concept of text and data mining, a subject of many discussions recently. Indeed, some people argue that if an item cannot be text or data mined then it is not actually open access. One of the big proponents of text and data mining is Cambridge University chemist Peter Murray-Rust.

You cannot easily text mine a PDF, and the vast majority of works in Australian repositories, at least, are PDFs. This issue is something to watch into the future.
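The contrast is easy to demonstrate. A structured full text (the element names below are loosely modelled on JATS and are purely illustrative) can be mined with a simple tree walk, whereas a PDF stores positioned glyph runs that must be heuristically, and lossily, reassembled into sentences before any mining can begin:

```python
# Why text mining favours structured formats over PDF: in an XML full
# text every paragraph is an explicit, addressable unit, so extraction
# is a trivial tree walk. A PDF instead stores positioned glyph runs
# that must be re-joined by layout heuristics. Element names below are
# loosely modelled on JATS and are purely illustrative.
import xml.etree.ElementTree as ET

ARTICLE_XML = """
<article>
  <front><article-title>Open access and text mining</article-title></front>
  <body>
    <sec>
      <p>Structured full text can be mined directly.</p>
      <p>Each paragraph is an addressable unit.</p>
    </sec>
  </body>
</article>
"""

def extract_paragraphs(xml_text: str) -> list[str]:
    """Return the text of every <p> element, in document order."""
    root = ET.fromstring(xml_text)
    return ["".join(p.itertext()).strip() for p in root.iter("p")]

print(extract_paragraphs(ARTICLE_XML))
# A PDF of the same article would instead yield fragments like
# ("Structured full", x=72, y=640) that a miner must stitch together.
```

The same tree walk works on any repository item deposited as XML; for a PDF, an extraction step (with no guarantee of clean sentences) has to come first.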

Odd components of the policy

The embargo period of 12 months doesn’t appear to be set in stone. I am unsure what this paragraph means in practice: “provide a mechanism for stakeholders to petition for changing the embargo period for a specific field by presenting evidence demonstrating that the plan would be inconsistent with the objectives articulated in this memorandum”.

Given that ‘stakeholders’ include publishers, I’m sure they could produce ‘evidence’ that somehow supports the argument that making work available does not benefit society.

Another puzzling statement is: “Agency plans must also describe, to the extent feasible, procedures the agency will take to help prevent the unauthorized mass redistribution of scholarly publications.”

I’m not sure what that means. Isn’t making something openly accessible ‘mass distribution’? And surely placing proper licences on open access work – like Creative Commons licences – would resolve how material may be redistributed? Scholarly communication norms require attribution within other scholarly articles, regardless of the distribution method. So this statement strikes me as completely at odds with the remainder of the document.

People power

The Increasing Access to the Results of Federally Funded Scientific Research policy is partially the result of a ‘We the People’ petition in May 2012, which received 65,704 signatures – more than double the 25,000 signatures in 30 days required for a petition to be considered by the White House. As an interesting aside, in mid-January the rules were changed so that petitions now need 100,000 signatures before receiving an official response from the White House.

This policy is NOT the same thing as FASTR

It is easy to get these mixed up. The Fair Access to Science and Technology Research Act (FASTR) was introduced in both the House of Representatives and the Senate in mid-February. It follows three previously unsuccessful attempts to pass the Federal Research Public Access Act (FRPAA).

FASTR is similar to the new Increasing Access to the Results of Federally Funded Scientific Research policy in that it is also restricted to agencies with research budgets of more than $100 million and it requires placement of work in a repository in a form that allows for text or data mining. It differs in that it has an embargo of only 6 months.

The Bill has not yet passed through the US legislative system, and there are activities online encouraging people to support it. The Association of American Publishers has described FASTR as “different name, same boondoggle” and as “unnecessary and a waste of federal resources”.

Not everyone is cheering

Mike Eisen, an editor and founding member of PLoS, argues that the Increasing Access to the Results of Federally Funded Scientific Research policy represents a missed opportunity. The thrust of his argument is that the 12 month embargo in the 2008 NIH mandate was seen by some open access activists as a starting point that would reduce over time; this new policy instead cements the 12 month embargo across the whole of government.

He is specifically angry that the government was so successfully lobbied by the publishers, saying the authors of the policy fell for publishers’ arguments “that the only way for researchers and the public to get the services they provide is to give them monopoly control over the articles for a year – the year when they are of greatest potential use.”

If the publishers have been successful in their lobbying, it might explain why the Association of American Publishers’ response to the policy was almost the polar opposite of their response to (the very similar) FASTR. The AAP welcomed the policy, describing it as a “reasonable, balanced resolution of issues around public access to research funded by federal agencies”. Interesting.

Dr Danny Kingsley
Executive Officer
Australian Open Access Support Group