FoRs and the Alleged “Gaming” of ERA

Michelle Duryea has been the Manager of Research Quality and Policy at Edith Cowan University (ECU) for over four years. She is responsible for the University’s ERA and HERDC submissions, research management systems, research policy development, reviews of research centres, and research performance evaluation, reward and reporting.

Prior to working at ECU, Michelle was the Associate Director of Research Evaluation at the Australian Research Council, directly involved in developing the original ERA policy and guidelines.

Before working for Government, she occupied the position of the Senior Policy Officer for the Australian Technology Network of Universities. Michelle is on Twitter as @MishDuryea. Her ORCID is 0000-0001-9514-6499


Numbers (Photo by supercake: http://www.flickr.com/photos/supercake)

Following a recent Research Whisperer post on ‘What’s a FoR?’, a Twitter conversation arose regarding the way Field of Research codes are assigned to publications for Government reporting purposes, specifically ERA (Excellence in Research for Australia).

This post is a product of that discussion and aims to clarify the role research offices play in preparing a university’s ERA submission.

More importantly, I also discuss the question of the impact these activities may or may not have on the publication practices of individual researchers.

To properly appreciate what is involved in preparing research outputs data for an ERA submission, there are some fundamental ERA concepts that need to be understood first, which I will attempt to explain as clearly as possible.

Out of necessity, the following is taken largely from the ERA 2012 Submission Guidelines, so I apologise in advance for the “policy speak” but hopefully you’ll hang in there long enough for us to get to the good bit.

The Rules of the Game

For the purposes of ERA, institutions must assign at least one – and up to a maximum of three – 4-digit FoRs to each research output, with a percentage apportionment for each code. The FoRs assigned should describe the discipline(s) of the research contained within the output. Institutions may assign any 4-digit FoR codes to research outputs, with the exception of journal articles: an article may be assigned only those FoR codes identified for the journal concerned in the ERA 2012 Journal List. It should also be noted that some journals are listed against 2-digit FoR codes and others are identified as “multidisciplinary”; both scenarios provide options in terms of which 4-digit codes can be assigned to the articles concerned.

In the case of journal articles that have significant content (defined as 66% or more) that could best be described by a different FoR code, institutions may assign that FoR even if the journal list does not assign that code to the journal in which the article was published; this is what is called the “reassignment exception”.
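To make these coding rules concrete, here is a minimal sketch in Python of how an output’s FoR assignment might be validated. Everything here – the function, the data structures and the journal list representation – is a hypothetical illustration for this post, not the ARC’s actual submission format or software.

```python
# A purely illustrative sketch of the ERA coding rules described above.
# Data structures and names are hypothetical, not the ARC's actual format.

MULTIDISCIPLINARY = "MD"  # placeholder marker for "multidisciplinary" journals


def valid_assignment(output_fors, journal_fors=None, reassignment_claim=False):
    """Check an output's FoR coding against the rules sketched above.

    output_fors        -- dict of 4-digit FoR code -> percentage apportionment
    journal_fors       -- for journal articles, the codes listed against the
                          journal in the ERA 2012 Journal List (2-digit,
                          4-digit, or MULTIDISCIPLINARY); None for other types
    reassignment_claim -- True if 66% or more of the article's content is best
                          described by an off-list code (the "reassignment
                          exception")
    """
    # At least one, and at most three, 4-digit codes, apportioned to 100%.
    if not 1 <= len(output_fors) <= 3:
        return False
    if sum(output_fors.values()) != 100:
        return False

    # Non-article outputs may carry any 4-digit codes, as may articles in
    # journals listed as "multidisciplinary".
    if journal_fors is None or MULTIDISCIPLINARY in journal_fors:
        return True

    # Otherwise, each code must appear on the journal's list (a 2-digit
    # listing covers all of its 4-digit children), unless the reassignment
    # exception is invoked.
    for code in output_fors:
        on_list = any(code.startswith(j) for j in journal_fors)
        if not on_list and not reassignment_claim:
            return False
    return True


# An article split across two 4-digit children of a journal's 2-digit listing:
print(valid_assignment({"1103": 60, "1117": 40}, journal_fors=["11"]))  # True
```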

The next important concept to understand is the ERA “low volume threshold”. For a Unit of Evaluation, or FoR, to be evaluated in ERA for a given institution, it must meet a minimum research output volume threshold. For disciplines where citation analysis is used as part of the evaluation, the low volume threshold is 50 apportioned citation database indexed journal articles. So, if the number of apportioned indexed journal articles over the six year reference period is fewer than 50, no evaluation is conducted for that FoR for that institution.

Similarly, for disciplines where citation analysis is not used (aka “peer review disciplines”), the low volume threshold is the equivalent of 50 apportioned research outputs of any type (i.e. it’s not just limited to journal articles, but includes all other publications and non-traditional output types, with books given an effective weighting of 5:1). Where a research output has been apportioned across more than one FoR, the contribution towards the low volume threshold is calculated on the basis of the percentage apportionment.
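As a back-of-envelope illustration of the threshold arithmetic, the sketch below (again hypothetical, and far simpler than the ARC’s actual methodology) shows how apportioned outputs accumulate towards the 50-output threshold, including the 5:1 weighting of books in peer review disciplines.

```python
# A rough, illustrative sketch of the low volume threshold arithmetic
# described above; the real ERA calculation has many more details.

LOW_VOLUME_THRESHOLD = 50
BOOK_WEIGHT = 5  # books count 5:1 in peer review disciplines


def apportioned_count(outputs, for_code, citation_discipline):
    """Sum each output's fractional contribution to a given FoR.

    outputs             -- list of (output_type, {FoR code: percentage}) pairs
    citation_discipline -- if True, only (indexed) journal articles count;
                           if False (peer review), all types count, with
                           books weighted 5:1
    """
    total = 0.0
    for output_type, fors in outputs:
        if for_code not in fors:
            continue
        # In citation analysis disciplines, only journal articles contribute
        # (this sketch assumes all listed articles are citation-indexed).
        if citation_discipline and output_type != "journal_article":
            continue
        weight = BOOK_WEIGHT if output_type == "book" else 1
        total += weight * fors[for_code] / 100
    return total


# Hypothetical outputs apportioned to FoR "1103" over the reference period:
outputs = [
    ("journal_article", {"1103": 60, "1117": 40}),   # contributes 0.6
    ("book", {"1103": 100}),                         # contributes 5 x 1.0 = 5.0
    ("conference_paper", {"1103": 50, "0806": 50}),  # contributes 0.5
]

count = apportioned_count(outputs, "1103", citation_discipline=False)
print(count >= LOW_VOLUME_THRESHOLD)  # False: ~6.1 of 50, so "not assessed"
```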

Where the low volume threshold is not met for an FoR at an institution, that Unit of Evaluation is publicly reported as ‘not assessed’. Therefore, the institution is not considered by ERA as ‘research active’ for that FoR. Importantly, it should be noted that all outputs are still reported, given the rules also require comprehensive reporting of all research activities, but not all FoRs are directly evaluated by the ERA Evaluation Committees and awarded a quality rating. All data submitted for ERA (which also extends beyond research outputs data) is still included in national aggregated reporting.

ERA Submission Preparation Activities

There are various ways and means by which university research offices collect their base research outputs data. At my current institution, we require that researchers annually report their publications and other outputs via our research management system. We ask that, when entering the details, they identify the relevant FoRs they would assign to the output and we do not put any restrictions on which codes they may want to choose, other than limiting the number of codes to three 4-digit FoRs.

When beginning to prepare the outputs data for the six-year ERA reference period, we first need to analyse the codes originally assigned by researchers to determine whether they meet the submission rules outlined above. We are particularly concerned with journal articles, given their unique restrictions and the fact that they usually make up the bulk of the outputs submitted. Should the FoR coding of journal articles not comply with the ERA rules, we need to make a call regarding how best to code the articles, and there are many considerations that inform that decision-making process. At my university, we also consult senior academic staff to assist us with this process. Ensuring that we meet the minimum threshold in FoRs for which we claim to be research active would be high on the agenda, for example.

Given that ERA assesses disciplines as a whole and not individual researchers, the hope and aim is that the excellent research being undertaken within the institution is reflected appropriately within the rules and constraints outlined above. This could simply be called the “strategic reassignment” of FoRs, but some feel that it is actually inappropriate gaming of the system.

Gaming Allegations

In an article in The Australian earlier this year (ARC flags more transparency, 13 June 2013), following the release of an NTEU report (Impact of ERA Research Assessment on University Behaviour and their Staff, April 2013), the ARC CEO, Professor Aidan Byrne, was quoted as saying that he didn’t believe universities were operating outside the rules of ERA. This was in response to concerns about ERA gaming which, in essence, centred on universities coding research into inappropriate fields, thereby boosting the apparent performance of one FoR and/or hiding areas of low performance by ensuring they fall below the low volume threshold and so avoid scrutiny.

However, according to ARC Executive General Manager Leanne Harvey, who was also quoted in the piece, only six per cent of the research outputs submitted for ERA 2012 were coded somewhere other than where you would initially think they would be. Ms Harvey went on to say that, given the multidisciplinary nature of research, universities had legitimate choices to make over where to code certain research outputs and, from my experience, I would agree with those sentiments.

ERA Impact on Researchers

I appreciate the concerns raised by the NTEU, but I believe the issues raised in their report and other articles, such as Survival skills: the impact of change and the ERA on Australian researchers (Michael Hughes & Dawn Bennett, 2013, ‘Higher Education Research & Development’, 32:3, 340-354), refer more to research management practices that are aimed at maximising future ERA outcomes beyond the pointy end of submission preparation activities.

While I cannot speak to whether such practices are occurring in certain institutions, the fact remains that every Government policy produces some unintended or even perverse outcomes, and it would be naïve of the Government to expect that this wouldn’t occur on some level. Moreover, we still can’t predict the outcomes of the ERA evaluation process, or even fully understand exactly what happens behind those closed-door panel discussions, so any attempts to maximise ERA outcomes may ultimately be futile.

I maintain the view that researchers should be focussed on publishing in the most appropriate outlet that will reach the right audience for the dissemination of their research.

While it’s useful to have an understanding of the rules of ERA, the nitty-gritty of preparing FoR codings and allocations is really the purview of ERA research administrators. These number crunchers, myself among them, are focussed on ensuring that your research is reflected in the best possible light, within the constraints of the game, hopefully for the benefit of your discipline(s) and the institution as a whole.

Or am I also being a little naïve?

Comments

  1. Thanks a lot for your blog post, very informative and I have way too many further questions. We often hear about individual academics, departments or universities “gaming” systems of research evaluation, but actual evidence of widespread gaming (or any gaming) is rarely supplied.

    If universities were gaming the FOR codes to bring weak fields under the volume threshold, wouldn’t this be pretty obvious in terms of the number of FORs they are active in? In 2012, four unis were active in all 22 FORs, and a further 9 were active in 20-21 of the 22 FORs. So, transferring publications out of the inactive fields within these unis would have to be a rather limited activity. Also, I expect it would be quite an embarrassment for a large university to have a large number of students and academic staff in fields where they were research inactive. Has anyone created a table showing this? It could be quite a shameful list.

    Personally, I don’t think you are being naive about academic publishing. Academics who are research active already know which journals, conferences and publishers are reputable. The former ranking system was controversial, but I can’t imagine too many reputable journal outlets are missing from the 24,000+ ERA list. The FOR system you describe seems appropriate, but I doubt many academics take much notice of it. Individual reputation is more important than university or departmental reputation. It all seems fine, so long as managers do not intervene and reallocate an FOR from what the author declares to something well beyond what the journal is considered to publish.

    One thing which I occasionally heard was that academics were discouraged from publishing in the formerly ascribed “C” ranked journals, to the extent that apparently they (and the uni) were better off not publishing at all than having such publications on their list. Was this a direct result of how ERA was being used by universities or the government? Was this just a method to bring FORs under the threshold or was it about gaming the average quality of publication within an FOR? Is it still advisable for early career academics (or others) to avoid publishing in certain channels?

  2. Thanks for your comments, Peter. A comparison of active Fields of Education versus FoRs could be an interesting exercise; I’m not aware of any such analysis, although the Bradley Review certainly encouraged a better alignment of research and teaching strengths.

    Reallocation of FoRs, if different from what was originally declared by the author(s), should still make sense in terms of the content of the article. This is particularly important in “peer review disciplines” given we nominate 30% for that peer review evaluation and these are directly accessed and assessed by evaluators via institutional repositories. It would be pretty obvious to these discipline experts, in my opinion, if an output was assigned to a nonsensical FoR.

    I believe the old journal rankings were abolished by the (then) Minister because of concerns regarding the way some institutions were using the list to drive inappropriate publishing behaviours, although I am aware that some disciplines continue to maintain their own lists!

    I believe your last point possibly relates to a small disclaimer under the ERA Rating Scale in the ERA Evaluation Handbook (which you can find on the ARC website), which says that “in order to achieve a rating at a particular point on the scale, THE MAJORITY of the output from the UoE [FoR] will normally be expected to meet the standard for that rating point” (emphasis added). Nevertheless, I am sticking to the advice given to individual academics as stated in my post.
