You are more than your FoR code

Photo by Coley Christine Catalano – http://coleyslocket.com/ (Sourced from unsplash.com)

Do you publish in books and journals that you think are best for your work?

While this may come across as a dense question, it is a live and thorny issue for many scholars caught up in national 'research quality' metrics that rank publications, particularly journals.

@thesiswhisperer commented recently that “[c]lassifying my publications by FOR code makes me look like a person who can’t make up their mind what they want to do”.

The title for this post paraphrases @jod999, who responded wisely with: “Your success says a lot more than your #FoR codes. Just keep doing what you do.”

If you haven’t yet encountered a national research quality exercise, I have two things to say to you:

  1. Congratulations – you still walk in the light; and
  2. If you’re hoping to hang in academia for a bit, read on to work out how you might negotiate these research quality systems when they cross your radar.

Research quality exercises are created as standardised, supposedly objective modes of measuring the quality of research being produced by research organisations (and, down the ladder, by individual researchers).

The systems are also constantly embroiled in passionate debate about their viability, accuracy, and scope. Is research output the best way to measure research quality? Dare we talk about research impact? What do citations really measure about a piece of work? How much ‘gaming’ of the system, for its own sake, takes place?


FoRs and the Alleged “Gaming” of ERA

Michelle Duryea (ECU)

Michelle Duryea has been the Manager of Research Quality and Policy at Edith Cowan University for over four years. She is responsible for the University's ERA and HERDC submissions, research management systems, research policy development, reviews of research centres, and research performance evaluation, reward and reporting.

Prior to working at ECU, Michelle was the Associate Director of Research Evaluation at the Australian Research Council, directly involved in developing the original ERA policy and guidelines.

Before working for Government, she was Senior Policy Officer for the Australian Technology Network of Universities. Michelle is on Twitter as @MishDuryea


Numbers (Photo by supercake: http://www.flickr.com/photos/supercake)

Following a recent Research Whisperer post on ‘What’s a FoR?’, a Twitter conversation arose regarding the way Field of Research codes are assigned to publications for Government reporting purposes, specifically ERA (Excellence in Research for Australia).

This post is a product of that discussion and aims to clarify the role research offices play in preparing a university’s ERA submission.

More importantly, I also discuss whether these activities have any real impact on the publication practices of individual researchers.

To properly appreciate what is involved in preparing research output data for an ERA submission, you first need to understand some fundamental ERA concepts, which I will try to explain as clearly as possible.

Out of necessity, the following is drawn largely from the ERA 2012 Submission Guidelines, so I apologise in advance for the 'policy speak', but hopefully you'll hang in there long enough for us to get to the good bit.


ERA: The good, the bad, and the ugly

Associate Professor Peter Macauley (RMIT University)

Associate Professor Peter Macauley teaches in the information management programs at RMIT University. Before starting at RMIT, he worked for 30 years in public, special and university libraries.

Over the past decade Peter’s research has focused on doctoral pedagogy, knowledge production, information literacy, scholarly communication and distance education.

With colleagues, he has been awarded ARC funding for two Discovery projects: ‘Australian doctoral graduates’ publication, professional and community outcomes’, and ‘Research capacity-building: the development of Australian PhD programs in national and emerging global contexts’. He publishes regularly in journals best suited to the readership for his research; some happen to be ERA-ranked A and A* on the 2010 list.

The Research Whisperer knows Peter as one of the good guys: a researcher with integrity and perspective, who tells it like it is. 


Problematica (Photo by Tseen Khoo)

ERA, which stands for ‘Excellence in Research for Australia’, is similar in many ways to research frameworks used in other countries to evaluate the quality (and sometimes quantity) of the research output of universities and—indirectly—individuals.

In the United Kingdom, there is the REF (Research Excellence Framework); in New Zealand, the PBRF (Performance-Based Research Fund); and many other countries have similar schemes.

In this post, I focus on the journal ranking component of ERA.

Officially, the ERA journal rankings were abandoned after the first round of evaluation in 2010. Unofficially, the ERA journal rankings are alive and well and used for all the reasons they were withdrawn: job applications, promotions, grant applications and other forms of peer review (the bedrock of academe).
