Research methods vs approaches

Jonathan Laskovsky is the Senior Coordinator, Research Partnerships, in the College of Design and Social Context at RMIT University. He is primarily responsible for managing research partnerships support and administration within the College.

Alongside this role, Jonathan has research interests in modern and postmodern literature with a particular focus on fictional space and critical theory.

He tweets as @JLaskovsky and can be found on LinkedIn.

Method Man (aka Clifford Smith) performing at Shattuck Down

Method Man, by Alyssa Tomfohrde from Oakland, USA, CC BY 2.0.

I am a Method Man. No, this does not involve being part of the Wu-Tang Clan. I’m not even referencing the fact that most university researchers exist in a paradigm easily summarised by Wu-Tang’s most famous line: Cash Rules Everything Around Me (C.R.E.A.M.).

I mean that when I read your research application, I take a very close look at your research methods.

This is, in part, driven by systemic behaviour of reviewers who are prone to attacking the methodology of research grants. Anecdotally, this is understood as a ‘neutral’ ground (it is less personal than attacking the track record of the applicant) and, thus, less likely to cause offence while still enabling the reviewer to kill the application. Enabling the reviewer to become a kind of Ghostface Killah.

Yet those same reviewers may be onto something. Quite often the methodology is a grant application’s greatest weakness.


How to write a simple research methods section

Photo by Mel Hattie

Yesterday I read a research application that contained no research methods at all.

Well, that’s not exactly true.

In an eight-page project description, there were exactly three sentences that described the methods. Let’s say it went something like this:

  • There was to be some fieldwork (to unspecified locations),
  • Which would be analysed in workshops (for unspecified people), and
  • There would be analysis with a machine (for unspecified reasons).

In essence, that was the methods section.

As you might imagine, this led to a difficult (but very productive) discussion with the project leader about what they really planned to do. They knew what they wanted to do, and that conversation teased this out. I thought that I might replicate some of that discussion here, as it might be useful for other people, too.

I’ve noticed that most researchers find it easy to write about the background to their project, but find it much more difficult to describe their methods in any detail.

In part, this is a product of how we write journal articles. Journal articles describe, in some detail, what happened in the past. They look backwards. Research applications, in contrast, look forwards. They describe what we plan to do. It is much harder to think about the future, in detail, than it is to remember what happened in the carefully documented past.

As a result, I often write on draft applications ‘less background, more methods’. Underlying that statement is an assumption that everybody knows how to write a good methods section. Given that people often fail, that is clearly a false assumption.

So, here is a relatively simple way to work out what should go into your methods section.


Lazy sampling

An old wooden post with grass growing out the top.

Old post, by Jonathan O’Donnell on Flickr

There are lots of different sorts of sampling techniques (both random and non-random) and a myriad of books that explain the best one to use for any given methodology. ‘Snowball sampling’ is my favourite – you start with one person and ‘sample’ them, then you ask them who you should talk to next, and so on. I like the convenience (when it is used well), and I love the name.
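If it helps to see the idea concretely, here is a minimal sketch of the snowball procedure as code. It is an illustration only, not a recruitment protocol: the names and the referral network are entirely hypothetical, and in a real study each step would be an interview, not a dictionary lookup.

```python
def snowball_sample(seed, get_referrals, target_size):
    """Collect a snowball sample: start with one seed participant,
    then follow each participant's referrals until the target size
    is reached (or the referral chain runs dry)."""
    sampled = [seed]
    frontier = [seed]  # participants whose referrals we have not yet followed
    while frontier and len(sampled) < target_size:
        person = frontier.pop(0)
        for referral in get_referrals(person):
            if referral not in sampled and len(sampled) < target_size:
                sampled.append(referral)
                frontier.append(referral)
    return sampled

# Hypothetical referral network, for illustration only.
network = {
    "Alice": ["Bob", "Carol"],
    "Bob": ["Dana"],
    "Carol": ["Alice", "Erin"],
    "Dana": [],
    "Erin": [],
}

print(snowball_sample("Alice", lambda p: network.get(p, []), 4))
# → ['Alice', 'Bob', 'Carol', 'Dana']
```

The sketch also shows why the technique needs care: the sample you end up with is entirely shaped by who the seed knows, which is exactly the kind of bias a good methods section should acknowledge.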

It seems to me that lots of people use a technique that I don’t like at all.

Let’s call it “Lazy sampling”.

“Participants in the study were 35 undergraduate students (24 women, 11 men) aged 18 to 26, recruited from a large university in [the area where the authors work]. We recruited participants in the [sociology] lab at the main campus and many received extra credit in their courses for participating in the study.”
[Information obscured so as not to embarrass the authors]

Just to be clear – this was not an educational research paper. It wasn’t talking about pedagogy or course development. It wasn’t a study about tertiary education. They were looking at a general social issue, using university students as their sample.

In selecting their sample, the researchers made the following decisions:

  • They drew their sample from the University where they work, or (at best) a university nearby.
  • They drew their sample from one laboratory on one campus.
  • They either constrained the age of their sample so that only students would be selected, or they defined their age range after they had seen the ages of the students.
  • They either constrained their sample to students or they worked within a reward structure that resulted in only students applying. No admin staff, no faculty staff, no visitors to the laboratory…
  • They either didn’t care about gender or they didn’t try to recruit for even numbers.

Why? Why would anybody constrain their sample so tightly? One laboratory – what is that about? Why would anyone exclude staff or visitors to the university? As a researcher, why wouldn’t you step outside the university and recruit from the local town or city? Wouldn’t it make your study stronger? Any expansion of the sample would have made this paper more interesting.

I don’t like this lazy sampling. I don’t like it at all. It really disappoints me when I open a promising article, only to find that the sample was this small and constrained.

Putting aside my own personal disappointment, lazy sampling is poor research.


What’s your discipline?

The author in outrageous eyelashes and lipstick.

Check those lashes! by Jonathan O’Donnell on Flickr

I was sitting in a workshop a while back (actually, it was a lecture, but nobody calls them ‘lectures’ when staff are attending), and an eminent professor used the phrase ‘cross disciplinary’.

“That’s pretty retro,” I thought. “Everybody talks about being ‘multi-disciplinary’ or ‘inter-disciplinary’. Actually, even that is a bit passé. ‘Trans-disciplinary’ is the word of the day. What the hell do these terms mean, anyway?”

Given that most of my working life consists of writing:

‘Be precise’,
‘What exactly do you mean?’, and
‘Reword for clarity’

in the margin of draft grant applications, I thought that I should come up with some working definitions, at least for my own satisfaction.

After all, these words are fundamental to our conception of modern research. They deserve precise definitions.


The nitty and the gritty

Untitled (stool for guard) by Taiyo Kimura at MONA, Tasmania

'in a dark place' by Jonathan O'Donnell on Flickr

How did you feel when you finished your last grant application? Let me guess: you were exhausted, running late, and sick of the sight of the thing.

Hold onto that feeling. Embrace it. Understand it. Because that might have been how your assessors felt when they were reading your application.

They probably had a stack of applications too high to jump over, and were reading them in the gaps between the rest of their life (i.e. late at night, when they were travelling, when they were tired). Unless you were very lucky, they might have been reading your application piecemeal, a bit at a time. And they would have been rushing, because their reports were due and there were other things calling on their time.

As a result, they might not have been in top form when they reviewed your application.

But that’s enough about them; let’s get back to you!

“This application needed a good proof-read before being submitted.”

When you get a comment like that, it isn’t much comfort to remember that you only just submitted the application in time, an hour or so before the deadline. However, that is exactly the sort of thing that assessors say in their reports. Given that you’re asking for several hundred thousand dollars’ worth of funding, they have a point.

When an assessor is tired, or disappointed, or feels that you are wasting their time, they can often become quite critical.