Research methods vs approaches

Portrait of Jonathan Laskovsky

Jonathan Laskovsky is the Senior Coordinator, Research Partnerships, in the College of Design and Social Context at RMIT University. He is primarily responsible for managing research partnerships support and administration within the College.

Alongside this role, Jonathan has research interests in modern and postmodern literature with a particular focus on fictional space and critical theory.

He tweets as @JLaskovsky and can be found on LinkedIn.


Method Man (aka Clifford Smith) performing at Shattuck Down

Method Man, by Alyssa Tomfohrde from Oakland, USA, CC BY 2.0.

I am a Method Man. No, this does not involve being part of the Wu-Tang Clan. I’m not even referencing the fact that most university researchers exist in a paradigm easily summarised by Wu-Tang’s most famous line: Cash Rules Everything Around Me (C.R.E.A.M.).

I mean that when I read your research application, I take a very close look at your research methods.

This is driven, in part, by the systemic behaviour of reviewers, who are prone to attacking the methodology of research grants. Anecdotally, this is understood as ‘neutral’ ground (it is less personal than attacking the track record of the applicant) and, thus, less likely to cause offence while still enabling the reviewer to kill the application. Enabling the reviewer to become a kind of Ghostface Killah.

Yet those same reviewers may be onto something. Quite often the methodology is a grant application’s greatest weakness.

A tale of two interviews

Our anonymous author approached the Research Whisperers with this post about disrupted expectations in a research interview, and the importance of approaching these encounters with an open mind. This is a lesson at the heart of all research, but it is easy to build up presumptions about our skills and expertise.

Having our intellectual expectations upended can be confronting and frustrating, but it can also be enlightening about the topic and ourselves.


Speech bubbles at Erg by Marc Walthieu | flickr.com | Shared via CC BY-NC-ND 2.0

I’m conducting interview-based research on a complex social problem in Australia, and I had the opportunity to interview a woman I’d been looking forward to meeting for some time.

She was the CEO of an organisation that delivers services to a marginalised group, which was an important perspective for one of my case studies. I knew she held some views on my research topic that were very similar to my own, which can help with rapport.

I expected it to be a positive experience. It ended up being the most uncomfortable interview I’ve ever conducted, for this or any other project.

The awkwardness started before we even sat down, when I held out my hand in greeting and she (let’s call her P01) went in for a kiss on the cheek. Not the way research interviews in office settings are usually kicked off! Once we were seated, one of her first comments was that she doesn’t normally participate in research interviews, so I should feel lucky, and that she had agreed to do it this time because she could tell that I supported the organisation’s work. Also, could we keep it to 40 minutes? I assured her that I very much appreciated her time, and quietly panicked at how unwelcoming this little exchange felt.

I started asking my questions, which she answered quite briefly and directly, occasionally chuckling or waiting for me to reframe my words into an actual question before responding. In my experience, participants usually respond at length at the mere mention of a topic (without necessarily waiting for me to ask a question), so muted alarm bells continued to ring.

Second time around

Yesterday, I was providing advice to a researcher for a grant application resubmission.

You know how it goes: they had put something in last year, it had been reviewed, then rejected. I offered to have a look at it, to treat it as a first draft for this year’s application round.

It turned out that I thought that the researcher needed to:

  • Clarify the core research question,
  • Cut back on the background, and
  • Flesh out the project plan.

This is pretty standard. I tell people this a lot!

I’m thinking of getting a ‘Detail! Detail! Detail!’ t-shirt made up.


Recruiting research participants using Twitter

Andrew Glover is a Research Fellow at RMIT University, based in the Digital Ethnography Research Centre and the Beyond Behaviour Change Group.

He is interested in sustainability, air travel, and remote collaboration. He tweets at @theandrewglover.


Recruiting research participants is often time-consuming work.

Emailing people directly can be effective, but does seem intrusive at times, given the amount of email many of us deal with on a daily basis.

Sometimes, you just want to get your message out there as far and wide as possible, beyond your personal and professional networks.

If you cannot join the Army - Try & get a Recruit

British WWI Recruitment Poster, by State Records NSW on Flickr

Recently, I’ve used Twitter to recruit survey and interview participants for two projects.

The first was an online survey about academic air travel in Australia, and the second was a call for interviews with people who collaborate remotely without travelling. In both cases, I’ve been impressed by the extent to which the message was distributed across the networks of people I was hoping to reach.

The air travel survey was completed by over 300 academics throughout Australia, with respondents from every broad field of research. I combined this with emailing universities and academic associations directly, asking them to pass the message on to their staff and members.

For the project on remote collaboration, I had 13 people respond immediately who were willing to be interviewed, including people from Australia, New Zealand, Europe, and the USA.

Mixed-up methods

A street sign leading to five different destinations, in Chinese and English.

Where to?, by Jonathan O’Donnell on Flickr.

I’ve been seeing a lot of applications lately where the methods section starts something like this:

In this project, we adopt a mixed methods approach…

It is a statement that I’m coming to loathe, because the author is generally saying:

Warning: muddled thinking ahead.
In the following section, we are going to provide a grab-bag of methods. Their connection to the research question will be weak, at best. Hell, they probably won’t even be well linked to each other…

There are no mixed methods

In a grant application, the purpose of the methods section is to show how you’re going to answer your research question. Not explore the question. Not interrogate the space. Not flail about without a clue.

Your methods should present a road-map from not-knowing to knowing. If you are using more than one method (and, let’s face it, who isn’t?), you need to show that your methods will:

  1. Work (get you to your goal); and
  2. Link together (be greater than the sum of their parts).

You need to show me both of those things, not just one of them (or, as is sometimes the case, neither of them).


How to write a simple research methods section

Photo by Mel Hattie | unsplash.com

Yesterday I read a research application that contained no research methods at all.

Well, that’s not exactly true.

In an eight-page project description, there were exactly three sentences that described the methods. Let’s say it went something like this:

  • There was to be some fieldwork (to unspecified locations),
  • Which would be analysed in workshops (for unspecified people), and
  • There would be analysis with a machine (for unspecified reasons).

In essence, that was the methods section.

As you might imagine, this led to a difficult (but very productive) discussion with the project leader about what they really planned to do. They knew what they wanted to do, and that conversation teased this out. I thought that I might replicate some of that discussion here, as it might be useful for other people, too.

I’ve noticed that most researchers find it easy to write about the background to their project, but much harder to describe their methods in any detail.

In part, this is a product of how we write journal articles. Journal articles describe, in some detail, what happened in the past. They look backwards. Research applications, in contrast, look forwards. They describe what we plan to do. It is much harder to think about the future, in detail, than it is to remember what happened in the carefully documented past.

As a result, I often write on draft applications ‘less background, more methods’. Underlying that statement is an assumption that everybody knows how to write a good methods section. Given that people often fail, that is clearly a false assumption.

So, here is a relatively simple way to work out what should go into your methods section.


Lazy sampling

An old wooden post with grass growing out the top.

Old post, by Jonathan O’Donnell on Flickr

There are lots of different sorts of sampling techniques (both random and non-random) and a myriad of books that explain the best one to use for any given methodology. ‘Snowball sampling’ is my favourite – you start with one person and ‘sample’ them, then you ask them who you should talk to next, and so on. I like the convenience (when it is used well), and I love the name.
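For readers who think in code, here is a minimal sketch of the snowball logic in Python. It is purely illustrative and not from any of the methods texts: the `get_referrals` function is a hypothetical stand-in for asking each participant who you should talk to next.

```python
# A hypothetical sketch of snowball sampling. `get_referrals` stands in
# for asking an interviewee "who should I talk to next?".

def snowball_sample(seed, get_referrals, target_size=30):
    """Build a sample by following referrals, starting from one seed person."""
    sampled = []             # participants 'interviewed' so far, in order
    to_contact = [seed]      # queue of people we have been referred to
    contacted = {seed}       # avoid approaching the same person twice

    while to_contact and len(sampled) < target_size:
        person = to_contact.pop(0)
        sampled.append(person)                # sample this person
        for referral in get_referrals(person):
            if referral not in contacted:     # only follow fresh referrals
                contacted.add(referral)
                to_contact.append(referral)
    return sampled

# Toy example: a small referral network.
network = {"Alice": ["Bob", "Carol"], "Bob": ["Dana"], "Carol": [], "Dana": []}
print(snowball_sample("Alice", lambda p: network.get(p, [])))
# -> ['Alice', 'Bob', 'Carol', 'Dana']
```

The sketch also makes snowball sampling’s well-known weakness visible: the sample can only ever reach people connected to the seed, which is exactly why it has to be used well.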

It seems to me that lots of people use a technique that I don’t like at all.

Let’s call it “Lazy sampling”.

“Participants in the study were 35 undergraduate students (24 women, 11 men) aged 18 to 26, recruited from a large university in [the area where the authors work]. We recruited participants in the [sociology] lab at the main campus and many received extra credit in their courses for participating in the study.”
[Information obscured so as not to embarrass the authors]

Just to be clear – this was not an educational research paper. It wasn’t talking about pedagogy or course development. It wasn’t a study about tertiary education. They were looking at a general social issue, using university students as their sample.

In selecting their sample, the researchers made the following decisions:

  • They drew their sample from the University where they work, or (at best) a university nearby.
  • They drew their sample from one laboratory on one campus.
  • They either constrained the age of their sample so that only students would be selected, or they defined their age range after they had seen the ages of the students.
  • They either constrained their sample to students or they worked within a reward structure that resulted in only students applying. No admin staff, no faculty staff, no visitors to the laboratory…
  • They either didn’t care about gender or they didn’t try to recruit for even numbers.

Why? Why would anybody constrain their sample so tightly? One laboratory – what is that about? Why would anyone exclude staff or visitors to the university? As a researcher, why wouldn’t you step outside the university and recruit from the local town or city? Wouldn’t it make your study stronger? Any expansion of the sample would have made this paper more interesting.
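To make the problem concrete, here is a small, hypothetical simulation (mine, not the authors’): if whatever you are measuring happens to vary with age, a sample restricted to 18-to-26-year-old students will give you a systematically different answer than a sample drawn from the wider community.

```python
import random

random.seed(1)

# Hypothetical population of 10,000 adults aged 18-80, where the attribute
# we measure happens to increase with age (purely an illustrative assumption).
population = []
for _ in range(10_000):
    age = random.randint(18, 80)
    attitude = 0.1 * age + random.gauss(0, 2)  # age-linked signal plus noise
    population.append((age, attitude))

def mean_attitude(sample):
    return sum(attitude for _, attitude in sample) / len(sample)

# 'Lazy' sample: 35 people aged 18-26, like students recruited in one lab.
lazy_pool = [p for p in population if p[0] <= 26]
lazy_sample = random.sample(lazy_pool, 35)

# Broader sample: 35 people drawn from the whole community.
broad_sample = random.sample(population, 35)

print(f"Lazy sample mean:  {mean_attitude(lazy_sample):.2f}")
print(f"Broad sample mean: {mean_attitude(broad_sample):.2f}")
print(f"Population mean:   {mean_attitude(population):.2f}")
```

The constrained sample is not just noisier; it is biased, because the restriction is correlated with the thing being measured. No amount of statistical polish fixes that after the fact.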

I don’t like this lazy sampling. I don’t like it at all. It really disappoints me when I open a promising article, only to find that the sample was this small and constrained.

Putting aside my own personal disappointment, lazy sampling is poor research.
