Second time around

Yesterday, I was giving a researcher advice on a grant application resubmission.

You know how it goes: they had put something in last year, it had been reviewed, then rejected. I offered to have a look at it, to treat it as a first draft for this year’s application round.

As it turned out, I thought the researcher needed to:

  • Clarify the core research question,
  • Cut back on the background, and
  • Flesh out the project plan.

This is pretty standard. I tell people this a lot!

I’m thinking of getting a ‘Detail! Detail! Detail!’ t-shirt made up.


Recruiting research participants using Twitter

Andrew Glover is a Research Fellow at RMIT University, based in the Digital Ethnography Research Centre and the Beyond Behaviour Change Group.

He is interested in sustainability, air travel, and remote collaboration. He tweets at @theandrewglover.


Recruiting research participants is often time-consuming work.

Emailing people directly can be effective, but does seem intrusive at times, given the amount of email many of us deal with on a daily basis.

Sometimes, you just want to get your message out there as far and wide as possible, beyond your personal and professional networks.

British WWI recruitment poster (‘If you cannot join the Army - Try & get a Recruit’), by State Records NSW on Flickr

Recently, I’ve used Twitter to recruit survey and interview participants for two projects.

The first was an online survey about academic air travel in Australia, and the second was a call for interviews with people who collaborate remotely without travelling. In both cases, I’ve been impressed by how far the message spread across the networks of people I was hoping to reach.

The air travel survey was completed by over 300 academics throughout Australia, with respondents from every broad field of research. I combined the Twitter call with emailing universities and academic associations directly, asking them to pass the message on to their staff and members. For the project on remote collaboration, 13 people willing to be interviewed responded immediately, from Australia, New Zealand, Europe, and the USA.

Mixed-up methods

A street sign pointing to five different destinations, in Chinese and English. ‘Where to?’, by Jonathan O’Donnell on Flickr.

I’ve been seeing a lot of applications lately where the methods section starts something like this:

In this project, we adopt a mixed methods approach…

It is a statement that I’m coming to loathe, because the author is generally saying:

Warning: muddled thinking ahead.
In the following section, we are going to provide a grab-bag of methods. Their connection to the research question will be weak, at best. Hell, they probably won’t even be well linked to each other…

There are no mixed methods

In a grant application, the purpose of the methods section is to show how you’re going to answer your research question. Not explore the question. Not interrogate the space. Not flail about without a clue.

Your methods should present a road-map from not-knowing to knowing. If you are using more than one method (and, let’s face it, who isn’t?), you need to show that your methods will:

  1. Work (get you to your goal); and
  2. Link together (be greater than the sum of their parts).

You need to show me both of those things, not just one of them (or, as is sometimes the case, neither of them).


How to write a simple research methods section

Photo by Mel Hattie | unsplash.com

Yesterday I read a research application that contained no research methods at all.

Well, that’s not exactly true.

In an eight-page project description, there were exactly three sentences that described the methods. Let’s say it went something like this:

  • There was to be some fieldwork (in unspecified locations),
  • Which would be analysed in workshops (with unspecified participants), and
  • There would be analysis with a machine (for unspecified reasons).

In essence, that was the methods section.

As you might imagine, this led to a difficult (but very productive) discussion with the project leader about what they really planned to do. They knew what they wanted to do, and that conversation teased this out. I thought that I might replicate some of that discussion here, as it might be useful for other people, too.

I’ve noticed that most researchers find it easy to write about the background to their project, but much harder to describe their methods in any detail.

In part, this is a product of how we write journal articles. Journal articles describe, in some detail, what happened in the past. They look backwards. Research applications, in contrast, look forwards. They describe what we plan to do. It is much harder to think about the future, in detail, than it is to remember what happened in the carefully documented past.

As a result, I often write on draft applications ‘less background, more methods’. Underlying that statement is an assumption that everybody knows how to write a good methods section. Given that people often fail, that is clearly a false assumption.

So, here is a relatively simple way to work out what should go into your methods section.


Lazy sampling

An old wooden post with grass growing out the top. ‘Old post’, by Jonathan O’Donnell on Flickr.

There are lots of different sorts of sampling techniques (both random and non-random) and a myriad of books that explain the best one to use for any given methodology. ‘Snowball sampling’ is my favourite – you start with one person and ‘sample’ them, then you ask them who you should talk to next, and so on. I like the convenience (when it is used well), and I love the name.
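If it helps to see the mechanics, here is a minimal sketch of snowball sampling in Python. The ‘population’ and its referral lists are invented purely for illustration; in a real study, each new referral would come from asking a participant who you should talk to next.

import random

# Toy population: each participant can name a few acquaintances to contact next.
# These names and referral lists are made up for illustration only.
ACQUAINTANCES = {
    "Alice": ["Bianca", "Chen"],
    "Bianca": ["Chen", "Dev"],
    "Chen": ["Erin"],
    "Dev": ["Erin", "Farid"],
    "Erin": ["Farid"],
    "Farid": [],
}

def snowball_sample(seed, max_size=5, rng=random):
    """Start with one seed participant, then follow referrals until the
    target sample size is reached or the referrals run out."""
    sampled = [seed]
    queue = list(ACQUAINTANCES.get(seed, []))
    while queue and len(sampled) < max_size:
        nxt = rng.choice(queue)  # pick one referral to approach next
        queue.remove(nxt)
        if nxt not in sampled:
            sampled.append(nxt)
            # add their referrals, skipping anyone already sampled or queued
            queue.extend(p for p in ACQUAINTANCES.get(nxt, [])
                         if p not in sampled and p not in queue)
    return sampled

if __name__ == "__main__":
    random.seed(1)
    print(snowball_sample("Alice"))  # prints a small chain of referrals starting from Alice

Starting from ‘Alice’ with a cap of five, the chain simply follows referrals until it hits the cap or runs out of names, which is also why the final sample depends so heavily on where you start.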

It seems to me that lots of people use a technique that I don’t like at all.

Let’s call it “Lazy sampling”.

“Participants in the study were 35 undergraduate students (24 women, 11 men) aged 18 to 26, recruited from a large university in [the area where the authors work]. We recruited participants in the [sociology] lab at the main campus and many received extra credit in their courses for participating in the study.”
[Information obscured so as not to embarrass the authors]

Just to be clear – this was not an educational research paper. It wasn’t talking about pedagogy or course development. It wasn’t a study about tertiary education. They were looking at a general social issue, using university students as their sample.

In selecting their sample, the researchers made the following decisions:

  • They drew their sample from the university where they work, or (at best) a university nearby.
  • They drew their sample from one laboratory on one campus.
  • They either constrained the age of their sample so that only students would be selected, or they defined their age range after they had seen the ages of the students.
  • They either constrained their sample to students or they worked within a reward structure that resulted in only students applying. No admin staff, no faculty staff, no visitors to the laboratory…
  • They either didn’t care about gender or they didn’t try to recruit for even numbers.

Why? Why would anybody constrain their sample so tightly? One laboratory – what is that about? Why would anyone exclude staff or visitors to the university? As a researcher, why wouldn’t you step outside the university and recruit from the local town or city? Wouldn’t it make your study stronger? Any expansion of the sample would have made this paper more interesting.

I don’t like this lazy sampling. I don’t like it at all. It really disappoints me when I open a promising article, only to find that the sample was this small and constrained.

Putting aside my own personal disappointment, lazy sampling is poor research.
