Online research recruitment as a linguist

Liubov Baladzhaeva is a PhD student at the Department of English Language and Literature, University of Haifa, Israel.

She moved to Israel from Russia eight years ago and, as a multilingual immigrant, got interested in second language acquisition and cross-linguistic influence.

She tweets at @baladzhaeval.

Liubov answered our recent call for posts about recruiting for research online, and she is the first of our generous community to do so after that call-out.

Andrew Glover wrote for us late last year about recruiting research participants using Twitter, and we realised the level of interest in this topic is very significant!


Photo by Maxime VALCARCE on unsplash.com


The Internet makes connecting with strangers a lot easier and it’s a great way to find potential study participants.

Especially if you need some other population than the undergrads at your university.

Especially if you don’t have money to pay people to participate in your study.

There are, broadly, two types of online recruitment:

  1. When you need people to participate in an online study (survey, questionnaire, experiments, Skype interviews, etc.). This first type can also be divided into two subtypes:
    1. when you just post a link to the survey and people click on the link and (hopefully) fill it out, and
    2. when you post the recruitment ad but then people need to receive a link/links from you or to chat with you over Skype.
  2. When you need to find people that would be able to meet with you or your research assistants in person.

For my studies, I did all of the above.


A tale of two interviews

Our anonymous author approached the Research Whisperers with this post about disrupted research interview expectations and the importance of approaching these encounters with an open mind. This is a lesson at the heart of all research, but it can be easy to build up presumptions around our skills and expertise. 

Having our intellectual expectations upended can be confronting and frustrating, but it can also be enlightening about the topic and ourselves.


Speech bubbles at Erg by Marc Walthieu | flickr.com | Shared via CC BY-NC-ND 2.0


I’m conducting interview-based research on a complex social problem in Australia, and I had the opportunity to interview a woman I’d been looking forward to meeting for some time.

She was the CEO of an organisation that delivers services to a marginalised group, which was an important perspective for one of my case studies. I knew she held some views on my research topic that were very similar to my own, which can help with rapport.

I expected it to be a positive experience.  It ended up being the most uncomfortable interview I’ve ever conducted, for this or any other project.

The awkwardness started before we even sat down, when I held out my hand in greeting and she (let’s call her P01) went in for a kiss on the cheek. Not the way research interviews in office settings are usually kicked off! Once we were seated, one of her first comments was that she doesn’t normally participate in research interviews, so I should feel lucky, and that she had agreed to do it this time because she could tell that I supported the organisation’s work. Also, could we keep it to 40 minutes? I assured her that I very much appreciated her time, and quietly panicked at how unwelcoming this little exchange felt.

I started asking my questions, which she answered quite briefly and directly, occasionally chuckling or waiting for me to reframe my words into an actual question before responding. In my experience, participants usually respond at length at the mere mention of a topic (without necessarily waiting for me to ask a question), so muted alarm bells continued to ring.

Mixed-up methods

A street sign leading to five different destinations, in Chinese and English.

Where to?, by Jonathan O’Donnell on Flickr.

I’ve been seeing a lot of applications lately where the methods section starts something like this:

In this project, we adopt a mixed methods approach…

It is a statement that I’m coming to loathe, because the author is generally saying:

Warning: muddled thinking ahead.
In the following section, we are going to provide a grab-bag of methods. Their connection to the research question will be weak, at best. Hell, they probably won’t even be well linked to each other…

There are no mixed methods

In a grant application, the purpose of the methods section is to show how you’re going to answer your research question. Not explore the question. Not interrogate the space. Not flail about without a clue.

Your methods should present a road-map from not-knowing to knowing. If you are using more than one method (and, let’s face it, who isn’t?), you need to show that your methods will:

  1. Work (get you to your goal); and
  2. Link together (be greater than the sum of their parts).

You need to show me both of those things, not just one of them (or, as is sometimes the case, neither of them).


How to write a simple research methods section

Photo by Mel Hattie | unsplash.com


Yesterday I read a research application that contained no research methods at all.

Well, that’s not exactly true.

In an eight-page project description, there were exactly three sentences that described the methods. Let’s say it went something like this:

  • There was to be some fieldwork (to unspecified locations),
  • Which would be analysed in workshops (for unspecified people), and
  • There would be analysis with a machine (for unspecified reasons).

In essence, that was the methods section.

As you might imagine, this led to a difficult (but very productive) discussion with the project leader about what they really planned to do. They knew what they wanted to do, and that conversation teased this out. I thought that I might replicate some of that discussion here, as it might be useful for other people, too.

I’ve noticed that most researchers find it easy to write about the background to their project, but it’s much more difficult to have them describe their methods in any detail.

In part, this is a product of how we write journal articles. Journal articles describe, in some detail, what happened in the past. They look backwards. Research applications, in contrast, look forwards. They describe what we plan to do. It is much harder to think about the future, in detail, than it is to remember what happened in the carefully documented past.

As a result, I often write on draft applications ‘less background, more methods’. Underlying that statement is an assumption that everybody knows how to write a good methods section. Given that people often fail, that is clearly a false assumption.

So, here is a relatively simple way to work out what should go into your methods section.


Research on a shoe-string

Dr Emily Kothe is a lecturer in psychology at Deakin University.

Emily conducted her PhD at the University of Sydney on promoting fruit and vegetable consumption to Australian young adults. She graduated in 2012.

Her honours, masters and PhD projects had a combined budget of less than $400.

Emily is in the process of writing her first set of internal grant applications as an academic staff member, and is interested in the process of developing projects in the context of conducting research on a shoe-string.


I’ve been going through my paperwork from my student days recently. In the process, I found my funding requests for my PhD research. Not including conference travel, my research expenses for my PhD were $375.95.

That included a 1-month subscription to Thinkstock to allow me to buy high quality images for use in an online intervention to increase fruit and vegetable consumption, and the purchase of the domain name that I used for the intervention website. My Honours and Masters projects, and the research I’ve been running for the year since completing my PhD, have all been conducted at zero cost (except for my time).

Save money - by shopping (Photo by Toban Black - http://www.flickr.com/photos/tobanblack)


This means that in the last 6 years I’ve spent an average of $62.60 a year on research costs.

At research institutions, developing, submitting, and ultimately receiving, competitive grants is a key indicator of productivity and performance for academic staff. This means that obtaining a Category 1 grant (e.g. ARC Discovery or NHMRC Project Grant) is central to my career development.

Assuming that I want to progress in my career (spoiler alert: I do!), I would be expected to apply for a faculty-level internal grant ($$), then a university-level seed grant ($$$), then a Category 1 grant ($$$$$$).

As a freshly minted academic staff member, I’m starting small with the preparation of a faculty-level grant (the maximum budget is $18,000, with all funds to be spent in a year). In the process of preparing this grant, I’ve had to think about spending about 47 times more on research than I ever have before. Obviously, I don’t need to ask for the whole amount, but spending months putting together a request for $62.60 in funding would be a colossal waste of time for everyone involved!


Lazy sampling

An old wooden post with grass growing out the top.

Old post, by Jonathan O’Donnell on Flickr

There are lots of different sorts of sampling techniques (both random and non-random) and a myriad of books that explain the best one to use for any given methodology. ‘Snowball sampling’ is my favourite – you start with one person and ‘sample’ them, then you ask them who you should talk to next, and so on. I like the convenience (when it is used well), and I love the name.
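Mechanically, snowball sampling is just a breadth-first walk over a referral network: sample one person, ask who to talk to next, sample those people, and repeat. A minimal sketch of that logic in Python (the `get_referrals` callback, the toy network, the names, and the wave limit are all invented for illustration, not drawn from any of the studies above):

```python
def snowball_sample(seed, get_referrals, target_size, max_waves=5):
    """Breadth-first snowball sample: start from one seed participant,
    then follow referrals wave by wave until target_size is reached
    or the referral chain dries up."""
    sampled = [seed]   # participants, in order of recruitment
    seen = {seed}      # avoid sampling anyone twice
    frontier = [seed]
    for _ in range(max_waves):
        next_frontier = []
        for person in frontier:
            for referral in get_referrals(person):
                if referral in seen:
                    continue
                seen.add(referral)
                sampled.append(referral)
                next_frontier.append(referral)
                if len(sampled) >= target_size:
                    return sampled
        if not next_frontier:
            break  # nobody referred anyone new; the snowball stopped rolling
        frontier = next_frontier
    return sampled

# A hypothetical referral network, for illustration only.
network = {"Ana": ["Ben", "Cat"], "Ben": ["Dee"], "Cat": ["Dee", "Eli"]}
sample = snowball_sample("Ana", lambda p: network.get(p, []), target_size=4)
# sample -> ["Ana", "Ben", "Cat", "Dee"]
```

The sketch also shows why the technique needs care: everyone you reach is connected, directly or indirectly, to your seed, so the sample inherits whatever biases that starting network has.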

It seems to me that lots of people use a technique that I don’t like at all.

Let’s call it “Lazy sampling”.

“Participants in the study were 35 undergraduate students (24 women, 11 men) aged 18 to 26, recruited from a large university in [the area where the authors work]. We recruited participants in the [sociology] lab at the main campus and many received extra credit in their courses for participating in the study.”
[Information obscured so as not to embarrass the authors]

Just to be clear – this was not an educational research paper. It wasn’t talking about pedagogy or course development. It wasn’t a study about tertiary education. They were looking at a general social issue, using university students as their sample.

In selecting their sample, the researchers made the following decisions:

  • They drew their sample from the University where they work, or (at best) a university nearby.
  • They drew their sample from one laboratory on one campus.
  • They either constrained the age of their sample so that only students would be selected, or they defined their age range after they had seen the ages of the students.
  • They either constrained their sample to students or they worked within a reward structure that resulted in only students applying. No admin staff, no faculty staff, no visitors to the laboratory…
  • They either didn’t care about gender or they didn’t try to recruit for even numbers.

Why? Why would anybody constrain their sample so tightly? One laboratory – what is that about? Why would anyone exclude staff or visitors to the university? As a researcher, why wouldn’t you step outside the university and recruit from the local town or city? Wouldn’t it make your study stronger? Any expansion of the sample would have made this paper more interesting.

I don’t like this lazy sampling. I don’t like it at all. It really disappoints me when I open a promising article, only to find that the sample was this small and constrained.

Putting aside my own personal disappointment, lazy sampling is poor research.


What’s your discipline?

The author in outrageous eyelashes and lipstick.

Check those lashes! by Jonathan O’Donnell on Flickr

I was sitting in a workshop a while back (actually, it was a lecture, but nobody calls them ‘lectures’ when staff are attending), and an eminent professor used the phrase ‘cross disciplinary’.

“That’s pretty retro,” I thought. “Everybody talks about being ‘multi-disciplinary’ or ‘inter-disciplinary’. Actually, even that is a bit passé. ‘Trans-disciplinary’ is the word of the day. What the hell do these terms mean, anyway?”

Given that most of my working life consists of writing:

‘Be precise’,
‘What exactly do you mean?’, and
‘Reword for clarity’

in the margin of draft grant applications, I thought that I should come up with some working definitions, at least for my own satisfaction.

After all, these words are fundamental to our conception of modern research. They deserve precise definitions.
