The magic formula
18 April 2017
This article first appeared in Funding Insight on 2 March 2017 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com.
I’ll let you in on a little secret – there is no magic formula for getting a research grant.
It always comes as a surprise to me when I come across people who think that there is some magic to it. They don’t express it that way, of course. They say things like:
- You need a Professor on your team to get a grant.
- They are only funding [random topic] this year.
- It is all about the title – you need to grab them with the title.
I know a senior researcher who believes that getting your application started early will give you a greater chance of success. That sounded fine to me, until he explained that a lower serial number on his application is the ‘secret sauce’. That is, if he gets in early and grabs a ‘low number’ he has more chance than people with a high serial number. Srsly!
They have no evidence for these beliefs. They have a hunch, or folklore, or superstition. That is, they are in the realm of magical thinking.
Why would a researcher believe in magic? We work in an evidence-based world. This is a university! We respect data. At least, that’s the theory.
One of the reasons that people apply magical thinking to research grants is that the granting system is often a black box. That is, a grant application goes into the box, something happens outside of our view (and our control), and then a result comes out. Black boxes promote magical thinking. You can’t see what happens inside the box, so you invent a story to explain the results.
This is particularly true when there is an element of chance in the process (which there always is). You want to control the outcome, but you can’t. So, you tend to give greater agency to the elements that you can control. You didn’t get the grant this time, but you did last time – what was different? [Insert post-hoc reasoning here.] Conclusion: It must have been your lucky shoes! Turns out, you have to wear the lucky shoes to get the grant. Magical thinking.
I’m as guilty as anyone
When I look at my own practice – advising people how to improve their grant applications – I’m as guilty as anyone. Here are some of the things that I believe, with no real evidence at all:
- Smaller budgets reduce the risk, so they are easier to fund.
- It is harder to get a grant if there is only one researcher (rather than a team).
- You really need a book to be competitive for [a specific scheme].
At least my magical thinking is based on years of experience. [Note to self – this is not a reasonable excuse.]
The trouble is I have no evidence for these beliefs, these hunches. I’m drawing on years of experience, and a close reading of the rules. But that isn’t the same as actual data.
When I look at the data, I’m often surprised. For a particular fellowship, I’ve often said that you really need a book to be competitive. Someone challenged me on that. They didn’t have a book, so they asked how many journal articles were equal to a book (correct answer: there is no conversion rate between articles and books). When I looked at the data, I found that our successful candidates had very varied publication histories. Some had published a book, some hadn’t. Some had lots of articles; others had fewer articles, but in the highest quality journals.
There was no pattern.
There also wasn’t a lot of data. My university isn’t a tier-one university, and I only work with one part of the university, so I don’t have a lot of successful applications to work with. If I add in the unsuccessful applications, I have a larger sample, but no more indicators of success. So, that analysis can probably be a stronger predictor of failure, but failure isn’t what I’m trying to predict here.
Still, I have more data than my applicants do. I read, on average, 50-100 grant applications per year. They write, on average, probably one.
What you can do with data
When you do have the data, and you do the analysis, the results can be really useful. Last year, I spoke with a research whisperer from Northwestern University. She had systematically analysed the publications of all the people who had got tenure in Engineering at Northwestern. She had data, so she crunched the numbers. She was able to say to an incoming academic, “This is what you probably need to get tenure here.” That’s brilliant! It means that people have a clear target, rather than a nebulous set of demands (Publish more! Publish better!). It means that they can measure their progress, and can decide if they want to do that much work. They can assess their chances.
You can do this yourself, in a small way. A friend of mine looked at the top fifty people in the country in his particular field. They all had their publication lists online, so he just crunched the numbers. Turns out he didn’t need to do much to get from where he was to where they were (particularly the tail end of those 50 people). It made it achievable. It made it measurable. It made it real.
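The kind of back-of-the-envelope analysis described above could be sketched in a few lines of code. This is only an illustration: the function name and all the publication counts below are invented, not taken from my friend's actual data.

```python
# A minimal sketch of "crunching the numbers" against a benchmark group.
# All figures here are invented for illustration.

def gap_to_benchmark(my_count, field_counts):
    """How many more publications are needed to match the
    lowest-ranked person in the benchmark group (the 'tail end')."""
    floor = min(field_counts)
    return max(0, floor - my_count)

# Invented publication counts for a notional top fifty in the field.
top_fifty = sorted([12, 15, 18, 20, 22, 25, 30] * 7 + [40])[:50]

# With 10 publications, the gap to the tail end of the top fifty:
print(gap_to_benchmark(10, top_fifty))
```

The point of framing it this way is that the target becomes a number you can measure yourself against, rather than a vague sense of "publish more".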
I’m sure that there are a lot of grant offices out there that are already doing this sort of analysis on their successful and unsuccessful grants. I know that ours has done it, to a limited extent, for some schemes. However, I don’t actually hear much about it. I certainly don’t hear about it being communicated to the applicants in a clear and coherent manner.
We need to do this more. We need to guard against hearsay, folklore, and magical thinking. We need to be clear when our advice is based on experience, and when it is based on data. We need to base our advice on data where we can, and we need to make that advice (and the underlying data) available to our applicants.
I think that granting bodies could do this, too. They have all the data for all the applications. They probably can’t say with any certainty what will succeed (there is always that element of chance), but they could say with some certainty what will fail. If they can show a clear pattern of failure, then that should discourage people who fit that pattern from applying. Of course, people will try to game the system, but I’d prefer a system underpinned by data to a system underpinned by magic.
If only they could tell us what the formula is.
There is a formula, but it isn’t magic
It turns out that funding bodies have done this sort of analysis. They generally haven’t done it in the brute-force quantitative way that I’ve described. Instead, they’ve taken a qualitative approach. They’ve considered what they are seeking from applicants. They’ve combined that with all their experience in reviewing the applications and giving out the grants. And they’ve written it down as the Rules.
Every funding scheme has a set of Rules. They can be a five-page document with pictures, or a 57-page legislative instrument with appendices and accompanying FAQ (I’m looking at you, ARC Discovery rules). But there is always a set of rules.
The rules tell you what the granting scheme wants. They tell you how the process works. They are trying to demystify the black box. Most importantly, the Rules embody their experience and their data. They are evidence-based.
Work from the Rules.
Believe in the fairness of the system.
Do your best.
That’s the formula.