How I assess a funding application: Part 1 – track record

Now that our Discovery applications have been fed into the gaping maw of the Australian Research Council (ARC) competition, I thought I’d take my 2-part series of posts about assessing funding applications out for a spin. Part 1 focuses on track-records and the research team. Part 2 will address an application’s overall feasibility.


“It’s all a lottery!”

“You need to game the system or you haven’t got a hope.”

“Only those who’ve had them before will get one.”

Sea of Wisdom temple (Beijing) by Jonathan O’Donnell (on Flickr)

The urban myths circulating about grant rounds are as tenacious as those about waking up in ice-filled bathtubs and realising you’ve had your kidney harvested.

No doubt, spending so much time and investing intellectual resources in a major application makes the lack of success bite that much deeper.

Having been around the traps as a supplicant, awardee, assessor, and now advisor, I’d have to say that most funding assessment processes do end up giving money to the strongest teams and most compelling projects. This isn’t to say that the processes or choices are always perfect, or that rogue results (in good and bad ways) don’t pop up. There’s always that story of the ARC Discovery that was written over a weekend and got up.

This post is about how I assess funding applications and, in particular, the track-record components. Over my academic career, I’ve:

  • Been part of judging panels for niche academic association committees that gave out travel and small grants,
  • Been invited onto a university’s fellowship selection panel, and
  • Assessed for a bunch of national funding bodies (in Australia, Canada, and Hong Kong).

I’m not claiming that my process is necessarily best practice, but I thought it might be useful for you to gain insight into one assessor’s valuations (and, it has to be said, biases).

Each funding scheme’s selection criteria may differ in detail but the two basic elements of track-record and project idea are always there.

The role of the assessor, for me, is in gauging the quality and feasibility of the overall proposition. The fact that the ARC now gives ‘feasibility’ an overt weighting in the Discovery scheme gives rise to interesting conversation (but that’s for another post!).

What do I look for when assessing the track-records of researchers on grant applications?

It’s naff to say this but I do look for that mythical ‘excellence’, which means different things to different people. It’s a factor that also differs from discipline to discipline. To me, excellence means a fabulous publication record (good quality productivity); strong and real networks of collaborators and community/industry links (if relevant); and research that demonstrates this person or team is doing good stuff (whether it’s creating momentum for a field, showing initiative, whatever – basically, making things happen).

The competitiveness of many funding rounds means that excellent track-records become an expectation. As Mark Bisby, former VP Research for the Canadian Institutes of Health Research, has said: “It’s not a test, it’s a contest.” (Thanks, Jo VanEvery, for the quote and its provenance!)

What else is there that will set this person or team apart? The X-factor elements are what make an application stand out.

Some of the things that make me take notice are:

  • Excellent, existing relationships with relevant parties. For example, it’s one thing to say that you “have great networks in the social justice and community welfare sector”, and quite another to present evidence that you’ve already collaborated with the Sacred Heart Mission on other (smaller) grants or projects, regularly included them in your outreach activities and consultations, and already have them (and their personnel/resources, formally or informally) on board for this current proposal. Basically, have you worked with community/industry partners, grown the relationship, and kept them happy?
  • Getting your research findings out there. These kinds of things can sometimes be included under ‘impact’ factors, which are increasingly becoming an expected element in funding applications. I’m not talking here only of going back to community/industry to share findings (though that is part of it); it’s about how you’ve been innovative or diligent in ensuring that the funding you had before was extremely well spent because the research has had longevity and resonance long after it took place. Where did your previous research go? More to the point, where did you take it? Did it become a bigger concern, scaled up and involving more collaborators, other industries or countries? Did a policy paper or inquiry develop because of the work you did on an issue? Who was it relevant to, and did it gain traction? If it was a conceptual endeavour, did anyone pick it up and use it? Yes, this could be as simple as a glorious citation count, but it’s more about shifting a field (or a part of a field). Has your previous work made a difference to the area? It’s not only that you publish like a fiend; it’s that the work forges new intellectual paths.
  • Very shiny recognition by others. This factor helps more in assessing an individual researcher’s ‘star’ quality, but it can also demonstrate the across-the-board international regard for the proposed team. As much as we might kvetch about the contested terrain of what constitutes prestige in our fields (and academia in general), the elements are there. Particularly for gauging international profile and regard, I look for whether the researcher has been an invited keynote or speaker and what kind of event they were fronting. If it’s the peak annual conference of a humongous international association, it probably counts for a bit more than a seminar in front of a home-town crowd (even if it’s not their home institution). I also look for visiting fellowship or scholar appointments. These can sometimes be more about networking and luck than recognition per se. What I mean is that if you happen to have made a connection or a buddy at Oxford University, they can often agitate to bring you in (e.g. put you forward as a scholar to be invited, line up units within the institution who can chip in for your expenses and improve your level of institutional engagement via more seminars/workshops). To me, seeing visiting fellowships or invited speaker roles indicates that you’re good enough to score these because of your profile and moving-and-shaking status, AND that people you know are willing to go in to bat for funding for you.

On a grant application, the research team isn’t the only thing assessed, but it’s usually a significant portion of the assessment weighting and, if the personnel don’t convince me of the quality outcomes they can produce from the work, the application won’t score well.

As a last (possibly long) point, there are expectations that I have of the lead investigator on grants, and these expectations hold for sole investigators, too:

  • What are the lead’s project planning skills like?
  • Where’s their experience in leading research, or teams, or projects in general?
  • Do they profile well as good people managers (e.g. institutional and people networks they maintain, collaborators with whom they stay close, mentoring relationships and initiatives)?

When I’m looking at these aspects, I’m not expecting a list of successfully completed ARCs or other major grants (though, it must be said, this never hurts). These kinds of project leadership skills can be demonstrated through other channels: editing journals and books, convening conferences or symposiums, chairing research-oriented committees, or taking on executive roles and responsibilities in professional associations. To me, investing your time in growing your field and giving back to the profession through your ability to make things happen? That’s leadership.

Short version: It’s demonstrating that you’re willing to take responsibility, that you have initiative, and that your previous ideas and activities had successful outcomes.

For example, in the spirit of demonstrating (potential) leadership, don’t just mention that you guest-edited a special issue of a journal. Talk about how that special issue was the first time Australian academics had collaborated with that particular peak US publication, and present evidence about how the project pushed your field ahead, put it on the world stage, or changed the way work is now done in the area.

All grant applications are a leap of faith, and major ones especially so. You have limited control over who your assessors will be and how your application will be ushered through the expert committees.

As someone who wants to be in the game, though, keep your track-record as competitive as possible (and this doesn’t mean only ‘excellent’) and be assured that just about everyone – academic superstars included – gets rejected for funding.

Don’t read it as failure (instead, work out what to do next).

Think of it as part of the broader grant-application cycle that spurs you to develop the application further, so you can land the funding next time!

>> PART 2 – Feasibility

4 comments

  1. This is one of the most comprehensive explanations I have seen and, as an early career researcher, I really appreciate your perspective. Thank you, just love this post and it will help me greatly not only in building my track record but also in framing what I have done.


    • Thanks, Narelle. Really glad it’s helpful to you. A lot of the work in grant applications is working out how best to present your work and track-record. It’s a fine art to put a good, effective spin on things. There’s nothing worse than assessing a grant where the applicant(s) are talking themselves up to an embarrassing level. It’s always a good idea to produce evidence of achievements, not just claims.

