Behind closed doors: Observing the ARC selection meeting

Michael Crichton, who observed the ARC Discovery Project grants selection process. Photo: Michael Crichton

Every year, after researchers have spent many hundreds of hours compiling their grant applications, the Australian Research Council (ARC) convenes several panels of experts to decide who gets funded. The overall grant process is detailed on the ARC website but, with only 15–20 per cent of proposals funded and only around 65 per cent of requested budgets awarded (Discovery Project Selection Report for Funding Commencing in 2015), the details of the selection process are particularly important.

In August, I was given the opportunity to observe the ARC Discovery Project grants selection process and to provide feedback on it. This article details my observations. It is important to state that these are my perceptions and do not necessarily represent ARC policy.

How does the selection process work?

The selection meeting takes place in mid-August in the ARC offices in Canberra, with a separate panel for each discipline area drawn from the College of Experts (CoE). These are experts in their fields with substantial research backgrounds who sit on the selection committee for up to three years each.

At this stage, the grant applications have already been through a long journey. First, each application is assigned two CoE members, defined as Carriage 1 and Carriage 2, who have responsibility for overseeing the reviews of that application. They send the application to experts in the field who score it from A to E on:

  1. investigator team
  2. project quality and innovation
  3. feasibility and benefit
  4. research environment.

Each reviewer also ranks the applications they receive (typically one to 10 per reviewer) in order of overall quality. At this point, the applicant is sent the reviews and asked for their rejoinders. Carriage 1 and 2 then review the applications themselves, scoring and ranking them. The carriages will have 50 to 110 applications each to review, and they will aim to identify those which they view as fundable (the top 20 per cent or so). From here, the Research Management System (RMS) takes the 700 or so applications for each discipline and uses all the reviewer and CoE scores and rankings to produce an overall ranking of every application in that field, from one to 700. This is the point at which I joined the CoE members to observe the process.
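
To make the aggregation step concrete, the sketch below shows one way that letter grades and per-reviewer rankings of this shape could be reduced to a single ordering. It is purely illustrative: the ARC does not publish the RMS ranking algorithm, so the grade mapping, the equal weighting and the sample data are my own assumptions.

    # Hypothetical sketch only: the ARC does not publish the RMS ranking
    # algorithm, so the grade mapping, weighting and data shapes below are
    # assumptions made purely for illustration.

    GRADE_POINTS = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

    def aggregate_score(grades, rankings):
        """Combine A-E grades with rankings given as (position, batch size),
        e.g. (2, 8) = ranked 2nd of the 8 applications a reviewer handled."""
        grade_part = sum(GRADE_POINTS[g] for g in grades) / (5 * len(grades))
        rank_part = sum(1 - (pos - 1) / size for pos, size in rankings) / len(rankings)
        return 0.5 * grade_part + 0.5 * rank_part  # assumed equal weighting

    applications = {
        "DP0001": {"grades": ["A", "A", "B", "A"], "rankings": [(1, 8), (2, 6)]},
        "DP0002": {"grades": ["B", "C", "B", "B"], "rankings": [(5, 8), (4, 6)]},
    }

    # Sort into an overall order; rank 1 = strongest application.
    ordered = sorted(applications,
                     key=lambda a: aggregate_score(**applications[a]),
                     reverse=True)
    for rank, app_id in enumerate(ordered, start=1):
        print(rank, app_id)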

Each committee sits in a room with around 17 CoE members, one ARC director, three administrative staff, and one person controlling entry into the meeting. The RMS system is used to run the process; each member has their laptop connected to the web interface. Seating is around tables in a U-shape, with a projector showing the application to be discussed (although each member can access application details, reviews, rejoinders or other documents on their own interface). One member chairs the panel, with a co-chair for times when the chair has a conflict of interest. Before the start of the day, any such conflicts are declared—for instance, where there is an application from a member’s own university or from their collaborators. When an application is to be discussed where a member has a conflict, that member is asked to leave the room before details are shown; their web interface blocks them from seeing the details and funding of those applications.

At the start of the Discovery Project session, members were reminded that they had around 160 applications to fund. Before starting, they were asked to flag for discussion any applications on which the external reviewers and the carriage members disagreed substantially.

The discussions started by considering the top-ranked application, an application close to the funding line, and an application that was well below the funding line, to set the benchmarks for funding.

The Carriage 1 member gave a brief (or sometimes not so brief) overview of each application, with reviews and rejoinders, and said whether they thought the application should be funded or not, with Carriage 2 then providing their input. Each member of the panel could add their comments, questions or suggestions. If the application was deemed fundable by both carriages, with no member objections, it was marked as fundable. If both carriages believed it did not warrant funding, then it was marked as not funded. When the carriages disagreed, the decision about whether to shortlist it went to a vote; this decided whether the application should be revisited at a later point. With all the members voting (on their laptops through the RMS system), only one ‘yes’ vote was required to shortlist an application—more ‘yes’ votes moved it further up the shortlist. If all members voted not to shortlist the application, it was not funded.
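
Summarised in code, that decision flow looks something like the minimal Python sketch below. The function name and inputs are my own invention for illustration; the real voting happens through the RMS interface described above.

    # Minimal sketch of the triage logic described above; names and data
    # shapes are invented for illustration, not taken from the RMS system.

    def triage(carriage1_funds, carriage2_funds, panel_yes_votes):
        """Return the outcome for one application at the discussion stage."""
        if carriage1_funds and carriage2_funds:
            return "fundable"
        if not carriage1_funds and not carriage2_funds:
            return "not funded"
        # The carriages disagree, so the panel votes: a single 'yes' is
        # enough to shortlist, and more 'yes' votes lift the application
        # further up the shortlist for revisiting later.
        if panel_yes_votes == 0:
            return "not funded"
        return f"shortlisted ({panel_yes_votes} yes votes)"

    print(triage(True, True, 0))    # fundable
    print(triage(True, False, 5))   # shortlisted (5 yes votes)
    print(triage(False, True, 0))   # not funded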

Where highly ranked applications, well within the funding range, were not funded, a reason had to be given. For applications nearer the uncertainty band (ranked outside roughly the top 130 of the 700), all members had to vote on whether each was funded or not, even when both carriages agreed that it was fundable.

After the top-ranked applications had been discussed, the flagged applications were considered. These were the applications on which the external reviewers disagreed with the carriage members; in these cases the rejoinders and the reviewers’ comments became important in resolving the differences in ranking.

Once funding for the higher-ranked applications had been agreed on, the shortlisted applications were reviewed. For an application to be deemed fundable, 50 per cent of members had to vote ‘yes’. At this point, five additional applications were selected as backups; this ensured that if some applications were later deemed conflicted or double-dipping, or if they were withdrawn, additional applications could be awarded.
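
The shortlist round therefore reduces to a simple threshold. A tiny sketch of that final check, again with invented names:

    # Sketch of the shortlist resolution rule described above: at least
    # half of the voting members must say 'yes' for the application to be
    # funded. Function and variable names are invented for illustration.

    def shortlist_outcome(yes_votes, voting_members):
        return "fund" if yes_votes >= voting_members / 2 else "not funded"

    print(shortlist_outcome(9, 17))   # fund (9 of 17 is over 50 per cent)
    print(shortlist_outcome(8, 17))   # not funded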

For each application, after funding was either approved or denied, the budget was discussed. At this stage, unnecessary funding requests were removed, while ensuring that the research could still be completed. The funding rate appeared to be similar to the previous year’s, with around 65 per cent of project costs funded.

What counts in the selection process?

The following points discuss my observations on the process (remembering that these are not necessarily ARC policy).

Grant allocation

  • Generally there were only about 2–3 minutes available to discuss each of the top applications, with more time needed for those lower on the list.
  • Reviewer expertise was valued; the opinion of a very experienced reviewer held more weight than that of one looking at only a single application. Reviewers giving short reviews were not looked on favourably.
  • The CoE considered that if a contributor was going to be fundamental to the project they should be included as a chief investigator (CI), rather than as a named postdoc.
  • Junior CIs with senior mentorship were viewed positively. Where applications were submitted by sole experienced investigators, questions were raised as to why they weren’t including junior CIs on the project whom they could mentor and assist with career-building.
  • Applicants’ h-index and number of publications were discussed, with Nature, Science and PNAS papers holding the most weight.
  • Some of the comments from members suggested that original research papers held more weight than opinion papers or large numbers of reviews.
  • As this was the Discovery Project round, applications for method and equipment development were not favourably considered if they didn’t have a clear discovery purpose following the development.
  • Rejoinders were only really discussed for lower-ranked applications, but they did help to get some applications over the line.
  • Part-way down the rankings, it was recognised that more applications had been awarded to male CIs than female CIs. Following this, in deciding between two equally ranked applications, preference was given to the female CI to ensure gender equity.
  • One member commented that it often took around eight years post-PhD for a researcher to attain a sufficient track record to be an independent CI.

Budget allocations

  • There had to be very strong justifications for all budget items, otherwise they would be cut. Just because something is allowed by the ARC doesn’t mean that it will be looked on favourably by the experts unless it is strongly justified. Overall, due to the level of cuts made, the amount allocated per grant seemed to average around $100,000–$200,000 per year.
  • Generally, the funding awarded appeared suited to supporting one postdoc, some maintenance and a little travel. Requesting much more than this required a very clear justification. If a level B postdoc was requested, there had to be a very good reason why a level A appointment wouldn’t be suitable. The view was that a postdoc must bring existing skills that are vital to the project—otherwise a PhD student could be trained.
  • RHD scholarships were often cut; requests for four-year stipends were viewed unfavourably (they should be three to three-and-a-half years only).
  • Teaching relief required strong justification.
  • Travel was only supported if deemed necessary for project completion, with timing being very important; travel needed to be an asset to the project.
  • If a senior postdoc was included in the budget, the panel questioned why they were not a CI.
  • The expectation, especially for highly regarded research environments, was that researchers would have access to sufficient equipment; requests for equipment that should already be available were cut. For very large equipment purchases, the expectation was that the university would contribute. Asking for funding to set up a laboratory was not viewed favourably—it was expected that labs would already be set up by the host institution.
  • The panel could not cut a five-year grant to three years, but they could reduce the award value to $30,000 in the last two years to indicate that they didn’t see the need for a five-year project.
  • Value for money was important. If the project had a large budget it had to be a stellar project with high value to the public. If the project was expensive but near the funding cut-off, it might not make it.

Overall observations on the selection meeting

The process I observed in the allocation of grants was highly transparent, open and fair. The willingness to have me observe spoke to the fact that Aidan Byrne (the ARC’s CEO) and the rest of the ARC team aim for a process that supports the best researchers with public money. The amount of work that goes into reviewing the applications and deciding on their funding potential is staggering. The College of Experts clearly works hard! There were very few egos in the process, and the members all appeared to take their role very seriously. The last grants were treated with the same level of rigour as the first. Without a doubt, the ARC CoE would like to fund more grants but, with a limited pot of money, they must balance the number of grants funded with the amount each grant is allocated. They do this with a great deal of integrity and with an openness that I, as a taxpayer, find comforting.

The EMCR Forum would like to thank the ARC for allowing us to observe their process and for their hospitality during this time.

Dr Michael Crichton
@michaelc1983
EMCR Forum Executive
Australian Academy of Science
@EMCRForum


EMCR Pathways, Issue 5, October 2015
