Every year, after researchers have spent many hundreds of hours compiling their grant applications, the Australian Research Council (ARC) convenes several panels of experts to decide who gets funded. The overall grant process is detailed on the ARC website but, with only 15–20 per cent of proposals being funded, and the return rate on budget funding sitting at around 65 per cent (Discovery Project Selection Report for Funding Commencing in 2015), the details of the selection process are particularly important.
In August, I was given the opportunity to observe the ARC Discovery Project grants selection process and provide feedback on the process. This article details my observations. It is important to state that these are my perceptions and do not necessarily represent ARC policy.
The selection meeting takes place in mid-August at the ARC offices in Canberra, with a separate panel for each discipline area made up of College of Experts (CoE) members. These are experts in their fields with substantial research backgrounds, each serving on the selection committee for up to three years.
At this stage, the grant applications have already been through a long journey. First, each application is assigned two CoE members, designated Carriage 1 and Carriage 2, who have responsibility for overseeing the reviews of that application. They send the application to experts in the field, who score it from A to E against the selection criteria.
Each reviewer also ranks each application they receive (typically one to 10 applications per reviewer) in order of overall quality. At this point, the applicant is sent the reviews and asked for their rejoinders. Carriage 1 and 2 then review the applications themselves, scoring and ranking them. The carriages will have 50 to 110 applications each to review, and they aim to identify those they view as fundable (the top 20 per cent or so). From here, the Research Management System (RMS) takes the 700 or so applications for each discipline and uses all the reviewer and CoE scores and rankings to produce an overall ranking for every application in that field, from one to 700. This is the point at which I joined the CoE members to observe the process.
Each committee sits in a room with around 17 CoE members, one ARC director, three administrative staff, and one person controlling entry into the meeting. The RMS is used to run the process; each member has their laptop connected to the web interface. Seating is around tables in a U-shape, with a projector showing the application to be discussed (although each member can access application details, reviews, rejoinders or other documents on their own interface). One member chairs the panel, with a co-chair for times when the chair has a conflict of interest. Before the start of the day, any such conflicts are declared: for instance, where an application comes from a member's own university or from their collaborators. When an application to be discussed involves a conflict, the member is asked to leave the room before the details are shown, and their web interface blocks them from seeing the details and funding of those applications.
At the start of the Discovery Project session, members were reminded that they needed to go through around 160 applications to award funding. Prior to starting, they were asked to flag for discussion any applications about which the external reviewers disagreed substantially with the carriage members.
The discussions started by considering the top-ranked application, an application close to the funding line, and an application that was well below the funding line, to set the benchmarks for funding.
The Carriage 1 member gave a brief (or sometimes not so brief) overview of each application, with reviews and rejoinders, and said whether they thought the application should be funded or not, with Carriage 2 then providing their input. Each member of the panel could add their comments, questions or suggestions. If the application was deemed fundable by both carriages, with no member objections, it was marked as fundable. If both carriages believed it did not warrant funding, then it was marked as not funded. When the carriages disagreed, the decision about whether to shortlist it went to a vote; this decided whether the application should be revisited at a later point. With all the members voting (on their laptops through the RMS system), only one ‘yes’ vote was required to shortlist an application—more ‘yes’ votes moved it further up the shortlist. If all members voted not to shortlist the application, it was not funded.
Where highly ranked applications, well within the funding range, were not funded, a reason had to be given. For applications nearer the uncertainty band (ranked higher than around 130 of 700), all members had to vote as to whether it was funded or not, even when both carriages agreed that it was fundable.
After the top-ranked applications had been discussed, the flagged applications were discussed. These were the applications where the external reviewers disagreed with the carriage members; in this case the rejoinders and the comments of the reviewers became important in determining differences in ranking.
Once funding for the higher-ranked applications had been agreed on, shortlisted applications were reviewed. For an application to be deemed fundable, 50 per cent of members had to vote ‘yes’. At this point, five additional applications were selected as backups; this ensured that if some applications were later found to be conflicted or double-dipping, or were withdrawn, additional applications could be funded.
For each application, after funding was either approved or denied, the budget was discussed. At this stage, unnecessary funding requests were removed, while ensuring that the research could still be completed. The funding range appeared to be similar to last year, with around 65 per cent of project costs funded.
The following points discuss my observations on the process (remembering that these are not necessarily ARC policy).
The process I observed in the allocation of grants was highly transparent, open and fair. The willingness to have me observe spoke to the fact that Aidan Byrne (ARC’s CEO) and the rest of the ARC team aim for a process that supports the best researchers with public money. The amount of work that goes into reviewing the applications and deciding on their funding potential is staggering. The College of Experts clearly works hard! There are very few egos in the process, and the members all appear to take their role very seriously. The last grants were treated with the same level of rigour as the first. Without a doubt, the ARC CoE would like to fund more grants but, with a limited pot of money, they must balance the number of grants funded against the amount each grant is allocated. They do this with a great deal of integrity and with an openness that, as a taxpayer, is comforting.
The EMCR Forum would like to thank the ARC for allowing us to observe their process and for their hospitality during this time.
© 2018 Australian Academy of Science