Posted in: Cognitive Computing, Education

Facilitating peer review with cognitive computing

Applying for and allocating funding for scientific research takes up a significant amount of time and energy, both from scientists and from the national foundations that evaluate the science itself. Almost 10 percent of the NSF budget goes to the review and management of awards, and the figure is nearly 20 percent for the European Commission. The question of how to distribute funding among scientists is not a new one, but with growing numbers of scientists and proposals competing for a shrinking pool of grant money, it is attracting renewed attention. There is ample opportunity here for cognitive computing to help streamline and secure the process. Our research into these reviewing mechanisms may enable more reviewers to participate, decreasing centralized overhead.

(Image: AJ Cann/Flickr, CC BY-SA)

Recently, Science Magazine spotlighted a novel mechanism for distributing scientific funding, proposed by Johan Bollen, David Crandall, Damion Junk, Ying Ding, and Katy Börner.

The mechanism proposed by Bollen et al. is a type of crowdsourcing: funding is allocated in a peer-to-peer manner in order to capture the wisdom of the crowd. Under this proposal, in the first year of implementation, all scientists employed at research universities in the US receive some base level of funding (say $100,000). Each year thereafter, every scientist must donate 50 percent of the funds in their account to other scientists of their choosing (filling up those scientists’ accounts for the following year) and may spend the remaining 50 percent on their own research program.
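
To make the flow of money concrete, here is a minimal sketch of one such funding cycle in Python. The random choice of beneficiaries is purely an illustrative assumption; in the actual proposal each scientist decides freely whom to fund, and that choice is exactly where the wisdom of the crowd is meant to enter.

```python
import random

BASE_FUND = 100_000    # base funding every scientist receives in year one
DONATE_FRACTION = 0.5  # fraction that must be passed on to other scientists

def run_cycle(accounts):
    """One yearly cycle: each scientist spends half of their account on
    their own research and donates the other half, which fills up the
    recipients' accounts for the following year."""
    next_accounts = {name: 0.0 for name in accounts}
    for donor, balance in accounts.items():
        to_donate = balance * DONATE_FRACTION
        # Stand-in for a real funding decision: split the donation evenly
        # among a few randomly chosen peers. In the actual proposal each
        # scientist chooses whom to fund.
        peers = [p for p in accounts if p != donor]
        beneficiaries = random.sample(peers, k=min(3, len(peers)))
        for b in beneficiaries:
            next_accounts[b] += to_donate / len(beneficiaries)
    return next_accounts

accounts = {f"scientist_{i}": BASE_FUND for i in range(10)}
for year in range(3):
    accounts = run_cycle(accounts)
print(accounts)  # funding concentrates on frequently chosen scientists
```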

This attempts to address one of the key questions of reviewing mechanisms: how do institutions coordinate the necessary peer review without overloading a small number of experts, all while maintaining reviewing quality? If we could distribute the review load among all the proposers, peer review could scale, effectively democratizing science and bringing in more voices.

Recently, the National Science Foundation piloted a program in its Astronomy Program that moves away from traditional panel review and towards a more crowdsourced system. Looking closer, the problem that both Bollen et al. and the NSF are attempting to solve is much more general than funding alone, and much more intricate. It is known in the economics and computer science literature as the Peer Selection Problem.

The Peer Selection Problem occurs when the set of voters (e.g., reviewers, rankers) and the set of candidates (e.g., proposals, assignments, movies) coincide. In this setting, given the evaluations of our peers, we want to select a small set of winners or best alternatives, as in peer review of scientific funding proposals or a filmmakers’ guild choosing best-picture winners. Peer selection has become an important topic in recent years for numerous applications: academic peer review, including NSF grant reviews; crowdsourcing corporate or internal brainstorming sessions; and peer grading in MOOCs. However, an obvious incentive problem arises in this setting: participants may lie about their valuations of other agents in order to increase their own chances of being selected.
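
To see how that incentive problem arises, consider the naive mechanism that selects the top k agents by average peer score. The following toy example, our own illustration rather than any published mechanism, shows an agent entering the winning set simply by under-reporting its rivals:

```python
# Toy illustration of the incentive problem in naive peer selection:
# select the top-k agents by average peer score, and watch agent "c"
# climb into the winning set by dishonestly zeroing out its rivals.
from statistics import mean

def select_top_k(scores, k):
    """scores[i][j] = i's evaluation of j; agents never score themselves."""
    agents = scores.keys()
    avg = {j: mean(scores[i][j] for i in agents if i != j) for j in agents}
    return sorted(agents, key=avg.get, reverse=True)[:k]

honest = {
    "a": {"b": 9, "c": 7},
    "b": {"a": 8, "c": 7},
    "c": {"a": 8, "b": 9},
}
print(select_top_k(honest, k=2))  # ['b', 'a'] -- c loses

dishonest = {**honest, "c": {"a": 0, "b": 0}}  # c lies about both rivals
print(select_top_k(dishonest, k=2))  # ['c', 'b'] -- c now wins
```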

We recently proposed a new method, Dollar Partition, at the AAAI Conference on Artificial Intelligence. Our algorithm is inspired by the “sharing a dollar” mechanisms proposed by de Clippel, Moulin, and Tideman (2008). It divides peers randomly into groups, and each peer is asked to review peers outside their own group. Using these reports, we dynamically apportion how many proposals may be selected from each group, and then select the top-rated proposals within each group as the winners. Our algorithm is strategyproof, outperforms the other mechanisms from the literature in our experiments, and is available on GitHub!
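
As a rough illustration, here is how the pieces of the mechanism fit together. This is a simplified sketch, not the implementation from our paper: in particular, the naive quota rounding below stands in for the careful randomized apportionment we actually use, and the `review` function, group count, and agent names are assumptions made for the example.

```python
import random
from collections import defaultdict

def dollar_partition(agents, review, k, num_groups=4):
    """Select k winners from `agents`; review(i, j) returns i's score for j."""
    # 1. Randomly partition the agents into groups.
    shuffled = random.sample(agents, len(agents))
    groups = [shuffled[g::num_groups] for g in range(num_groups)]
    group_of = {a: g for g, members in enumerate(groups) for a in members}

    # 2. Each agent reviews only agents outside its own group, and its
    #    scores are normalized to sum to 1: each reviewer shares one dollar.
    received = defaultdict(float)
    for i in agents:
        outside = [j for j in agents if group_of[j] != group_of[i]]
        raw = {j: review(i, j) for j in outside}
        total = sum(raw.values()) or 1.0
        for j, s in raw.items():
            received[j] += s / total

    # 3. Apportion the k winning slots to the groups in proportion to the
    #    dollars their members received. Naive rounding here; the paper
    #    uses a randomized apportionment that sums to exactly k.
    shares = [sum(received[a] for a in members) for members in groups]
    quotas = [round(k * s / sum(shares)) for s in shares]

    # 4. Select the top-scored members of each group, up to its quota.
    winners = []
    for members, quota in zip(groups, quotas):
        ranked = sorted(members, key=lambda a: received[a], reverse=True)
        winners.extend(ranked[:quota])
    return winners

# Example: 20 agents with arbitrary (here random) scores, selecting 5 winners.
agents = list(range(20))
print(dollar_partition(agents, review=lambda i, j: random.random(), k=5))
```

The intuition behind strategyproofness: each reviewer hands out exactly one dollar, and only to other groups, so no report an agent makes can change its own group’s quota or its own rank within its group.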

The idea proposed by Bollen et al. is similar to the “sharing a dollar” mechanism, and therefore it shares several properties with our algorithm. However, our algorithm may help alleviate some of its problems. The division into groups can help address collusion rings, e.g., by placing co-authors in the same group (though the graph of acquaintances may well be connected) or by forming groups around research interests. And the issue of projects receiving too little money is addressed by fully funding the top-graded proposals of each group, instead of allocating the money dollar by dollar.

A perhaps more pressing problem with the algorithm proposed by Bollen et al., one that goes unmentioned in the article, is discoverability: how will young researchers, whom few have heard of, get a chance at funding and projects? Will well-known researchers attract all the money? In any case, we think there is ample opportunity to revise the system of peer review, both for scientific funding and even for the brainstorming sessions that happen in everyday meetings. New algorithms and techniques are being developed in this space. It is important that we take into account all aspects of the decision-making process, and that selection rules not be chosen in an ad-hoc manner. Careful study can and should lead to superior mechanisms.

Strategyproof peer selection: mechanisms, analyses, and experiments
Published in: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI’16), pages 390–396

About the authors

Haris Aziz is a senior research scientist at Data61, CSIRO, and a conjoint senior lecturer at the University of New South Wales, Sydney.
Omer Lev is a post-doctoral fellow at the Department of Computer Science of the University of Toronto.
Nicholas Mattei is a Research Staff Member in the Cognitive Computing Group at the IBM T.J. Watson Research Center.
