Facilitating peer review with cognitive computing


Applying for or allocating funding for scientific research takes up a significant amount of time and energy, both from the scientists and from the national foundations that evaluate the science itself. Almost 10 percent of the NSF budget is allocated to the review and management of awards, and the figure is nearly 20 percent for the European Commission. The question of how to distribute funding among scientists is not a new one, but with increasing numbers of scientists and proposals competing for a shrinking pool of grant money, it is attracting renewed attention. There is ample opportunity here for cognitive computing to help streamline and secure the process. Our research into these reviewing mechanisms may enable more reviewers to participate, decreasing centralized overhead.

(Image: AJ Cann/Flickr, CC BY-SA)


Recently, Science Magazine spotlighted a novel mechanism for distributing scientific funding, proposed by Johan Bollen, David Crandall, Damion Junk, Ying Ding, and Katy Börner.

The mechanism proposed by Bollen et al. is a type of crowdsourcing: funding is allocated in a peer-to-peer manner in order to capture the wisdom of the crowd. Under this proposal, in the first year of implementation, every scientist employed at a research university in the US receives some base level of funding (say, $100,000). In each subsequent year, scientists spend 50 percent of their account on their own research program and are required to donate the remaining 50 percent to other scientists, filling up those scientists’ accounts for the following year.
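To make the flow of money concrete, here is a toy simulation of such a peer-to-peer funding scheme. This is a sketch of our reading of the proposal, not the authors’ model: the donation targets are chosen uniformly at random purely for illustration, whereas real scientists would donate based on scientific merit and could split their donation among many peers.

```python
import random

def simulate_funding(n_scientists=5, base_fund=100_000, n_years=3, seed=0):
    """Toy simulation of a peer-to-peer funding scheme in the spirit of
    Bollen et al.: each year, every scientist spends half of their account
    on their own research and donates the other half to a peer.
    """
    rng = random.Random(seed)
    accounts = [float(base_fund)] * n_scientists
    for _ in range(n_years):
        donations = [0.0] * n_scientists
        for i, balance in enumerate(accounts):
            give = balance * 0.5  # 50% must be passed on to peers
            # Pick a random peer other than yourself (illustrative only;
            # in the real proposal this choice reflects scientific merit).
            j = rng.choice([k for k in range(n_scientists) if k != i])
            donations[j] += give
        # Next year's account is what was received; the kept 50% is spent.
        accounts = donations
    return accounts
```

One property the simulation makes visible: since half the money is spent each year, the total circulating in accounts halves annually unless a base amount is re-injected.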

This attempts to address one of the key questions of reviewing mechanisms: how do institutions coordinate the necessary peer review without overloading a small number of experts, all while maintaining reviewing quality? If we could distribute the review load amongst all the proposers, peer review could scale – effectively democratizing science and bringing in more voices.

Recently, the National Science Foundation piloted a program moving away from traditional panel review and towards a more crowdsourced system in its Astronomy Program. Looking closer, the common problem that both Bollen et al. and the NSF are attempting to solve is much more general than just funding, and much more intricate. It is a problem called the Peer Selection Problem in the economics and computer science literature.

The Peer Selection Problem occurs when the set of voters (e.g., reviewers, rankers) and candidates (e.g., proposals, assignments, movies) coincide. In this setting, given the evaluations of our peers, we want to select a small set of winners or best alternatives – as in peer review of scientific funding proposals, or a filmmakers’ guild selecting best-picture winners. Peer selection has become an important topic in recent years for numerous applications: academic peer review, including NSF grant reviews; crowdsourcing corporate or internal brainstorming sessions; and performing peer review for MOOCs. However, an obvious incentive problem arises in this setting: participants may lie about their valuation of other agents in order to increase their own chances of being selected.
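The incentive problem is easy to demonstrate with the most naive rule one might try: select the k agents with the highest total peer score. A small, hypothetical example shows how an agent can manipulate this rule by downgrading a rival:

```python
def select_top_k(scores, k):
    """Select the k agents with the highest total peer score.

    scores[i][j] is the grade agent i gives agent j (self-grades ignored).
    This naive rule is NOT strategyproof, as the example below shows.
    """
    n = len(scores)
    totals = [sum(scores[i][j] for i in range(n) if i != j) for j in range(n)]
    # Break ties by index so the outcome is deterministic.
    return sorted(range(n), key=lambda j: (-totals[j], j))[:k]

# Truthful reports: agent 0 honestly rates agent 1's proposal highly,
# so agent 1 wins the single slot (totals: 11, 13, 5).
truthful = [
    [0, 9, 2],  # agent 0's grades for agents 0, 1, 2
    [5, 0, 3],
    [6, 4, 0],
]

# Manipulation: agent 0 tanks agent 1's grade and now wins the slot
# itself (totals: 11, 4, 5) -- lying strictly improved its outcome.
manipulated = [
    [0, 0, 2],
    [5, 0, 3],
    [6, 4, 0],
]
```

Strategyproof mechanisms are designed precisely so that no agent’s own chance of selection can ever depend on the grades it hands out.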

We recently proposed a new method, dollar partition, at the AAAI Conference on Artificial Intelligence. Our algorithm is inspired by the ‘sharing a dollar’ mechanisms proposed by de Clippel, Moulin, and Tideman (2008). It divides peers randomly into groups and asks each peer to provide reviews of peers outside their own group. Using these reports, we dynamically apportion how many proposals can be selected from each group, based on the reviews. We then select the top set of proposals from each group as the winners. Our algorithm is strategyproof, has better empirical performance than the other mechanisms in the literature, and is available on GitHub!
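The steps above can be sketched in a few lines of Python. This is a simplified illustration, not the paper’s exact mechanism: in particular, the paper apportions group quotas with a carefully designed randomized rounding scheme, while the sketch below uses plain largest-remainder rounding for brevity, and the function and variable names are our own.

```python
def dollar_partition(grades, groups, k):
    """Simplified sketch of the Dollar Partition idea.

    grades[i][j]: grade agent i gives agent j (only cross-group grades used).
    groups: list of lists partitioning the agents into review groups.
    k: total number of winners to select.
    """
    n = len(grades)
    group_of = {i: g for g, members in enumerate(groups) for i in members}

    # Step 1: normalize each reviewer's grades for agents OUTSIDE their
    # group to sum to 1 -- every reviewer hands out exactly one "dollar",
    # so no reviewer can inflate their influence.
    received = [0.0] * n
    for i in range(n):
        outside = [j for j in range(n) if group_of[j] != group_of[i]]
        total = sum(grades[i][j] for j in outside)
        if total > 0:
            for j in outside:
                received[j] += grades[i][j] / total

    # Step 2: apportion the k slots to groups in proportion to the dollars
    # their members collected (largest-remainder rounding for simplicity;
    # the paper uses a more careful randomized rounding).
    shares = [sum(received[j] for j in members) for members in groups]
    raw = [k * s / sum(shares) for s in shares]
    quotas = [int(r) for r in raw]
    leftovers = sorted(range(len(groups)),
                       key=lambda g: raw[g] - quotas[g], reverse=True)
    for g in leftovers[:k - sum(quotas)]:
        quotas[g] += 1

    # Step 3: within each group, the quota goes to the top-graded members.
    winners = []
    for g, members in enumerate(groups):
        ranked = sorted(members, key=lambda j: -received[j])
        winners.extend(ranked[:quotas[g]])
    return sorted(winners)
```

Note why this is strategyproof in spirit: an agent’s grades only affect how slots are split among *other* groups and who wins within them, never the agent’s own standing inside its own group.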

The idea proposed by Bollen et al. is similar to the “sharing a dollar” mechanism, and therefore it shares several properties with our algorithm. However, our algorithm may help alleviate some of its problems. The division into groups can help address collusion rings – for example, by placing co-authors in the same group (though the graph of acquaintances may well be connected) or by grouping along research interests. And the issue of individual projects not receiving enough money is solved by funding the top-graded proposals in each group, instead of allocating the money dollar by dollar.

A perhaps more pressing problem with the algorithm proposed by Bollen et al., one that goes unmentioned in the article, is discoverability: how will young researchers get a chance at funding and projects when few have heard of them? Will well-known researchers get all the money? In any case, we think there is ample opportunity to revise the system of peer review, both for scientific funding and even for the brainstorming sessions that happen during everyday meetings. New algorithms and techniques are being developed in this space. It is important that we take into account all aspects of the decision-making process and that selection rules not be chosen in an ad hoc manner. Careful study can and should lead to superior mechanisms.

Strategyproof peer selection: mechanisms, analyses, and experiments
Published in: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI’16), pages 390–396

About the authors

Haris Aziz is a Senior Research Scientist at Data61, CSIRO, and UNSW.
Omer Lev is a post-doctoral fellow at the Department of Computer Science of the University of Toronto.
Nicholas Mattei is a Research Staff Member at IBM TJ Watson Research Center.

