Please use this form to submit your study for inclusion in our database. It will be checked by a member of the Innovation Growth Lab team, who may contact you to ask for more information.

Your email address *

Your name *

Title *
The name of the study

Short summary
This paper investigates the role of information sharing among experts as the driver of evaluation decisions.
A brief description of the project's goals and its current state

Abstract
The evaluation of novel projects lies at the heart of scientific and technological innovation, and yet the literature suggests that this process is subject to inconsistency and potential biases. This paper investigates the role of information sharing among experts as the driver of evaluation decisions. We designed and executed two field experiments in two separate grant funding opportunities at a leading research university to explore evaluators’ receptivity to assessments from other evaluators. Collectively, our experiments mobilized 369 evaluators from seven universities to evaluate 97 projects, resulting in 760 proposal-evaluation pairs and over $300,000 in awards. We exogenously varied two key aspects of information sharing: 1) the intellectual distance between each focal evaluator and the other evaluators, and 2) the relative valence (positive or negative) of others’ scores, to determine how these treatments affect the focal evaluator’s propensity to change their initial score. Although the intellectual similarity treatment did not yield a measurable effect, we found causal evidence of negativity bias: evaluators were more likely to lower their scores after seeing critical scores than to raise them after seeing better ones. Qualitative coding and topic modeling of the evaluators’ justifications for score changes reveal that exposure to low scores prompted greater attention to uncovering weaknesses, whereas exposure to neutral or high scores was associated with attention to strengths, along with greater emphasis on non-evaluation criteria, such as confidence in one’s judgment. Overall, information sharing among expert evaluators can lead to more conservative allocation decisions that favor protecting against failure over maximizing success.
The full abstract of the study, if available

Links
https://papers.ssrn.com/sol3/papers.cfm
Links to any published papers and related discussions

Authors *

Affiliations
Academic and other institutes that the authors of the study are members of

Delivery partner
Organisations involved in delivering the trial, if appropriate

Year / Month / Day

Journal
Journal publishing the study, if available

Publication stage *
Working Paper / Published / Ongoing Research / Forthcoming / Discussion Paper

Research theme *
Entrepreneurship / Innovation / Business Growth

Country
Country or countries where this study took place.

Topics
What sort of topics does the study cover?

Sample attributes

Hypotheses / research question

Sample
Trial population and sample selection

Number of treatment groups

Size of treatment groups

Size of control group

Unit of analysis

Clustered?
Yes / No

Cluster details

Trial attributes

Treatment description

Rounds of data collection

Baseline data collection and method
Data collection method and data collected

Evaluation

Outcome variables

Results

Intervention costs

Cost benefit ratio

Reference
Citation for use in academic references