The evaluation of novel projects lies at the heart of scientific and technological innovation, yet the literature suggests that this process is subject to inconsistency and potential biases. This paper investigates the role of information sharing among experts as a driver of evaluation decisions. We designed and executed two field experiments in two separate grant-funding opportunities at a leading research university to explore evaluators’ receptivity to assessments from other evaluators. Collectively, our experiments mobilized 369 evaluators from seven universities to evaluate 97 projects, resulting in 760 proposal-evaluation pairs and over $300,000 in awards. We exogenously varied two key aspects of information sharing: 1) the intellectual distance between each focal evaluator and the other evaluators and 2) the relative valence (positive or negative) of the others’ scores, to determine how these treatments affect a focal evaluator’s propensity to change their initial score. Although the intellectual-similarity treatment did not yield a measurable effect, we found causal evidence of negativity bias: evaluators were more likely to lower their scores after seeing more critical scores than to raise them after seeing more favorable ones. Qualitative coding and topic modeling of the evaluators’ justifications for score changes reveal that exposure to low scores prompted greater attention to uncovering weaknesses, whereas exposure to neutral or high scores was associated with attention to strengths, along with greater emphasis on non-evaluation criteria, such as confidence in one’s own judgment. Overall, information sharing among expert evaluators can lead to more conservative allocation decisions that favor protecting against failure over maximizing success.