AI Evaluations and Screening: A Study of Human-AI Collaboration in Screening Efficiency and Decision-Making

This study investigates the integration of artificial intelligence (AI) into the screening of early-stage innovations, a task traditionally performed by human evaluators across a range of professional and competitive settings. Through a randomized controlled trial involving approximately 400 participants drawn from MIT Solve's internal team of expert screeners and from community-recruited startup screeners, the research examines whether AI-assisted human evaluation or AI-only evaluation improves the efficiency and quality of decision-making relative to traditional human-only evaluation. Measured outcomes include the time efficiency of evaluations, the consistency and convergence of decisions, evaluator confidence, and overall decision quality across three conditions: control (no AI assistance), Treatment A (basic AI assistance), and Treatment B (advanced AI assistance providing detailed rationales). The findings aim to delineate the conditions under which human-AI collaboration optimizes the evaluation of early-stage innovations, contributing to the broader discourse on effectively combining human intuition with AI's processing capabilities. The results carry significant implications for fields requiring precise and timely assessments, such as academic research, grant funding, and competitive selection processes, advancing both the theoretical understanding and the practical application of AI in evaluative tasks.
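
To make the experimental design concrete, the sketch below shows one way a three-arm randomization of roughly 400 screeners could be implemented in Python; the function names, condition labels, and equal-allocation scheme are illustrative assumptions and do not reflect the study's actual tooling.

    import random
    from collections import Counter

    # Illustrative arm labels matching the three study conditions.
    CONDITIONS = ["control", "treatment_a_basic_ai", "treatment_b_advanced_ai"]

    def randomize_participants(participant_ids, seed=42):
        """Assign each participant to one of the three arms.

        Shuffling the pool and assigning round-robin keeps arm sizes
        within one participant of each other, a common choice when
        randomizing a fixed pool. A sketch only, not the study's code.
        """
        rng = random.Random(seed)  # fixed seed makes the allocation reproducible
        ids = list(participant_ids)
        rng.shuffle(ids)
        return {pid: CONDITIONS[i % len(CONDITIONS)] for i, pid in enumerate(ids)}

    assignment = randomize_participants(range(400))
    print(Counter(assignment.values()))  # roughly 134/133/133 per arm

Balanced (round-robin) allocation after a seeded shuffle is one common choice for a fixed participant pool; simple independent randomization per participant would also be defensible but can leave the arms unevenly sized.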