The (lack of) impact of impact: Why impact evaluations seldom lead to evidence-based policymaking

A recurring puzzle for many academics and some policymakers is why impact evaluations, which have become something of a cottage industry in the development field, have so little impact on actual policymaking. In this paper, I study the impact of impact evaluations. I show, in a simple Bayesian framework embedded within a standard contest-success-function model of competition among anti-evaluation policymakers, Bayesian policymakers, and frequentist evaluators, that the likelihood of a program being cancelled is decreasing both in the impact estimated by the evaluation and in the prior on whose basis the program was approved in the first place. Moreover, the probability of cancellation is decreasing in the effectiveness of the influence exerted by frequentist evaluators.
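To fix ideas, the mechanism summarized above can be sketched with two standard ingredients; the notation below is illustrative and not the paper's exact specification. A Tullock-style contest success function gives the probability of cancellation as
\[
  \Pr(\text{cancel}) \;=\; \frac{x_A}{x_A + \phi\, x_E},
\]
where $x_A$ denotes the influence exerted by anti-evaluation policymakers, $x_E$ the influence exerted by frequentist evaluators, and $\phi$ the effectiveness of the evaluators' influence, so cancellation becomes less likely as $\phi$ rises. On the Bayesian side, with a normal prior on the program's impact (mean $\mu_0$, precision $\tau_0$) and an evaluation estimate $\hat{\beta}$ with precision $\tau_e$, the policymaker's posterior mean is
\[
  \mathbb{E}\!\left[\beta \mid \hat{\beta}\right] \;=\; \frac{\tau_0\,\mu_0 + \tau_e\,\hat{\beta}}{\tau_0 + \tau_e},
\]
which is increasing in both the prior mean $\mu_0$ and the estimated impact $\hat{\beta}$; if the program is cancelled whenever this posterior falls below a threshold, the probability of cancellation is decreasing in both, as stated above.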
Citation

Arcand, J.-L. (2013). "The (lack of) impact of impact: Why impact evaluations seldom lead to evidence-based policymaking." Ferdi Working Paper P73, June 2013.