Of individual projects tested with well-powered randomised controlled trials, perhaps over 80% don’t “work”, i.e. fail to deliver a reasonable effect size relative to cost.
Perhaps 1–10% have negative effects.
Of intervention types that have been evaluated with meta-analyses, the proportion that don’t “work” is probably lower, perhaps over 60% rather than over 80%, but this is partly just a selection effect: more evaluation attention is paid to the most promising interventions.
The interventions and projects that haven’t been tested are probably worse, since more research is done on the most promising approaches.
If you consider whole “areas” (e.g. education), then the average effect is probably positive. This is what you’d expect if the area is making progress overall and there is some pressure, even if weak, to close down poor programmes. It is also consistent with many individual projects failing to work while a small number have strong positive effects, which is what we might expect theoretically.1
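To see why this is arithmetically possible, here is a minimal numeric sketch in Python. The shares and effect sizes are made up purely for illustration; they are not estimates drawn from the evidence above.

```python
# A minimal numeric sketch of the claim above, using made-up illustrative
# numbers: even if most projects have roughly zero effect and a few are
# harmful, a small share of strongly positive projects can make the
# *average* effect of an area positive.

# Hypothetical distribution of cost-adjusted effect sizes across projects:
# (share of projects, effect size)
distribution = [
    (0.80,  0.0),   # ~80% "don't work": negligible effect relative to cost
    (0.05, -0.5),   # a few percent are actively harmful
    (0.10,  0.5),   # some deliver modest positive effects
    (0.05,  3.0),   # a small tail of strongly positive projects
]

average_effect = sum(share * effect for share, effect in distribution)
print(f"Average effect across the area: {average_effect:+.3f}")
# -> +0.175: positive overall, even though 85% of projects
#    have zero or negative effects.
```

The point is only that a small tail of strongly positive projects can outweigh a large majority with zero or slightly negative effects, so a positive area-wide average tells you little about the typical project.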
The average effect size and proportion of interventions that work probably varies significantly by area.
So is it fair to say “most social programmes don’t work”?
I think this is a little ambiguous and potentially misleading.