How can one evaluate MacQueen and Bradford's (2017) claim that the treatment effect emerging from the ScotCET RCT is, in fact, attributable to the experimental design, despite the apparent implementation failure? To make this assessment, one needs to discuss some of ScotCET's design features.
ScotCET was a block-randomised RCT. Blocking in experimental research means randomly assigning units to treatment and control groups within blocks (or strata) defined by a set of observed pre-treatment covariates. In ScotCET, instead of randomly selecting police officers across Scotland to deliver the procedurally just messages (treatment) or to carry on with business as usual (control), matched pairs of geographically similar locations were formed, and random assignment took place within each pair. It follows that, other than variation in the respondents' characteristics within each pair, these blocks should be identical with respect to the observed pre-treatment covariates.
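To make the idea concrete, here is a minimal sketch of within-pair randomisation. The pair names and the `randomise_within_pairs` helper are invented for illustration; they are not taken from the ScotCET materials.

```python
import random

# Hypothetical matched pairs of geographically similar locations
# (names are made up for illustration).
pairs = [
    ("site_A1", "site_A2"),
    ("site_B1", "site_B2"),
    ("site_C1", "site_C2"),
]

def randomise_within_pairs(pairs, seed=42):
    """Within each matched pair, randomly assign one unit to treatment
    and the other to control, so assignment is balanced within blocks."""
    rng = random.Random(seed)
    assignment = {}
    for a, b in pairs:
        treated = rng.choice((a, b))
        assignment[a] = "treatment" if a == treated else "control"
        assignment[b] = "treatment" if b == treated else "control"
    return assignment

allocation = randomise_within_pairs(pairs)
```

The key property is visible in the loop: randomness operates only inside each pair, so every block is guaranteed to contain exactly one treated and one control unit.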
Thanks to this particular design, one can think of each matched pair as a single experiment in which participants were randomly allocated to a treatment or control group. After controlling for some potentially influential covariates (e.g., gender, age), the average treatment effect should be a reliable (i.e., unbiased) estimate. Yet, because we know that the implementation went awry, we cannot take these average treatment effects at face value; we should compare the blocks to one another to establish whether the effect is consistent across them. The next post will propose a way of doing just that.
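One simple way to inspect effect consistency is to compute the treatment-control difference within each block and compare the block-level estimates before averaging them. The sketch below uses fabricated outcome scores purely to show the mechanics; it is not ScotCET data or the analysis the next post will develop.

```python
# Illustrative records: (block, arm, outcome score) -- all values invented.
records = [
    ("pair_1", "treatment", 4.1), ("pair_1", "control", 3.8),
    ("pair_2", "treatment", 3.5), ("pair_2", "control", 3.9),
    ("pair_3", "treatment", 4.0), ("pair_3", "control", 4.0),
]

def block_effects(records):
    """Mean treatment-minus-control difference within each block,
    so the effect can be compared across blocks."""
    sums = {}
    for block, arm, y in records:
        total, n = sums.get((block, arm), (0.0, 0))
        sums[(block, arm)] = (total + y, n + 1)
    effects = {}
    for block in sorted({b for b, _, _ in records}):
        t_total, t_n = sums[(block, "treatment")]
        c_total, c_n = sums[(block, "control")]
        effects[block] = t_total / t_n - c_total / c_n
    return effects

effects = block_effects(records)
# Averaging the block-level effects gives an overall estimate,
# but divergent signs across blocks would warn against trusting it.
ate = sum(effects.values()) / len(effects)
```

If the per-block differences point in different directions, as they do in this toy data, the pooled average masks heterogeneity that an implementation failure could produce.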
MacQueen, Sarah and Ben Bradford. 2017. “Where Did It All Go Wrong? Implementation Failure—and More—in a Field Experiment of Procedural Justice Policing.” Journal of Experimental Criminology 13(3):321–45.