Once a behavior change program has been put in place, how can you measure its effectiveness? Here, we explain the process.
How can we know that a behavior change program has actually activated behavior change? In many cases, we can't directly track how people behave outside of a program.
Suppose, for example, that a large group has worked through a program to reduce bias in recruitment. It's not feasible to directly measure the myriad decisions they make 'in the wild' to see if there's less bias. In cases like these, the only option is to collect evidence from which to infer what happened.
This inference relies on reasoning about cause and effect. And as it happens, reasoning from cause to effect can be much easier than reasoning from effect to cause.
Let's look at an example to see why.
Picture a pool table partway through a game. Looking only at the layout of the balls, can you tell what shot was taken to cause that layout?
This can be tricky, because a vast number of different shots could have resulted in that layout.
Now picture a player lining up a specific shot. Can you tell what the effect of that shot will be?
This is far simpler, as only a few possible results are consistent with that shot.
This captures the difference between reasoning from effect to cause versus reasoning from cause to effect. When you need to establish how an event played out, evidence of a cause can be much stronger than evidence of an effect.
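To make the asymmetry concrete, here's a toy sketch in Python; the shots, layouts, and the mapping between them are invented purely for illustration. A forward map from each shot to its few plausible layouts, once inverted, shows every layout pointing back at many candidate shots:

```python
# Toy model of the asymmetry: each shot (cause) can produce only a few
# layouts (effects), but each layout can be produced by many shots.
# All names and mappings here are made up for illustration.
from collections import defaultdict

# Forward map: shot -> layouts it could plausibly produce.
effects_of = {
    "shot_A": {"layout_1", "layout_2"},
    "shot_B": {"layout_2", "layout_3"},
    "shot_C": {"layout_2"},
    "shot_D": {"layout_1", "layout_2", "layout_4"},
}

# Inverse map: layout -> shots that could have produced it.
causes_of = defaultdict(set)
for shot, layouts in effects_of.items():
    for layout in layouts:
        causes_of[layout].add(shot)

# Cause -> effect: only a handful of candidates to consider.
print(sorted(effects_of["shot_A"]))   # ['layout_1', 'layout_2']

# Effect -> cause: far more candidates to consider and eliminate.
print(sorted(causes_of["layout_2"]))  # ['shot_A', 'shot_B', 'shot_C', 'shot_D']
```

Scale the toy numbers up to a real table and the detective's search space explodes, while the mastermind's stays small.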
With this in mind, it's clear how, in mystery stories, masterminds manage to stay one step ahead of detectives. Masterminds have an advantage because they design the situation or triggers that cause their desired result.
If the mastermind does a good enough job, they can be confident that they'll achieve their goal, even if they can't observe the outcome directly. The detective, on the other hand, has to work backwards, considering and eliminating a much larger range of possibilities. And if any evidence is missing, they may never solve the case.
While a detective tries to find out what happened, a mastermind makes things happen.
When evaluating a behavior change program, we tend to assume only the role of the detective. To measure the effect of a program, we look at surveys and assessments and, where available, at any external evidence of impact we can get our hands on.
Unfortunately, reality does not reveal her secrets lightly. We're often faced with low completion rates for assessments or surveys, and additional evidence can be sparse and inconclusive. And even when we do have clear evidence that the desired behavior is occurring, how do we know that our program actually caused that change?
Now, step into the shoes of the mastermind. You've done your homework: you properly understand the context, the need, and the constraints surrounding a desired behavior change. On that basis, you identify the most relevant science and behavior models for your purpose.
Drawing on these sources, you build carefully crafted behavioral triggers into your program, along with data collection points for those triggers. Now, when a user engages with a trigger, you have evidence that shows whether a cause has taken place. And because you have a reliable behavioral model, you can infer what effect most likely followed. You lined up a shot, and you know the user took it. You know you've activated behavior change. If you then combine this with evidence of effects, you'll have a much clearer and more comprehensive view of a program's efficacy and impact.
To make this more concrete, let's return to and build on the example of reducing bias in recruitment. Suppose the program includes a trigger that prompts participants to formalise the practice of requesting and reviewing anonymised resumes.
By design, there's a data collection point that tracks whether someone follows through on this specified action. Because an anonymised resume excludes many of the features that inform bias, it follows that opportunities for bias will decrease. So, if the evidence shows that participants engaged with the trigger, the desired effect is more likely. The more evidence there is of successful triggers, the higher the chance of activating behavior change.
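As a minimal sketch of how cause-side evidence could be combined with a simple behavioral model, consider the hypothetical Python below; the engagement log, participant ids, and probabilities are all assumptions made up for illustration, not measured values from any real program:

```python
# Minimal sketch: infer likely impact from trigger evidence.
# The data and probabilities below are illustrative assumptions.

# Engagement log from the trigger's data collection point:
# participant id -> did they request/review anonymised resumes?
engagement = {"p01": True, "p02": False, "p03": True, "p04": True, "p05": False}

# Assumed behavioral model: probability that recruitment decisions
# are less biased, given trigger engagement (or the lack of it).
P_EFFECT_IF_ENGAGED = 0.8      # assumption, not a measured value
P_EFFECT_IF_NOT_ENGAGED = 0.2  # assumption, not a measured value

engaged = sum(engagement.values())
total = len(engagement)

# Expected share of participants exhibiting the desired behavior.
expected_rate = (
    engaged * P_EFFECT_IF_ENGAGED
    + (total - engaged) * P_EFFECT_IF_NOT_ENGAGED
) / total

print(f"Trigger engagement: {engaged}/{total}")
print(f"Expected share showing less-biased behavior: {expected_rate:.0%}")
```

The direction of inference is the point: trigger engagement is evidence of a cause, so the more reliable the behavioral model behind the trigger, the more confidently engagement data can be read as likely impact.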
So, when evaluating behavior change and change management programs, measuring effects can provide necessary and valuable evidence, but it's often not enough. We should also carefully design and measure causes. Be the detective, but also be the mastermind.