

Saturday, November 28, 2015

Minimal Manipulations

What's the weak link in your path diagram?

Psychology is the study of relationships between intangible constructs as seen through the lens of our measures and manipulations. We use manipulation A to push on construct X, then look at the resulting changes in construct Y, as estimated by measurement B.

Sometimes it's not clear how one should manipulate construct X. How would we make participants feel self-affirmed? How would we get them to slow down and really think about a problem? Or, conversely, how would we get them to think less and go with their gut feeling? While we have a whole subfield dedicated to measurement (psychometrics), methods and manipulations have historically received less attention and less journal space.

So what can we do when we don't know how to manipulate something? One lowest-common-denominator manipulation of these complicated constructs is to ask participants to think about (or, if we're feeling ambitious, to write about) a time when they exhibited Construct X. That, it's assumed, will lead them to feel more of Construct X and to exhibit behaviors consistent with greater levels of Construct X.

I wonder, though, at the statistical power of such experiments. Will remembering a time your hunch was correct lead you to substantially greater levels of intuition use for the next 15 minutes? Will writing about a time you felt good about yourself lead you to achieve a peaceful state of self-affirmation where you can accept evidence that conflicts with your views?

Effect-Size Trickle-Down
If we think about an experiment as a path diagram, it becomes clear that a strong manipulation is necessary. When we examine the relationship between constructs X and Y, what we're really looking at is the relationship between manipulation A and measurement B.


Rectangles represent the things we can measure, ovals represent latent constructs, and arrows represent paths of varying strengths. Path b1 is the strength of Manipulation A. Path b2 is the relationship of interest, the association between Constructs X and Y. Path b3 is the reliability of Measurement B. Path b4 is the reliability of the manipulation check.


Although path b2 is what we want to test, we don't get to see it directly. X and Y are latent and not observable. Instead, the path that we see is the relationship between Manipulation A and Measurement B. This relationship has to go through all three paths, and so it has strength = b1 × b2 × b3. Since each path is a correlation between -1 and +1, the magnitude of b1 × b2 × b3 must be equal to or less than that of each individual path.
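A quick simulation makes the attenuation concrete. This is only a sketch with made-up path values (b1 = .4, b2 = .5, b3 = .7, not estimates from any real study): generate a latent X partly driven by the manipulation, a latent Y partly driven by X, and a noisy measurement of Y, then check that the observable correlation between manipulation and measurement is the product of the three paths.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative path strengths -- assumed values for this sketch
b1, b2, b3 = 0.4, 0.5, 0.7

# Treat the manipulation as a standardized observed variable for simplicity
A = rng.normal(size=n)                                 # Manipulation A (observed)
X = b1 * A + np.sqrt(1 - b1**2) * rng.normal(size=n)   # latent Construct X
Y = b2 * X + np.sqrt(1 - b2**2) * rng.normal(size=n)   # latent Construct Y
B = b3 * Y + np.sqrt(1 - b3**2) * rng.normal(size=n)   # Measurement B (observed)

print(round(np.corrcoef(A, B)[0, 1], 3))  # ~0.14, i.e., roughly b1 * b2 * b3
print(round(b1 * b2 * b3, 3))             # 0.14
```

Even with a respectable construct-level correlation of .5 and a decent measure, a modest manipulation leaves you chasing an observable correlation of about .14.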

This means that your effect on the dependent variable is almost certain to be smaller than your effect on the manipulation check. Things start with the manipulation and trickle down from there. If the manipulation can only barely nudge the manipulated construct, then you're all but certain not to detect its effects on the downstream outcome.
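To see what that trickle-down does to power, here is another minimal sketch (again with assumed numbers): a two-group experiment with 50 participants per cell, a manipulation that moves the check by d = .5, and a downstream effect attenuated to half that size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_group, n_sims = 50, 2000

d_check = 0.5          # assumed effect of the manipulation on the manipulation check
d_dv = d_check * 0.5   # downstream effect, attenuated by a construct-level path of .5

def power(d):
    """Proportion of simulated experiments with p < .05 on a two-sample t-test."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0, 1, n_per_group)
        treated = rng.normal(d, 1, n_per_group)
        hits += stats.ttest_ind(treated, control).pvalue < .05
    return hits / n_sims

print(power(d_check))  # roughly .70: the manipulation check looks fine
print(power(d_dv))     # roughly .24: the test of the actual hypothesis is badly underpowered
```

The same experiment that comfortably passes its manipulation check can have miserable power for the effect it was actually designed to detect.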

Minimal Manipulations in the Journals
I wonder if these writing manipulations are effective. One time I reviewed a paper using such a manipulation. Experimental assignment had only a marginally significant effect on the manipulation check. Nevertheless, the authors managed to find significant differences in the outcome across experimental conditions. Is that plausible?

I've since found another published (!) paper with such a manipulation. In Experiment 1, the manipulation check was not significant, but the anticipated effect was. In Experiment 2, the authors didn't bother to check the manipulation any further.

This might be another reason to be skeptical about social priming: manipulations such as briefly holding a warm cup of coffee are by nature minimal manipulations. Even if one expected a strong relationship between feelings of bodily warmth and feelings of interpersonal warmth, the brief exposure to warm coffee might not be enough to create strong feelings of bodily warmth.

(As an aside, it occurs to me that these minimal manipulations might be why, in part, college undergraduates think the mind is such a brittle thing. Their social psychology courses have taught them that the brief recounting of an unpleasant experience has pronounced effects on subsequent behavior.)

Ways Forward
Creating powerful and reliable manipulations is challenging. Going forward, we should be:
1) Skeptical of experiments using weak manipulations, as their statistical power is likely poor, but
2) Understanding and patient about the complexities and challenges of manipulations,
3) Careful to share methodological details, including effect sizes on manipulation checks, so that researchers can share what manipulations do and do not work, and
4) Grateful for methods papers that carefully outline the efficacy and validity of manipulations.

Sunday, November 22, 2015

The p-value would have been lower if...

One is often asked, it seems, to extend someone a p-value on credit. "The p-value would be lower if we'd had more subjects." "The p-value would have been lower if we'd had a stronger manipulation." "The p-value would have been lower with a cleaner measurement, a continuous instead of a dichotomous outcome, the absence of a ceiling effect..."

These claims could be true, or they could be false, conditional on one thing: whether the null hypothesis is true or false. This is, of course, a tricky thing to condition on. The experiment itself is supposed to provide the evidence for or against the null hypothesis.
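Here is a minimal sketch of why that conditioning matters (the simulation settings are made up): compare what extra subjects buy you when there is a real effect versus when there is not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def median_p(true_effect, n_per_group, n_sims=2000):
    """Median two-sample t-test p-value across simulated experiments."""
    ps = [stats.ttest_ind(rng.normal(0, 1, n_per_group),
                          rng.normal(true_effect, 1, n_per_group)).pvalue
          for _ in range(n_sims)]
    return np.median(ps)

for n in (20, 80, 320):
    print(n, median_p(0.0, n), median_p(0.4, n))

# When the null is true (d = 0), the median p-value sits near .5 no matter how
# many subjects you add. When the null is false (d = .4), more subjects really
# do drive the p-value down.
```

"The p-value would have been lower with more subjects" is only true in the world where the effect exists, which is exactly what the p-value was supposed to help establish.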

So now we see that these statements are very clearly begging the question. Perhaps the most accurate formulation would be, "I would have stronger evidence that the null were false if the null were false and I had stronger evidence." It is perfectly circular.

When I see a claim like this, I imagine a cockney ragamuffin pleading, "I'll have the p-value next week, bruv, sware on me mum." But one can't issue an IOU for evidence.