Robert Cialdini is one of the most important figures in the study of persuasion and influence. He's written several influential books, including his latest, *Pre-Suasion: A Revolutionary Way to Influence and Persuade*.

I wasn't especially familiar with Cialdini's work, but I stumbled across a video of his interview from Inc.'s Idea Lab. He opens the interview with an anecdote about clouds and couches – an anecdote he repeats in a LinkedIn article entitled "Why clouds get people to buy more expensive couches — and other surprisingly persuasive methods to capture your attention."

He suggests that an online store with a cloudy background will lead people to focus on 'comfort' when making a purchasing decision; stores with pennies in the background instead drive people towards 'value'. By subtly priming and influencing where a person's attention is focused, you can dramatically reframe and reshape their purchasing decision.

This kind of claim should set off alarm bells. As we know, there is a replication crisis facing social psychology, particularly on the topic of social priming.

## Do Clouds Make You Buy Furniture?

The study that Cialdini is describing is called 'When Web Pages Influence Choice: Effects of Visual Primes on Experts and Novices' by Mandel and Johnson (2002).

In this task, 76 undergraduates were asked to look at the two websites above, rank their purchasing criteria, and make a purchasing decision about a couch. The study does indeed demonstrate (with large effect sizes) that students greeted with the cloud page were more interested in 'comfort' and selected the luxury couch more frequently, while students using the penny page favored the value options.

The study goes on to note that this was regardless of self-reported 'expertise in the product category', a puzzling finding that made them investigate how the prime influenced the way people interacted with the rest of the page. After all, shouldn't self-proclaimed experts in couches *weigh the facts* better, impervious to the prime?

### Clouds Don't Make You Choose A Comfy Couch

This is an instance in social science where the statistical technique is sound and the effect is real. Unfortunately, the conclusion is ridiculous.

First of all, the students each shopped at *both fake stores*. The students aren't stupid; study participants generally want to please the experimenters, and they are clearly *not supposed to make the same decision twice*. This kind of obvious participant bias is called a demand characteristic.

But even if we ignore this methodological issue, the results are still misinterpreted. People who shop at a luxury brand expect that brand to have the best luxury furniture, regardless of price; people who shop at a discount store expect that brand to have the best values. These 'social primes' are not *priming people's attention in a way that subtly controls their decision-making*, as Cialdini likes to claim; they are communicating to the shopper *how the products have been curated* through branding.

A real person shopping for a couch will go to a store that matches their needs – if they have a tight budget, they'll go to the store with a reputation for discounts. We *believe* the store that advertises itself with pennies *has the best value* – it would be silly to buy a luxury couch at a discount store. **This study simply measures trust in brand communication.**

These undergraduates are playing along with the rules of the make-believe shopping game. They are making rational and intelligent decisions based on a reasonable heuristic – *successful discount brands in the marketplace are, in fact, the best place for discounts*. If you put me in a pretend discount store, I will pretend shop as if I am a pretend discount shopper.

If you try influencing these undergrads with clouds or pennies when they are *actually* shopping for a couch, it will get you nowhere. If you try painting the wall behind the products at a luxury boutique with garish pennies and dollar signs, it will destroy your brand. These are the kinds of recommendations you might take away from Cialdini's interpretation of this study.

This kind of gross misinterpretation is sadly common in the social science world. It's hard to trust the statistical techniques behind many of these studies, but even once we get past the statistics, bad over-interpretations are still all too common.

Given this example, how should we approach Cialdini's other work and writings?

## Analyzing an Individual Researcher

The first question to ask is whether Cialdini is unintentionally p-hacking, misreading, or misapplying statistical techniques. A good place to start is Ulrich Schimmack's blog, where he ranks psychologists by their estimated personal replication rate, based on the power of their studies. He cautions that the analysis is preliminary and automated (based on just a few specific journals), but it's a useful starting point.

### Cialdini in Numbers

We can estimate a False Discovery Rate (FDR) in a researcher's published results based on their distribution of p-values (how many 'statistically significant' p-values imply an underlying trend that doesn't exist – i.e., type I errors). This estimate depends on both the type of researcher (are they mostly testing true hypotheses at high power, or looking for a needle in a haystack?) and publication biases (p-hacking, file-drawer effects, and journal publication biases show up as a clustering of values just below cutoffs like p=0.05). It's possible to *control* the FDR by changing the p-value threshold we consider a 'true discovery' – the further down we push the threshold (say, from an *alpha* of 0.05 to 0.01), the lower the FDR among the studies we consider significant.
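The effect of tightening alpha on the FDR can be sketched with a small simulation. The mix of true effects, their strength, and the 30%/70% split below are illustrative assumptions, not estimates from Cialdini's work:

```python
from statistics import NormalDist
import random

nd = NormalDist()
random.seed(0)

# Assumed mix: 30% of tested hypotheses are real effects with a true
# z-score of 2.8 (~80% power at alpha=0.05); 70% are nulls.
n = 200_000
results = []
for _ in range(n):
    real = random.random() < 0.30
    z = random.gauss(2.8 if real else 0.0, 1.0)  # observed z with sampling noise
    p = 2 * (1 - nd.cdf(abs(z)))                 # two-sided p-value
    results.append((p, real))

def fdr(alpha):
    """Fraction of 'significant' results that are actually nulls."""
    sig = [real for p, real in results if p < alpha]
    return sum(1 for real in sig if not real) / len(sig)

print(f"FDR at alpha=0.05: {fdr(0.05):.3f}")
print(f"FDR at alpha=0.01: {fdr(0.01):.3f}")
```

Under these assumptions, lowering alpha from 0.05 to 0.01 cuts the FDR substantially, because real effects cluster at very small p-values while null p-values are uniform.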

We can also look at the Observed Discovery Rate, or ODR (the number of significant results in their papers versus the total number of results), and compare it to a computed Expected Discovery Rate (EDR) based on the average *statistical power* of a researcher's studies (calculated using the 'significant results' – which we assume are all reported – but extrapolated to estimate the full number of 'insignificant results' which may not have been captured in the literature due to bias). If these diverge, it indicates that there is likely some form of publication bias or p-hacking at work.
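A minimal sketch of the ODR/EDR comparison, using made-up study powers and a simple model (true effects detected at each study's power, nulls 'significant' only at the alpha = 0.05 false-positive rate). All numbers are illustrative assumptions:

```python
def expected_discovery_rate(powers, share_true, alpha=0.05):
    """EDR under a simple model: the fraction of all conducted studies
    we'd expect to come out significant, given the researcher's typical
    power and the share of tested hypotheses that are true."""
    mean_power = sum(powers) / len(powers)
    return share_true * mean_power + (1 - share_true) * alpha

# Hypothetical portfolio of study powers for one researcher:
powers = [0.2, 0.3, 0.35, 0.5, 0.6]
edr = expected_discovery_rate(powers, share_true=0.8)

# Assumed fraction of published results that were significant:
odr = 0.72

print(f"EDR: {edr:.2f}, ODR: {odr:.2f}")
if odr > edr + 0.1:
    print("Large ODR/EDR gap: publication bias or p-hacking is likely.")
```

If only ~32% of conducted studies should come out significant but 72% of published results are, a lot of non-significant studies are plausibly sitting in the file drawer.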

The EDR is one of the most important numbers to consider. It is not altered by publication bias or research style; it simply takes into account the power of all the research (published or not) an experimenter has likely done, and captures how well a researcher is making sure their studies are powerful enough to meaningfully separate real effects from the noise (the likelihood of a type II error, based on effect sizes and variance of the data they're collecting).

We can also estimate a replication rate (ERR) for that researcher. The ERR captures the power of only the studies that were deemed 'significant' – in other words, if these studies were run again exactly (using the same sample sizes), the ERR captures the percentage that would once again give significant results. *This is different from the FDR* – the FDR is the percentage of 'significant' studies that we expect to represent true underlying relationships (i.e. will replicate at high power with a huge sample size). The FDR gives us a sense of the likelihood that a 'significant' study is *true*; the ERR gives us a sense of whether the experimenter's statistical techniques are underpowered/uninformative.
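The ERR/FDR distinction can be made concrete with a toy simulation. Each study has a true underlying z-score (the choice of 0, 1.5, and 3.0, equally likely, is an assumption for illustration); the observed z adds sampling noise. Among studies that came out significant, ERR is the average probability of coming out significant again on an exact re-run, while FDR is the fraction with no real effect at all:

```python
from statistics import NormalDist
import random

nd = NormalDist()
random.seed(1)

alpha_z = 2.576  # two-sided critical z for alpha = 0.01

# Toy model: true effect sizes drawn from an assumed mix.
true_zs = [random.choice([0.0, 1.5, 3.0]) for _ in range(100_000)]
observed = [(tz, random.gauss(tz, 1.0)) for tz in true_zs]
significant = [tz for tz, z in observed if abs(z) > alpha_z]

def power(tz):
    """Probability an exact re-run (same sample size) is significant again."""
    return (1 - nd.cdf(alpha_z - tz)) + nd.cdf(-alpha_z - tz)

err = sum(power(tz) for tz in significant) / len(significant)
fdr = sum(1 for tz in significant if tz == 0.0) / len(significant)

print(f"ERR (expected exact-replication rate): {err:.2f}")
print(f"FDR (significant results with no real effect): {fdr:.2f}")
```

Note how the two numbers diverge: a study can reflect a perfectly real effect (not a false discovery) and still have middling odds of replicating at the same sample size, simply because it was underpowered.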

In order to analyze Cialdini's results, alpha is moved down to 0.01, so the results considered 'significant' are those with p-values below 0.01 (instead of the 0.05 he may report).

In this case, Cialdini has an ODR of 72% and an EDR of 32% – the difference in these values suggests a large publication bias. Schimmack gives him a final ERR of 56% – so a little over half of his 'significant' results should retain significance if the experiments were repeated exactly – and an FDR of 11% (remember, we tuned this FDR to be low by changing our alpha cutoff to 0.01).

### Cialdini in English

Cialdini falls right in the middle of the pack among social psychologists in terms of the power of his studies. (Keep in mind this is an average; it doesn't tell us the distribution of power across his work. His work could be mostly slightly underpowered studies, or mostly solid work with a few massively underpowered studies. The reliability of an individual paper depends on its details.)

Unfortunately, 'middle of the pack' isn't the greatest place to be in a field where the average 'statistically significant' paper will replicate only 26% of the time. (In other words, you'd be better off guessing than trusting the literature in general; the papers are informative only in the sense that you should assume their claims are *less likely* to be true than if you were guessing.)

Really, every study needs to be assessed individually rather than taken at face value. It's only the very top researchers in social psychology whose work we can really take at face value – for everyone else, we must proceed with caution, check the effect size, and scrutinize the study itself.

## What to Trust about Cialdini

In the analysis above, we moved the significance threshold down to p<0.01 from the typical p<0.05 in order to avoid a large False Discovery Rate, and we saw substantial evidence of publication bias. When reading a book by Cialdini, we have to be very cautious – it's highly likely that a large percentage of the p<0.05 studies he cites to make his points are, in fact, false. Most studies on priming are false; social psychology has a replication rate close to 25%. Cialdini's area of expertise is one of the hardest-hit subjects in the hardest-hit discipline of the replication crisis.

Despite this, we can take a look at his individual publications, and for those with p-values <0.01 and reasonable effect sizes, we can have some trust that they will hold up. As we've seen, however, his interpretations of even *good data* should all be taken with a dose of skepticism.

Cialdini is a renowned, successful researcher in a field I care deeply about. He has interesting things to say, an interesting epistemological approach to his research, and good intentions. Sadly, it seems the fields that examine the influence of the environment on behavior are caught up in a narrative that people are absolute sheep – that the *subtlest of manipulations* can dramatically alter human decision making. Don't get me wrong, we can be susceptible – good sales techniques do indeed work – but a persuasive, charismatic salesperson is a *very different animal* than a wallpaper full of clouds.

\* *'26% of papers with significant findings fail to replicate' can mean two things. The first is the sense in which we used the 'Estimated Replication Rate' above (re-create the study at the same sample size and see if it replicates); this isn't to say that 26% of the papers are false, just underpowered. We saw that by analyzing the p-value threshold of an individual researcher, we could adjust the False Discovery Rate for what we consider 'significant' results, which is what we really care about here.*

*Typically, though, 'failure to replicate' is used in a second sense – a real-world replication attempt at much higher power (a larger sample size). In these cases, the replication rate should converge to (1 - FDR). This is the sense in which we used 'fail to replicate' above.*