Cialdini is the world's leading researcher on persuasion and persuasive techniques.  He is famous for a really cool strategy– for a couple of years, he took as many 'sales technique' classes as he could, and distilled them into a set of principles of persuasive practice.  It's well worth reading through his anecdotes and ideas.

While I respect much of the work Cialdini has done, in his book Pre-suasion he promotes some ideas that come from the widely discredited field of priming research– the notion that subtle environmental changes can control complex decision processes.  I think it's important to separate out the points in his book that undermine the valuable contributions he makes.

I've discussed some of Cialdini's misinterpretations before:  he suggests that wallpaper changes how you evaluate purchasing a couch; he argues that moving toward objects makes you like them more.  The first of these is a misinterpretation of the results– people simply pretending to shop at a discount store pick couches that represent good value, and people pretending to shop at a luxury store pick couches that purport to be comfortable.  The second of these is based on methodologically weak research that almost assuredly won't replicate.

Below I'll break down a few more examples from Pre-suasion that mischaracterize the research.

Subliminal Mere Exposure

Cialdini cites research suggesting that– even with ad blindness– peripheral exposure to banner ads makes you more receptive to a brand.  This is known as the subliminal mere exposure effect– the idea that subconscious exposure can make us more 'perceptually fluent' when we encounter a percept, and thus view it more favorably, because we prefer things that feel familiar or are easy to process.

Though the subliminal mere exposure effect has not specifically been subject to a large-scale replication attempt, it shares much in common with two landmark concepts that have both failed replication– social priming (with conscious exposure) and subliminal advertising (with a long history of fraud and shoddy science).  The reasoning behind these related ideas suggests that a subtle or subconscious association with a primed concept can alter behavior related to that concept.  It turns out that simply isn't true.

Subliminal mere exposure is slightly different– it suggests that subconscious exposure can lead to ease of processing, which might (barely) alter judgements of pleasantness.  

While it's a slightly weaker claim than priming or subliminal advertising, its conclusions also run contrary to popular wisdom about ad blindness and the value of banner ads.  It's also not uncommon to find papers on subliminal mere exposure that cite the related (discredited and severely underpowered) subliminal advertising papers.  (The most popular is the claim– repeated in reviews like this one– that 'Lipton Iced Tea can be subliminally primed, but only when people are thirsty'.  The Lipton study– with sample sizes of around 30– is far too underpowered to prove the existence of subliminal advertising given the field's fraudulent history, even though the likes of Scientific American still hold it up as good evidence.  Thankfully, the BBC reproduced that exact study with 3x the people and showed there was no effect.)
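To get a feel for how underpowered a sample of ~30 is, here's a rough power sketch.  The numbers are assumptions for illustration (a 50%-to-65% shift in brand choice, nothing from the Lipton paper itself):

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Suppose (hypothetically) the prime shifted brand choice from 50% to 65%.
effect = proportion_effectsize(0.65, 0.50)

for n in (30, 90):  # original-sized sample vs. a BBC-style 3x replication
    power = NormalIndPower().solve_power(effect_size=effect, nobs1=n, alpha=0.05)
    print(f"n = {n} per group: power = {power:.2f}")
```

Even a shift that large would be missed most of the time at n = 30; a genuinely subtle subliminal effect would be hopeless.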

Because all of the signs point to the implausibility of this effect, and because it runs counter to conventional wisdom, the burden of proof for subliminal mere exposure effects is very high.  The study presented in Pre-suasion simply isn't very empirically convincing.  It claims the effect is measurable based on small average differences in how quickly participants comprehended ads that had been displayed while they were reading an article.

The study presents three simultaneous results– (1) that people are more perceptually fluent with an advertisement (able to 'comprehend' it sooner), despite (2) being unable to recognize that they'd previously seen it; and, as a result, (3) they view it more favorably.

In the first case, differences in 'perceptual fluency' were less than a second between groups in a long, high-variance task (~8.5 sec with variances of ~2), and the sample sizes were small (<40).  Moreover, 12% of the data was thrown out as outliers– participants who buzzed too early and failed the task– and these exclusions were more frequent in the faster groups (normally you'd want to run a speed-accuracy tradeoff analysis with this kind of data).
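To put a number on that: reading the reported '~2' as a variance (standard deviation of roughly 1.4 seconds– the situation is even worse if it's a standard deviation), a half-second group difference is about d = 0.35, and a quick sensitivity check shows how little such a study can see:

```python
from statsmodels.stats.power import TTestIndPower

# Assumed numbers: a 0.5 s difference, SD = 1.4 s (i.e., variance of ~2), n = 40/group.
d = 0.5 / 1.4
power = TTestIndPower().solve_power(effect_size=d, nobs1=40, alpha=0.05)
print(f"power = {power:.2f}")  # roughly a one-in-three chance of detecting a real effect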

Participants could also easily have looked at the ad (they were forced to stare at the page for 45 seconds, even if they finished reading the core content), and it seems to have been the only part of their visual field that was changing.  While they were unable to recall seeing the ads, this might have something to do with gist vs. specific details– participants were forced to guess which of two ads they had been exposed to, the target ad or a novel one, and we don't know how similar the foil was to the target.
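Note, too, that 'unable to recall' just means performance on that two-alternative forced choice wasn't distinguishable from guessing– which small samples make easy.  A sketch with hypothetical numbers:

```python
from scipy.stats import binomtest

# Hypothetical: 22 of 40 participants pick the target ad over the novel foil.
result = binomtest(22, n=40, p=0.5)
print(result.pvalue)  # ~0.64: indistinguishable from coin-flip guessing
```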

Most problematic is the rating of ad preference.  The same ad was rated by groups under two opposite framings– 'how negative would you rate this on a scale of 0-not negative to 9-very negative' and 'how positive would you rate this on a scale of 0-not positive to 9-very positive'.  In both cases, people rated the same advertisement roughly 3– 3 bad when asked how bad it was, and 3 good when asked how good it was.  That's contradictory: if the scales were consistent, a 3 on the negative scale should reverse-code to roughly a 6 on the positive scale.  Pretty confusing.  Moreover, noise looks to be on the order of 0.4 in the case where the mere exposure effect doesn't change evaluations (the negative case); in the positive case, the exposure effect is on the order of 1.  I don't think we have a strong handle on what a difference along these scales actually means in the real world; whatever it means, it's not a lot.

A further point to consider is that a paper like this can exist– with mildly consistent, surprising data suggesting a small effect– even if the data is misleading.  We know from meta-analyses of the replication crisis that when a result like this does turn out to be real, the originally reported effect size is usually 10 to 100 times too large.

This happens because a researcher might run several experiments like this, and only the one that looks convincing gets published.  It can happen at a community level, too– several different researchers might run similar experiments, and only the one with the 'best-looking' data ends up published.  When studies are underpowered (have too few participants), you can expect more variability in the data each time you test a concept.  Test it enough times, and noise alone will eventually produce data that looks convincing.
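A quick simulation makes the inflation concrete.  All numbers here are assumptions for illustration: a tiny true effect, small samples, and a naive 'significance' filter standing in for publication:

```python
import numpy as np

rng = np.random.default_rng(0)
true_d, n = 0.1, 30          # tiny true effect, 30 participants per group
published = []

for _ in range(10_000):      # many labs testing the same idea
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_d, 1.0, n)
    diff = treated.mean() - control.mean()
    se = np.sqrt(control.var(ddof=1) / n + treated.var(ddof=1) / n)
    if diff / se > 1.96:     # only 'significant' results get written up
        published.append(diff)

print(np.mean(published))    # ~0.6: the 'literature' reports ~6x the true effect
```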

Much of the literature on effects such as priming falls into this trap– individual papers look quite convincing, but that's because the sample sizes are small and a lot of people are working on the same concept.  Only the most convincing noise makes it past the p=0.05 cutoff.  Studies of this type are impossible to trust given the era in which they were run, especially against the backdrop of so many similar, disreputable findings that have themselves failed to replicate.

Cialdini presents it thusly:

Additional research has found similarly sly effects for online banner ads—the sort we all assume we can ignore without impact while we read. Well-executed research has shown us mistaken in this regard. While reading an online article about education, repeated exposure to a banner ad for a new brand of camera made the readers significantly more favorable to the ad when they were shown it again later. Tellingly, this effect emerged even though they couldn’t recall having ever seen the ad, which had been presented to them in five-second flashes near the story material. Further, the more often the ad had appeared while they were reading the article, the more they came to like it. This last finding deserves elaboration because it runs counter to abundant evidence that most ads experience a wear-out effect after they have been encountered repeatedly, with observers tiring of them or losing trust in advertisers who seem to think that their message is so weak that they need to send it over and over. Why didn’t these banner ads, which were presented as many as twenty times within just five pages of text, suffer any wear-out? The readers never processed the ads consciously, so there was no recognized information to be identified as tedious or untrustworthy.
These results pose a fascinating possibility for online advertisers: Recognition/recall, a widely used index of success for all other forms of ads, might greatly underestimate the effectiveness of banner ads. In the new studies, frequently interjected banners were positively rated and were uncommonly resistant to standard wear-out effects, yet they were neither recognized nor recalled. Indeed, it looks to be this third result (lack of direct notice) that makes banner ads so effective in the first two strong and stubborn ways. After many decades of using recognition/recall as a prime indicator of an ad’s value, who in the advertising community would have thought that the absence of memory for a commercial message could be a plus?
Within the outcomes of the wallpaper and the banner ad studies is a larger lesson regarding the communication process: seemingly dismissible information presented in the background captures a valuable kind of attention that allows for potent, almost entirely uncounted instances of influence.

We've learned from the replication crisis that results suggesting subtle, unconscious environmental cues meaningfully change purchasing behavior are almost certainly wrong.  Ad blindness is real.  There is no compelling evidence that subconscious priming affects either purchase decisions or brand receptiveness; it would be a poor decision to reject the conventional wisdom on banner ad strategy.

Becoming Valiant

To corroborate his point, Cialdini cites another bit of research that is a direct example of social priming– it suggests men who were recently asked for directions to 'Valentine Street' were more likely to be chivalrous (help a girl retrieve her phone from four menacing hoodlums) than ones previously asked for directions to 'Martin Street'.  The reported rates of helping were 12 out of 60 (Martin St) vs 22 out of 60 (Valentine St).

Given this incredible claim– that incidentally hearing the word 'valentine' will, for at least the next few minutes, make you almost twice as valiant in the face of a street gang (if only)– it seems much more likely that a few more large, confident, and/or single men happened to be randomly shuffled into the Valentine group than that the prime caused the difference.  Let's say we have 120 men, 34 of whom are willing to fight for honor, and we randomly split them into two groups of 60 a bunch of times.  What do the numbers look like?  Here's a quick little Python script (a minimal sketch using numpy) to spit those numbers out:
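```python
import numpy as np

rng = np.random.default_rng(0)

# 120 men, 34 of whom would help; the rest won't.
men = np.array([1] * 34 + [0] * 86)

for _ in range(20):
    rng.shuffle(men)                                 # random assignment
    martin, valentine = men[:60].sum(), men[60:].sum()
    print(f"Martin St: {martin:2d}   Valentine St: {valentine:2d}")
```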

This little script just spits out 34 chivalrous dudes randomly split between two groups of 60 (we're randomly shuffling an array of 34 ones and 86 zeros). We can see that something in the range of a 12/22 split is pretty common in this sampling (the p-value is just above 0.04).

Experimenter bias is also very possible– the first confederate was choosing which men to prime; the second was attempting to elicit help.  If the first had any discretion over which men to approach, it would be hard to avoid some slight bias in selection; if the second had any knowledge of the condition, she had immense power to subtly alter the interaction.  People are very good at reading subtle social cues– these 'demand characteristics' are why it's so important to run double-blind studies.

With such a small difference in one very underpowered test, though, the likeliest explanation is simply noise.  Once again, the evidence is not nearly convincing enough to reasonably believe that a vaguely romantic word will make you more likely to risk physical violence, especially when nearly identical studies fail to replicate with larger sample sizes.

Fear and Loving

One misinterpreted result in Pre-suasion comes from Cialdini's own paper.  In Fear and Loving in Las Vegas: Evolution, Emotion, and Persuasion, he sets out to show that when threatened or fearful (during a scary movie), we're more receptive to ads that make us want to be part of a group– an appeal called social proof ('everyone's doing it!')– and that when we're in a romantic or sexual mood (watching a rom-com), we're more receptive to ads that make us stand out ('you're the select few').  He states:

When we tested this idea in an experiment, the results stunned me. An advertisement we created stressing the popularity of San Francisco’s Museum of Modern Art (“Visited by over a million people each year”) supercharged favorability toward the museum among people who had been watching a violent movie at the time; yet among those who’d been watching a romantic movie, the identical ad deflated attraction to the museum. But a slightly altered ad—formulated to emphasize the distinctiveness rather than the popularity of museum attendance (“Stand out from the crowd”)—had the opposite effect. The distinctiveness ad was exceedingly successful among individuals who’d been watching the romantic film, and it was particularly unsuccessful among those who’d been viewing the violent one.

The paper includes this chart demonstrating the effect:

Figure 1 from Cialdini's peer-reviewed Fear and Loving in Las Vegas: Evolution, Emotion, and Persuasion.

At a glance, the 'supercharged' effect appears real.  Unfortunately, a couple of important things have been left out– there are no confidence intervals, and the axes are truncated.  These numbers actually represent the cumulative score over three nearly identical 9-point scale questions about the advertised product ("bad/good," "unfavorable/favorable," and "negative/positive").

It turns out the plotted differences are out of twenty-seven.  On average, we're looking at a difference of about 0.5 per question on a nine-point scale from 'bad' to 'good'– the per-question ratings vary between roughly 1.5 and 2 out of 9.  (The real question is why the ratings are so horrible– people either really don't like museums, or they really don't like brazen ads.)

These results are from groups of ~25 people each (150 divided into six groups)– the variance will not be small.  In fact, despite a p-value greater than 0.05, they still claim their results are empirically supported: "In line with [our hypothesis] H1, fear led social proof appeals to be more persuasive than the control (F(1, 305) = 3.84, p = .051, d = .22; Msocial proof = 6.50, Mcontrol = 5.88)."
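That statistic is easy to check– F with one numerator degree of freedom at 3.84 sits right at the conventional cutoff, and scipy confirms it lands just over:

```python
from scipy import stats

# F(1, 305) = 3.84 corresponds to p = .051 – just past the p < .05 threshold.
print(stats.f.sf(3.84, 1, 305))
```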

Moreover, they appear to have pooled the two experiments, despite very different scores across conditions.  It makes no sense to average raw results across experiments 1a and 1b (look again at those axes)– they were different ads for different products embedded in different contextual stimuli.  The results for museums during movies range from 4 to 6; the results for restaurants during short fictional writing range from 5.5 to over 7.  If you wanted to combine these, you should combine per-experiment summaries– means of means, with properly propagated standard deviations– not treat them as one large set of identical data.
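A sketch of what that could look like– pooling per-experiment effect estimates with inverse-variance weights, a standard fixed-effect combination.  The numbers below are illustrative, not the paper's:

```python
import numpy as np

def combine(mean_diffs, std_errs):
    """Fixed-effect pooling: weight each experiment's effect by 1/SE^2."""
    w = 1.0 / np.asarray(std_errs) ** 2
    est = np.sum(w * np.asarray(mean_diffs)) / np.sum(w)
    return est, np.sqrt(1.0 / np.sum(w))

# e.g., experiments 1a and 1b each contribute their own difference and SE
print(combine(mean_diffs=[0.6, 0.4], std_errs=[0.45, 0.50]))
```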

By no standard should the above data be viewed as confirming the paper's hypothesis.

More Priming

Pre-suasion also points to additional priming research– Automatic Effects of Alcohol and Aggressive Cues on Aggressive Thoughts and Behaviors– to suggest that neuro-linguistic programming is real and important.  Cialdini describes it as presenting data that "alcohol cues and weapon cues automatically increased aggressive thoughts."

Table 1 from Automatic Effects of Alcohol and Aggressive Cues on Aggressive Thoughts and Behaviors

In this experiment, people were shown pictures of alcohol, weapons, or neutral bottles; then their reaction time was measured as they completed a simple lexical decision task, classifying words as either real or fake (fake words are things like sritter or marfle).  The real words were of two types– 'aggressive' or 'nonaggressive'.  The theory suggests that a preceding image of a weapon or alcohol will make you faster at processing only the aggressive words.

We see no meaningful difference in reaction times to aggressive or nonaggressive words across the primes– differences of tens of milliseconds against huge ~150 ms standard deviations.  We do see a small difference in how long it takes people to process aggressive versus nonaggressive words– the aggressive words take a little longer.  And it takes people a while to process garbled words.  There could be a real effect based on word type; at first glance, though, the prime appears to do nothing– the claimed effect only shows up in differences of differences.

I don't want to be overly harsh about this– analyzing response-time data is hard.  Response latencies are not normally distributed– the authors log-transform them (as they should), and comparing against a baseline is the right thing to do, though these techniques are rife with interpretability issues.  Without more methodological detail, it's hard to trust the full analysis.  Given what we can see, it's hard to call this strong evidence for a priming effect, especially in light of numerous failed replications of the same underlying idea.
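For a sense of scale, here's an illustrative comparison with simulated data– lognormal latencies with roughly the spreads described above (all parameters are assumptions, not the paper's raw data).  A ~10 ms shift simply vanishes into ~150 ms standard deviations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated latencies (ms): lognormal, ~10 ms mean shift, ~150 ms spread, n = 40.
primed   = rng.lognormal(np.log(640), 0.22, 40)
unprimed = rng.lognormal(np.log(650), 0.22, 40)

# Compare on the log scale, as the paper (appropriately) does.
print(stats.ttest_ind(np.log(primed), np.log(unprimed)))
```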

Even if we were to take the data at face value, we could come up with other plausible theories for what we see (perhaps weapons make it harder to process non-aggressive words; perhaps mismatched primes and stimuli take longer to process)– pick your pet theory (just don't publish it based on this data; that would be HARKing, hypothesizing after the results are known).  Even if any of these effects are real, we're talking about differences of 10-20 ms on a reaction time task that varies over hundreds of milliseconds just based on the stimulus type (with standard deviations of hundreds of milliseconds, all after that log transform).

The results are simply not meaningful.  A 10 ms change in reaction time doesn't imply anything substantive about real human thought or behavior; it's certainly not strong enough evidence to suggest aggressive primes actually evoke aggressive thoughts.  (In fact, most would probably argue a weapon evokes fear, not aggression, on average.)  Moreover, an unambiguous image of a gun is quite different from a semantically ambiguous word ('bullet point').

This study is not a good justification for avoiding phrases like 'attacking a problem'.  

Final Thoughts on Pre-suasion

Persuasion is real; persuasive tactics obviously work.  People can be exceedingly gullible, as we know from the popularity of cold reading, horoscopes, and other acts of mentalism.  Building rapport and trust, and applying social pressure, are demonstrably effective.  Conscious framing of a decision or a comparison can affect the decision-making process.  These effects are generally small, but they can accumulate.

The anecdotal evidence for that is clear– we are sometimes convinced to buy things we don't want.  We do sometimes get bullied by salespeople into add-ons and surveys we'd rather avoid, or find ourselves caught up in the idea of something only to have buyer's remorse.  However, coercion is not a successful long-term business strategy, and these issues aren't nearly as common as we'd expect if people were really so intensely manipulable in their daily lives.  Our heuristics for brand trust are actually quite good, and have worked well for us.

Moreover, years of research have shaped every aspect of the retail shopping environment– complete with attractive, persuasive, highly-trained and highly-motivated human salespeople. Even when we allow ourselves to shop in this completely artificial, controlled, immersive environment, it still doesn't render us incapable of making rational decisions.  If subtle words or unconscious primes could meaningfully drive our behavior, we would know it.

That's not to say that retailers can take it easy– shops without an inviting atmosphere, a strong brand, and favorable product displays won't succeed.  Branding, communication, and persuasion are imperative.  But people are less susceptible than we give persuaders credit for– their preferences are hard to predict, let alone control.  Good branding doesn't change those preferences; it appeals to them.

In any case, ethical persuasion is a natural part of every interaction we have, and our relationships with brands are no different.  Our understanding of this relationship has not shifted much in the last several decades– a lot of insightful people have lived over the last several hundred years, and everyone from Smith to Rogers to Bernays to Cialdini has had a chance to observe and theorize about the major drivers of human behavior, persuasion, and rhetoric.

Pre-suasion is a good synthesis of some of that preceding work; unfortunately, it mixes those useful insights with flawed modern pop social psychology.  As we've seen above, anything that seems shocking and new is– regrettably– very likely to be wrong.  Against the backdrop of the replication crisis, the selections from Pre-suasion above stand on weak empirical ground.