Chapter 6: Exploiting vulnerabilities in decision-making

If you think of the stream of information that enters your mind, you first have to perceive it, and then you have to comprehend it. I’ve explained how weaknesses in both of these areas can be exploited. After perception and comprehension occur, we need to engage in critical thinking – or what cognitive psychologists tend to call ‘judgement and decision-making’ – which can also be exploited for commercial gain.1 To quote whistleblower Christopher Wylie from his book Mindf*ck:2

‘The goal in hacking is to find a weak point in a system and then exploit that vulnerability. In psychological warfare, the weak points are flaws in how people think. If you’re trying to hack a person’s mind, you need to identify cognitive biases and then exploit them.’
— Christopher Wylie (2020)

A cognitive bias is a mental shortcut that tends to cause a systematic error in judgement and decision-making. Humans fall foul of these biases rather predictably, which led behavioural economist Dan Ariely to describe human behaviour as ‘predictably irrational’.3 Despite their shortcomings, cognitive biases are also believed to be beneficial: they offer shortcuts, ways to avoid effortful work and save time and energy for other, more important matters. Cognitive scientist Aaron Sloman describes this as ‘productive laziness’ and explains, ‘a chess champion who wins by working through all the possible sequences of moves several steps ahead and choosing the optimal one is not as intelligent as the player who avoids explicitly examining so many cases’.4 Sloman wrote this in 1988 – no doubt he would happily refer to the web instead of chess if he were to write it today. No sensible human would read every result on Google, or every product listing on Amazon, before choosing which item to click. Shortcuts are necessary to cope, and today we rely on cognitive biases more than ever, because we simply cannot process in detail all the information we receive.

Thousands of research papers have been published on cognitive biases, and well over one hundred distinct types have been proposed.5 Research on cognitive biases started to become well known in the early 2000s, entering the realms of pop psychology, business and design textbooks. The tech industry latched onto this with a great deal of enthusiasm. Some authors were very direct about the purpose of their work. In the introduction to his book Influence, Robert Cialdini refers to his area of work as ‘the psychology of compliance’ (that is, submission to the demands of others) and he describes his key principles as ‘six universal weapons of influence’.6 In the book Hooked, the author Nir Eyal promotes a ‘habit-forming’ behavioural model that is nearly identical to Natasha Dow Schüll’s model of ‘ludic loops’ – except Dow Schüll describes her model as ‘addiction by design’ and presents harrowing accounts of lives destroyed by gambling.7 Eyal is careful to avoid the word ‘addiction’, but the connection is obvious.

Today, numerous websites and blogs provide guides on how to exploit cognitive biases for profit; for example, the company Convertize provides a library of cognitive biases that it cheerfully recommends as ‘A/B Testing Ideas Based On Neuromarketing’, without any mention of negative consequences for the end user, such as being tricked or trapped into unwanted transactions or contracts.8

There’s also lots of content available about cognitive biases and persuasion that proposes use in a non-exploitative manner – but it’s a very short hop from ‘use this bias to persuade in a transparent and helpful way’ to ‘use this bias to see what happens in your next A/B test’. After all, as soon as a design is tested and has statistical evidence proving it to be more profitable than the other designs, it’s very likely to be adopted by the business with little further discussion, regardless of whether users truly understand the consequences of their actions.
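
To make that concrete, here’s a minimal sketch of the decision rule sitting behind a typical A/B test: a two-proportion z-test comparing the conversion rates of two designs. The figures and names are entirely hypothetical, and real experimentation platforms hide the maths behind a dashboard, but the underlying logic is broadly similar.

```typescript
// A minimal sketch of A/B test evaluation (all figures hypothetical).
// Compares the conversion rates of two designs with a two-proportion z-test.
function twoProportionZTest(
  conversionsA: number, visitorsA: number,
  conversionsB: number, visitorsB: number
): number {
  const rateA = conversionsA / visitorsA;
  const rateB = conversionsB / visitorsB;
  // Pooled rate under the null hypothesis that both designs perform equally
  const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB)
  );
  return (rateB - rateA) / standardError;
}

// Hypothetical results: design B uses a bias-exploiting pattern.
const z = twoProportionZTest(400, 5000, 475, 5000);
if (Math.abs(z) > 1.96) {
  // ‘Significant at the 95% level’ is usually where the conversation ends
  // and the winning design ships.
  console.log(`z = ${z.toFixed(2)}: adopt the more profitable design`);
}
```

Notice that nothing in this calculation asks whether users understood what they were agreeing to: it measures profit, not comprehension.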

Default effect

The default effect is a psychological phenomenon where people tend to stick with the status quo and choose the option presented to them as the default. It’s a bias that’s been studied in many different contexts, from consumer decisions to public policy. Businesses know that people are more likely to stick with the default option, so they often define the default to be favourable to the business in some way, typically through a preselected checkbox or radio button.
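
In implementation terms there’s barely anything to it. Here’s a minimal sketch – the form and field names are hypothetical, written as browser-style TypeScript – showing how a single boolean turns user inaction into apparent consent:

```typescript
// Hypothetical marketing opt-in on a checkout form. The only difference
// between a neutral design and a default-effect exploit is one boolean.
function addMarketingOptIn(form: HTMLFormElement, preTicked: boolean): void {
  const checkbox = document.createElement('input');
  checkbox.type = 'checkbox';
  checkbox.id = 'marketing-opt-in';
  checkbox.name = 'marketingOptIn';
  // With preTicked = true, anyone who fails to notice the box, or doesn't
  // spend the effort to untick it, is recorded as having 'consented'.
  checkbox.checked = preTicked;

  const label = document.createElement('label');
  label.htmlFor = checkbox.id;
  label.textContent = 'Yes, send me marketing emails from selected partners';

  form.append(checkbox, label);
}

const form = document.querySelector('form');
if (form) {
  addMarketingOptIn(form, true); // exploitative: inaction becomes consent
}
```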

One of the most famous studies on the default effect was carried out by researchers Eric J Johnson and Daniel Goldstein in the 2003 paper ‘Do Defaults Save Lives?’.9 They looked at organ donation consent rates in different countries, comparing countries in which citizens are opted out by default (shown on the left of the chart below) with countries in which they are opted in by default (shown on the right).

Chart showing the effective organ donation consent rates in several European countries. Where people are opted out by default, the consent rates vary from 4.25% (Denmark) to 27.5% (Netherlands); where people are opted in by default, consent rates are much higher, ranging from 85.9% (Sweden) to 99.98% (Austria).
Effective consent rates by country, from Johnson & Goldstein (2003).

As you can see, the difference in consent rates was enormous. A number of things are believed to drive the power of the default effect:

  • Awareness: for a user to change the default, they first have to become aware that it is possible to do so. (This harks back to the earlier section on exploitation of perceptual vulnerabilities.)
  • Effort: for the user to change from the default, they have to do something; in this case it involves finding and completing the correct government form. It is possible that citizens might intend to change their choice from the default, but not have time or energy to do so.
  • Authority bias and social proof: the default effect can be combined with other cognitive biases. For example, the default may be presented as the correct thing to do by a figure of authority (a doctor, for example). Alternatively, it may be portrayed as the thing that everyone else is doing (social proof). These are both known to be powerful cognitive biases in their own right.

In the book Misbehaving, Richard Thaler describes some follow-up research, looking at true organ donation rates as opposed to presumed consent rates.10 He found that while presumed consent may appear to work on paper, when people die in hospital, the staff will typically ask the family whether the organs should be donated. At that point the presumed consent frequently gets discarded, as there is no record of the individual’s actual choice. Thaler concluded that ‘mandated consent’ was a better policy: forcing citizens to make an explicit choice when they renew their driving licence.

The default effect has also been studied in the context of privacy and cookie consent dialogs. A large-scale study conducted by SERNAC, the Chilean consumer protection agency, provides compelling evidence of its power.11 Over 70,000 participants were presented with different cookie consent interfaces: one presented cookie tracking as opted in by default, while another presented it as opted out by default. The opted-out version increased the rate of users rejecting cookies by 86 percentage points.

As you can see from the evidence, the default effect is easy to employ and very powerful. It is often used by businesses in an exploitative way: to presume user consent for decisions where users might prefer to opt out, if only they knew the true nature of the decision being presented to them and were given an explicit choice.

Anchoring and framing

The anchoring effect is a cognitive bias in which individuals rely too heavily on the first piece of information they receive (the anchor) when making decisions. For example, Tversky and Kahneman (1974) conducted a study in which participants were asked to estimate the percentage of African countries in the United Nations.12 They were first given a random percentage (an anchor), then asked whether their estimate was higher or lower than that figure, and finally asked to provide their estimate. The results showed that participants’ estimates were significantly influenced by the anchor they were given: those given a higher anchor estimated a higher number, and those given a lower anchor estimated a lower number. This insight is frequently used by marketers in an exploitative manner when pricing consumer products – for example, setting an initial price artificially high so that a discount can be presented, giving a sense of value for money.

Framing is a similar cognitive bias in which individuals rely too heavily on the way information is presented rather than on the underlying facts. In 1981, Tversky and Kahneman carried out an experiment in which participants were given a scenario relating to a hypothetical disease and asked to choose between two treatment programmes.13 Depending on their experimental group, the outcomes of the treatment programmes were framed either positively (‘X people will be saved’) or negatively (‘Y people will die’). They found that the framing had a pronounced effect on participants’ choices, even though the underlying facts were identical in both cases.

In the book Predictably Irrational, Dan Ariely reported a study that demonstrates the manipulative power of this type of cognitive bias.14 He created two different fictional designs of The Economist magazine’s subscription page, and presented them to 200 students (100 per design), asking them to pick their preferred subscription type. Unknown to the participants, one of the designs contained a trick (design A, below), intended to get participants to perceive the combined print and web subscription as better value. It involved providing an extra ‘decoy’ subscription: the print magazine on its own for the same price as the print and web subscription. As you can see in the figure below, the presence of the decoy print subscription in design A caused the print and web subscription to be selected much more frequently (84% selected) than when it was omitted in design B (32% selected).

In design A, the options presented to the user were: an Economist.com subscription at a cost of $59 (selected by 16/100 participants); a print subscription at $125 (selected by no one); and a print and web subscription at $125 (selected by 84/100). In design B, the print-only subscription was removed, and 68/100 chose the web subscription at $59 and 32/100 chose the print and web subscription at $125.
Dan Ariely’s Economist magazine study, where the presence of a decoy option influenced participants’ decision-making.
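
The commercial appeal becomes even clearer if you convert those selection counts into revenue. Here’s a quick back-of-the-envelope sketch using the prices and figures reported above:

```typescript
// Revenue per 100 participants, using the selection counts from
// Ariely's study as reported above.
const WEB_PRICE = 59;            // Economist.com subscription ($)
const PRINT_AND_WEB_PRICE = 125; // print and web subscription ($)

// Design A (with the $125 print-only decoy): 16 web, 0 decoy, 84 print+web.
const revenueA = 16 * WEB_PRICE + 84 * PRINT_AND_WEB_PRICE; // 11444

// Design B (decoy removed): 68 web, 32 print+web.
const revenueB = 68 * WEB_PRICE + 32 * PRINT_AND_WEB_PRICE; // 8012

// Nothing of value was added (the decoy was an option nobody chose),
// yet revenue per 100 participants rose by roughly 43%.
console.log(((revenueA - revenueB) / revenueB * 100).toFixed(1) + '%'); // 42.8%
```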

Social proof

Social proof is a cognitive bias in which individuals tend to conform to the behaviour of others. It’s also known as the ‘bandwagon effect’, ‘groupthink’ or the ‘herd effect’. To put it another way, if we see that numerous other people perceive...

Since 2010, Harry Brignull has dedicated his career to understanding and exposing the techniques that are employed to exploit users online, known as ‘deceptive patterns’ or ‘dark patterns’. He is credited with coining a number of the terms that are now popularly used in this research area, and is the founder of the website deceptive.design. He has worked as an expert witness on a number of cases, including Nichols v. Noom Inc. ($56 million settlement) and FTC v. Publishers Clearing House LLC ($18.5 million settlement). Harry is also an accomplished user experience practitioner, having worked for organisations that include Smart Pension, Spotify, Pearson, HMRC, and the Telegraph newspaper.