Chapter 32: AI, hypernudging and system-level deceptive patterns

The sudden explosion of AI tools in 2023 has received a great deal of attention from governments, regulators and tech ethicists. It is understood that AI will make misinformation and disinformation campaigns much easier. Deep fakes,[1] bots,[2] and a tidal wave of fake content are among the concerns.[3] The impact of AI on deceptive patterns is less frequently discussed, but the themes are similar.

For example, AI could be used to assist designers in creating conventional deceptive patterns. In the same way that Midjourney[4] and Dall-E[5] can be used to generate images, there is a new wave of tools that can generate user interfaces from text prompts, such as Uizard Autodesigner[6] and TeleportHQ AI website builder,[7] as you can see in the screenshot below:[8]

Screenshot of Uizard’s Autodesigner interface, showing a project generation screen. The user can choose a device type (mobile, tablet or web); describe the project ‘in plain English’; describe ‘a design style’ in text or by selecting keywords (such as ‘Modern’, ‘Artsy’, ‘Techy’ and ‘Hand-drawn’); and then press a button to generate the design.

At the time of writing, this type of tool is fairly basic, but given the rapid acceleration of AI, it’s reasonable to assume that it could become widespread soon. AI tools are reliant on training data – Midjourney is trained on millions of images from the web, and ChatGPT is trained on millions of articles. Websites and apps today tend to contain deceptive patterns, so if this new wave of UI-generating AI tools is trained on them, those tools will reproduce variations of the same sorts of deceptive patterns unless special efforts are made to prevent them from doing so. You can imagine a junior designer giving a fairly innocuous text prompt like ‘Cookie consent dialog that encourages opt-in’ and receiving in return a design that contains deceptive patterns. A further consequence of AI automation is that design teams will probably be smaller, so there will be fewer staff to provide critique and push back on deceptive patterns when they’re created.

In 2003, Swedish philosopher Nick Bostrom conceived a thought experiment called the ‘paperclip maximiser’. The idea is that if you give an AI autonomy and a goal to maximise something, you can end up with tragic consequences. In Bostrom’s words:[9]

‘Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.’
– Nick Bostrom (2003)

Of course, this idea is currently science fiction, but if we swap paperclips for pay-per-click – or indeed for any kind of design optimisation based on tracked user behaviour – then it suddenly becomes a lot more realistic.

Some aspects of this idea have been around for years. For example, Facebook[10] and Google[11] give business owners the means to load a range of ad variations for A/B testing (measuring click-through rate or similar) and then have the tool automatically select the winner. This takes the human out of the loop, so the advertiser can press the ‘run’ button and then leave it to do the job. If they happen to check in after a few weeks, they’ll find that ‘survival of the fittest’ has occurred. The ads that people didn’t click have been killed off, and the ad that was most persuasive at getting clicks has become the winner, shown to all users.
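To make that mechanism concrete, here is a minimal sketch in Python of the kind of ‘survival of the fittest’ selection such a tool performs. It is not Facebook’s or Google’s actual code – the variant names, the epsilon-greedy strategy and the traffic-splitting value are all illustrative – but it shows how little logic is needed to funnel traffic towards whichever ad gets the most clicks:

    import random

    # Hypothetical ad variants supplied by a human; counts start at zero.
    variants = {
        "ad_a": {"shows": 0, "clicks": 0},
        "ad_b": {"shows": 0, "clicks": 0},
        "ad_c": {"shows": 0, "clicks": 0},
    }

    EPSILON = 0.1  # small fraction of traffic still spent exploring the other variants

    def ctr(stats):
        """Observed click-through rate, defaulting to zero before any impressions."""
        return stats["clicks"] / stats["shows"] if stats["shows"] else 0.0

    def choose_variant():
        """Epsilon-greedy selection: usually show the current best performer."""
        if random.random() < EPSILON:
            return random.choice(list(variants))
        return max(variants, key=lambda name: ctr(variants[name]))

    def record_impression(name, clicked):
        """Update the stats after each ad impression."""
        variants[name]["shows"] += 1
        if clicked:
            variants[name]["clicks"] += 1

    # Over time, almost all traffic goes to whichever variant gets the most clicks –
    # with no check on whether that variant is honest or deceptive.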

These systems are not autonomous, thankfully: a human has to set them up and provide the design variations, and they’re limited to just advertisements. With the new generation of AI tools, we can imagine this working at a grander scale, so a human would be able to give a broad, open-ended brief, press ‘go’ and leave it running forever – writing its own copy, designing and publishing its own pages, crafting its own algorithms, running endless variations and making optimisation improvements based on what it has learned. If the AI tool has no ethical guardrails or concept of legal compliance, deceptive patterns are inevitable – after all, they’re common, easy to build, and generally deliver more clicks than their honest counterparts.
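To illustrate the shape of such a system, here is a hypothetical sketch of that loop. The generate_page_variant and measure_conversion_rate functions are placeholders standing in for an AI generator and for live traffic measurement – no real product works exactly like this – but notice that nothing in the loop ever asks whether the winning design is honest or legally compliant:

    import random

    def generate_page_variant(brief, learnings):
        """Placeholder for an AI generator producing copy, layout and flows.
        In this sketch it just returns a labelled dict; no real model is called."""
        return {"brief": brief, "seed": random.random(), "learnings": dict(learnings)}

    def measure_conversion_rate(variant):
        """Placeholder for measuring live traffic; here it returns a random score."""
        return random.random()

    def optimisation_loop(brief, iterations=100):
        """Generate, measure and keep whichever variant converts best."""
        best_variant, best_score, learnings = None, -1.0, {}
        for _ in range(iterations):
            candidate = generate_page_variant(brief, learnings)
            score = measure_conversion_rate(candidate)
            if score > best_score:  # keep whatever converts best...
                best_variant, best_score = candidate, score
                learnings["last_winner_seed"] = candidate["seed"]
        return best_variant  # ...with no ethical or legal check anywhere in the loop

    # e.g. optimisation_loop("Cookie consent dialog that encourages opt-in")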

Related to this is the idea of persuasion profiling[12] or hypernudging[13] – a system that tacitly collects behavioural data about which persuasive techniques work on an individual or market segment, and then uses that knowledge to show deceptive patterns that are personally tailored to them. For example, if a system has worked out that you’re more susceptible to time pressure than to other cognitive biases, it will show you more deceptive patterns that take advantage of time pressure. Research so far has focused on cognitive biases, but it’s easy to imagine it being extended to target other...
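As a rough illustration of the mechanism, the sketch below tracks which persuasive technique each user responds to and then picks the one they appear most susceptible to. The technique labels and data structures are invented for illustration; a real hypernudging system would be far more elaborate:

    from collections import defaultdict

    # Hypothetical technique labels, used for illustration only.
    TECHNIQUES = ["time_pressure", "social_proof", "scarcity"]

    # profile[user_id][technique] = [times_shown, times_converted]
    profile = defaultdict(lambda: {t: [0, 0] for t in TECHNIQUES})

    def record_outcome(user_id, technique, converted):
        """Tacitly log whether a nudge using this technique worked on this user."""
        stats = profile[user_id][technique]
        stats[0] += 1
        if converted:
            stats[1] += 1

    def most_effective_technique(user_id):
        """Pick the technique with the highest observed conversion rate for this user."""
        def rate(technique):
            shown, converted = profile[user_id][technique]
            return converted / shown if shown else 0.0
        return max(TECHNIQUES, key=rate)

    # The system would then render whichever deceptive pattern exploits that
    # technique – e.g. a fake countdown timer for 'time_pressure'.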


Since 2010, Harry Brignull has dedicated his career to understanding and exposing the techniques that are employed to exploit users online, known as “deceptive patterns” or “dark patterns”. He is credited with coining a number of the terms that are now popularly used in this research area, and is the founder of the website deceptive.design. He has worked as an expert witness on a number of cases, including Nichols v. Noom Inc. ($56 million settlement), and FTC v. Publishers Clearing House LLC ($18.5 million settlement). Harry is also an accomplished user experience practitioner, having worked for organisations that include Smart Pension, Spotify, Pearson, HMRC, and the Telegraph newspaper.