Chapter 19: Obstruction

In part 2, I explained how purposefully difficult-to-use UI design (sludge) can be employed as part of an exploitative resource-depletion strategy: fatiguing users until they give up on their goals, or softening them up before a bigger deception. This is exactly what the obstruction category of deceptive pattern is all about.

Obstruction in Facebook and Google’s privacy settings

When the GDPR came into effect in the EU, companies were obliged to change the way they presented their privacy options to ensure that users had the means to consent (or dissent) to proposed uses of their personal data. This consent needed to be ‘freely given, specific, informed and unambiguous’ (article 4(11)).1

On behalf of its citizens, in 2018 the publicly funded Norwegian Consumer Council (Forbrukerrådet) carried out an investigation into this practice.2 It found that Facebook and Google had used deceptive patterns in their user interfaces to ‘nudge users away from the privacy-friendly choices’. They did this by using obstruction: they made it easy to accept the privacy-invading settings and hard to reject them. You can see this in the figure below. It’s one click to ‘accept and continue’, but if the user wants to ‘reject and continue’, there is no equivalent button. Instead, they have to click the ambiguously labelled ‘manage data settings’ button, and then they have to push an ambiguous toggle to the left. Notice that the toggle is improperly labelled – the user is not clearly told whether they have successfully rejected the ad tracking.3

Screenshot of three mobile screens showing Facebook’s personal data settings in 2018. After the user clicks the prominent blue ‘Get started’ button on the first data settings screen, the next screen shows a summary of and links to Facebook’s data privacy policy; at the bottom, the ‘Accept and continue’ button shares the prominence of the earlier ‘Get started’ button, but the button to ‘Manage data settings’ is ambiguously labelled and shaded white, the same as the background colour. The final screen, reached by pressing the ‘Manage data settings’ button, shows a toggle control under the heading ‘Ads based on data from partners’ with the label ‘Allowed: Ads will be more relevant to you’. The toggle’s default setting is slid to the right, but there is no indication that sliding it to the left will prevent a user’s personal data being shared.
Facebook’s data settings (Forbrukerrådet, 2018).

Google’s approach was similar. Google required users to sign in first and then seek out the privacy settings dashboard of their own volition. From there, the user could choose to opt out. Again, this is obstruction, and it is the opposite of what the GDPR requires.4

Screenshot of Google’s ad personalisation settings. To the right of a heading ‘Ads Personalisation on Google Search’ and a label ‘See more useful ads when you’re using Google Search’ is a toggle control, positioned to the right; no indication is given of what happens if it is slid to the left. At the bottom of the screen, under the heading ‘Ads Personalisation Across the Web’ and the label ‘See more useful ads on YouTube and 2+ million websites that partner with Google to show ads’, are two orange buttons labelled ‘TURN ON’ and ‘TURN OFF’ in the same font, weight and colour.
Google’s data settings (Forbrukerrådet, 2018).

In both examples, users were required to expend more effort and attention to opt out than to just go along with the defaults and be automatically opted in. The Norwegian Consumer Council argued this was an example of the default effect bias being employed as a deceptive pattern for commercial gain.

In 2018, the Norwegian Consumer Council filed a complaint against Google on this matter. Five years later, it is still waiting for a final decision from the Irish Data Protection Commissioner.

In 2022, ten European consumer groups filed a second complaint against Google for employing similar tactics, this time with a greater focus on location data and the Google account sign-up process.5 As BEUC writes: ‘Tech giant Google is unfairly steering consumers towards its surveillance system when they sign up to a Google account, instead of giving them privacy by design and by default as required by the General Data Protection Regulation (GDPR).’

The hard to cancel deceptive pattern

Hard to cancel is a type of obstruction that involves businesses making it difficult for users to cancel their subscriptions. It is often paired with a very easy, frictionless subscription experience – making it easy to join and hard to leave. This pairing is sometimes referred to as a ‘roach motel’ – a humorous reference to a pest control device of the same name.6

Hard to cancel by the New York Times

The New York Times has published a number of articles on deceptive patterns over the years, and it holds a progressive view on consumer rights and regulation. However, until recently it did not extend that view to its own digital services, and it became famous for using the hard to cancel deceptive pattern.

As Twitter user @vanillatary succinctly put it: ‘this should literally be illegal […] the extra subscriptions they keep by making it annoying and time consuming to unsubscribe are clearly worth far more than the cost of hiring additional staff to handle the unsubscribe calls, which could’ve been handled 100 times more efficiently by a few lines of web code […] So some % of the NYT’s business model is based on holding on to paid customers who no longer actually consider their product worth the cost’.7

The screenshots below show an easy-to-follow subscription process on the one hand, and an obstructive cancellation process on the other. Instead of giving the user a button to directly cancel a subscription, the NYT provided instructions on contacting customer services.8

Screenshot of the New York Times subscription form. Under the heading ‘SPECIAL OFFER Unlimited access to all the journalism we offer’ and the price is a clear black button with ‘SUBSCRIBE NOW’ in white text; below the button is the message ‘Cancel or pause anytime’.
A typical New York Times subscription upsell page in November 2021, provided by Twitter user @vanillatary.
Screenshot of the New York Times ‘Cancel your subscription’ page. The text states ‘There are several ways to unsubscribe from The Times.’ Further information given is ‘Speak with a Customer Care Advocate’, with a telephone number and working hours provided; alternatively, a user can ‘Chat with a Customer Care Advocate’ using an online chat service. These cancellation options demand far more effort from the user than subscribing does.
The New York Times subscription cancellation page, provided by Twitter user @vanillatary in November 2021.

In February 2021, someone published a transcript of their NYT live chat cancellation experience, which showed the process had taken 17 minutes...


Since 2010, Harry Brignull has dedicated his career to understanding and exposing the techniques that are employed to exploit users online, known as “deceptive patterns” or “dark patterns”. He is credited with coining a number of the terms that are now popularly used in this research area, and is the founder of the website deceptive.design. He has worked as an expert witness on a number of cases, including Nichols v. Noom Inc. ($56 million settlement), and FTC v. Publishers Clearing House LLC ($18.5 million settlement). Harry is also an accomplished user experience practitioner, having worked for organisations that include Smart Pension, Spotify, Pearson, HMRC, and the Telegraph newspaper.