
Addressing digital harms: a right of appeal is not sufficient


Adam Jones

In the privacy and AI governance space, people often suggest that a right to appeal or object could resolve many problems.

I think this will almost never properly remedy the harm. This article explains why.

People don’t know about the processing

A lot of the time, people may be completely unaware of some processing. Despite GDPR article 12(1), a lot of processing is unfortunately opaque, and the privacy policies that are supposed to provide clarity are often just template legal documents that don’t explain anything. Regulators have in general been quite poor at enforcing the transparency provisions - action against large companies here is weak, and action against small companies is nearly non-existent.

Even when people are aware that some processing is happening, they may not know how it is being carried out - for example, what steps are taken to reach a decision, how automated those steps are, and what checks and balances are in place. This can make identifying and appealing mistakes much more difficult.

In theory, the right to be informed and the right of access should help with this. However, in my experience companies regularly refuse to explain how they process data, saying that doing so would go against their ‘commercial interests’. This is not an exception under the GDPR, and so the ICO will uphold complaints against companies and force them to divulge this information - but going through this process takes a lot of time and energy, which likely dissuades many people.

People don’t know they can appeal or object

Even if people know enough about the processing to think it might be incorrect, they might not be aware of their ability to appeal or object to a decision.

Only half of UK and EU citizens know the right of access exists. Given that this is probably the most basic right, awareness of the right to object and the right to request a review of automated decision-making is presumably far lower.

To make this worse, companies often mislead people about their rights. This muddies the water further, and may make it harder for people to realise they can pursue an objection or appeal.

Exercising rights has some cost

Maybe in a future world we’ll have AI assistants that know when we’d want to object or appeal things, and on their own initiative go and do so effectively.

In the world we currently live in, objecting to processing is often quite costly for the requestor. It usually involves:

  • Figuring out what processing has happened. This can be difficult, as explained above.
  • Finding contact details for the data controller. Officially, GDPR article 13(1)(a) or 14(1)(a) requires this to be clearly signposted in a privacy policy. In reality, contact details are often out of date, lead to a broken contact form, or are difficult to use (for example, only providing a postal address in a foreign country).
  • Writing up the objection, with reasons.
  • Usually, arguing with the company about what rights exist under the GDPR, whether certain forms must be used (they don’t), and what identification is required.¹
  • Often then waiting for a long time, potentially nudging the company every now and then. And in rare cases, having to lodge a complaint with an (ineffective) supervisory authority and go through the rigmarole involved with that, or go to court (which exacerbates power imbalances).

Together, this can take a lot of time: a cost that falls on the victim. This time is not usually reimbursed, even if an important decision is overturned or the company itself made the process much more difficult.² This isn’t to say I expect companies to cover all costs all the time.³ But it does highlight an imbalance, and a reason that appeals processes, by default, usually leave costs with the victim.

Bad decisions, even if overturned, have consequences

Imagine you apply for a job. If you’re incorrectly automatically screened out of being considered, you might be able to appeal to get the decision corrected - but it will likely take some time until you’re informed of the decision, make the appeal, have it considered, and get the decision changed.

By the time the decision is corrected, it may be several weeks or months later - delaying you from starting the job. This could mean you miss out on income, or end up taking a less favourable job with another company because of offer deadlines. These harms are almost never compensated when decisions are challenged - I could see this scenario being waved away as ‘well you have a job now anyways’.

These harms are likely to be concentrated on certain groups. Algorithms almost always have some biases against some groups - and often these are entrenched human biases that discriminate based on protected characteristics. Even if appeals enable people to get faulty decisions overturned, we could be concentrating the kinds of costs above on specific groups in society, most likely those who are already discriminated against. The victim should not have to shoulder further costs after proving a corporation has a discriminatory process.

What can be done about this

In general, companies should not rely on appeals or objections to make things right. Doing so shifts responsibility onto victims in an unhelpful way, given they have less insight into how the processing is being carried out and challenging mistakes can be procedurally difficult. Even where appeals processes work well, they still impose potentially significant costs on individuals, which will likely be concentrated on specific disadvantaged groups in society. In practice, this means companies should consider other methods to mitigate harms.⁴

Where appeals are available, ensure people are well informed. People need both to be told about their right to appeal and to have enough information to make appealing practical.

When people exercise their rights, consider reimbursing them. Companies should strive to make their appeals processes simple and quick. Where mistakes are identified, remedial actions should put victims as close as possible to where they would otherwise have been. Companies should reimburse victims for the time spent getting the issue resolved - and at a minimum should reimburse any excess time caused by the company’s own mistakes or unnecessary extensions of the process.

Footnotes

  1. Companies often ask for copies of identity documents like a passport or driving license. Given that they don’t hold these documents in the first place and are unable to verify them anyway, this (1) exposes the data subject to more risk, and (2) offers the data controller no more confidence that the requestor is genuine.

    If they do want to improve security, they should use their existing security procedures to authenticate the person. For example, have them log in to their online account (provided they have access to one!), send an access code to the person’s confirmed email or phone number, or confirm details the requestor might know about their data. They might also want to consider the sensitivity of the data being disclosed, with greater confidence in the person’s identity required for disclosing more sensitive data.

    Some also ask for ‘proof of address’. Don’t get me started…

  2. In industries that have other complaints-handling rules enforced by a regulator (like the FCA’s DISP rules), mistakes or poor complaints handling can result in a modest amount of compensation for the frustration, inconvenience and distress caused. NB: this is not intended as reimbursement for the time spent directly resolving the issue - meaning people will still often be left effectively out of pocket.

    However, this is rare, with most industries not being regulated in this way.

  3. Covering costs where the company makes the process more difficult than it needs to be might be sensible though - although I appreciate that defining this threshold, and agreeing a rate at which to compensate people, might be difficult!

  4. I’ve previously written about improving human-in-the-loop systems, and plan to soon publish an article detailing a number of concrete AI governance interventions.