
Transparency: a noose around consumers’ necks?

Posted on: Monday 8th of June 2015

Sometimes the solutions people propose to problems make things worse rather than better. If we don’t watch out, calls for increased ‘transparency’ around the collection and use of personal data risk becoming a great case study.

Proponents of transparency argue that if more information is made available about how personal data is collected and used, consumers will be better informed and will therefore make better decisions: transparency is key to the holy grail of ‘informed consent’.

The problem with transparency (and informed consent) is the assumption that people are prepared to invest the time and effort needed to read all the small print they are presented with, understand it, and decide on the most appropriate response.

In fact, for multiple reasons analysed in depth by behavioural economists and others, most people make most decisions using rules of thumb that cut out the cost and effort of researching, understanding, sifting and weighing huge amounts of complex information. Real human beings are ‘cognitive misers’: they routinely avoid things that impose a cognitive load, such as the small print that transparency pushes at us.


Punished for being human

The gap between the cognitive-miser reality and the assumptions behind ‘informed consent’ is enough of a problem on its own. But it gets worse because, working on false assumptions about human behaviour, today’s rules and regulations draw unfair moral conclusions which have become enshrined in law. The argument goes like this: “You had a chance to inform yourself about the terms and conditions you were agreeing to. If you chose not to, it’s your fault, so you can’t complain about any detriment that may arise. After all, you ticked the box. You agreed!”

In this way ‘rational actor’ assumptions about human decision-making punish human beings for being human twice over.

1) First, they impose a cognitive load. Transparency creates a burden of work for consumers, who now have to read, understand and make complex decisions about what is being made transparent.

2) Second, by assuming consumers should shoulder this burden, they enable a massive transfer of risk. If anything goes wrong, it’s now the consumer’s fault: they should have read the T&Cs, and if they didn’t like them, they should have said no.

In the current environment, therefore, calls for increased transparency around personal data simply risk exacerbating problems that already exist in spades.


Finding an answer

If transparency is so problematic, what’s the solution? Less transparency? More opaqueness? Of course not.

The solution is to recognise that partial solutions are not good enough: a complete package may be needed to do the job. Increased transparency is vital, but only as one ingredient of a bigger process, just as flour is vital to bread-making but only really works when combined with water and yeast and taken through the process of baking.

What sort of package are we talking about? Well, transparency is pointless without control. What’s the point of knowing something if you can’t act on it? According to recent research from the University of Pennsylvania, 57% of American consumers don’t want to hand over data to companies, but they tick the terms and conditions anyway because they are resigned to the fact that they can’t do anything to stop it. That’s what transparency without control creates: resignation, or ‘learned helplessness’ as the psychologists call it.

But there’s more, because control is pointless without value. What’s the point of having control if I can’t achieve a demonstrable benefit by exercising it? The issue here is not ‘privacy’ but purpose and value – actually being able to achieve something.

So: just as we need flour, water and yeast combined to make bread, we need transparency, control and value combined if we are to make genuine progress. Each alone is not enough and may actually make matters worse.


Reducing the cognitive load

We also have to remember the cognitive-load side of things. Transparency has to be part of a workable process that transforms the individual ingredients (like baking). We can’t expect millions of consumers to invest millions of hours reading and understanding thousands of different privacy policies and terms and conditions. Transparency needs to be the fuel for specialist services that analyse the small print and present the results in ways real human beings (‘consumers’) can understand in a nanosecond. This includes embedding the results of privacy assessments in browser settings, and using trust marks and trust frameworks (where I ‘just know’ I’m safe doing business with these people, because other people have checked them and verified they are fine).
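
To make the idea concrete, here is a minimal sketch in Python of what such a specialist service might do: reduce a machine-readable privacy policy to a one-glance, traffic-light verdict that a browser could display. Every name in it (PolicySummary, TRUST_REGISTRY, verdict) is a hypothetical illustration, not an existing product or API, and the scoring rules are deliberately simplistic assumptions.

```python
# Hypothetical sketch only: a 'specialist service' that turns raw
# transparency (a machine-readable privacy policy) into a one-glance
# verdict, so the consumer carries none of the cognitive load.
from dataclasses import dataclass


@dataclass
class PolicySummary:
    """Illustrative summary of a site's privacy policy (assumed fields)."""
    site: str
    shares_with_third_parties: bool
    retention_days: int
    allows_deletion: bool


# A toy trust framework: sites vetted by a (fictional) external auditor.
TRUST_REGISTRY = {"example-shop.com": "verified by ExampleTrustMark"}


def verdict(policy: PolicySummary) -> str:
    """Reduce a policy to a traffic-light signal plus any trust mark."""
    mark = TRUST_REGISTRY.get(policy.site)
    if mark and policy.allows_deletion and not policy.shares_with_third_parties:
        return f"GREEN ({mark})"
    if policy.allows_deletion and policy.retention_days <= 365:
        return "AMBER (data shared or retained; deletion possible)"
    return "RED (read the small print yourself before agreeing)"


print(verdict(PolicySummary("example-shop.com", False, 90, True)))
# -> GREEN (verified by ExampleTrustMark)
```

The design point is the one this section has been making: the reading, sifting and weighing is done once, by the service, instead of millions of times over by individual consumers.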

There’s a simple conclusion here. Transparency is not a panacea. Indeed, pursued in isolation it can actually exacerbate rather than alleviate current problems. But like the flour in that loaf of bread, it’s an essential ingredient of a broader solution – for which new types of personal information management services (PIMS) are needed.