
Personal data: does trust REALLY matter?

Posted on: Sunday 26th of July 2015

In a world where consumers say one thing but consistently do another, it can be hard for brands to get their bearings.

With personal data, the ‘privacy paradox’ is now a classic example. The more consumers say they are worried about losing control of their data, the more data they seem to hand over.

Of course there are lots of issues here, including degrees of consumer ignorance and resignation, as well as corporate judgements about reputational risk.

But measurement is also a core part of the problem. It’s easy to measure the things that consumers are doing, but hard to measure the things they aren’t doing. Yet the things they aren’t doing may be more important.

For example, in recent Ctrl-Shift research we found that over 30% of consumers had decided not to use a service, not to install an app, or to de-install an app or software because of data concerns. That’s a lot of lost business – losses that probably aren’t being measured.

 

The trust spectrum

In fact, there isn’t one single measure that neatly captures what matters in personal data. That’s because there is a spectrum of behaviours relating to degrees of trust.

Some consumers simply won’t deal with some brands or services because they don’t trust them. These brands are almost certainly not measuring this lost business.

As our research found, many more consumers are not doing particular things because of personal data concerns. Another symptom of low levels of trust is ‘dirty’ data. According to recent research by Verve, 60% of UK consumers are now providing false information to protect their privacy. That’s a big cost and a big lost opportunity that’s almost certainly not being measured.

Further along the spectrum there is ‘wary’, pragmatic trust. If you want a delivery, you accept you need to hand over your address. If you want to pay online, you need to hand over your card details. Many consumers do this, but won’t do anything more.

But many will, given the right conditions. Loyalty schemes like Tesco Clubcard and Nectar long ago made a ‘deal’ with consumers. Consumers gave them permission to collect lots of additional data in exchange for a reward, and on the condition that it wasn’t shared with third parties. Consumers trusted these brands enough to share more data. But how do you measure the difference in value between permissioned data collection and wary data exchange?

 

A new competitive playground

More recently, some practitioners have begun talking about ‘incremental permission value’: the more trust you build, the more data consumers are prepared to share with you or allow you to collect, the more you can do with it.

At Ctrl-Shift, we think this is opening the door to something entirely new. It’s opening up a new competitive battleground around trust and data sharing.

By giving consumers increasing levels of transparency and control, by treating personal data as a precious personal asset as well as a precious corporate asset, innovators are developing powerful new consumer information services that are driven by, and based on, rich data sharing. These new services offer consumers new dimensions of value allowing them to do things they simply couldn’t do before, while opening up new revenue streams for service providers.

The difference between this and consumers deciding not to do business with you is vast. And measurable, if not precisely.

Trust around personal data does matter, but not because it’s a nice warm cuddly thing without real business benefits. It matters because low levels of trust are now leading to lost opportunities, increased costs and waste (through dirty data), and because increased levels of trust enable new levels of data sharing that powers innovation.