Posted on: Tuesday 3rd of March 2009
For marketers facing tightly constrained budgets, demonstrating marketing ‘accountability’ and proving ROI has never been more important. Yet, despite years of debate and research into this issue, we seem to have made little progress. I think there are three reasons for this. They are all connected.
First, if an equation takes the form of “a + b = ….” and we don’t know what sits on the other side, we can never solve it, in principle. Most marketing metrics take this form. Companies measure how much their marketing costs them. And they measure what benefits this marketing delivers back to them – sales uplifts, market share increases, margin improvements, and so on. But they are not measuring the customer’s metrics. They have no way of knowing whether their marketing activities have added value for their customers, or destroyed it. Because of this, they cannot help but fly blind.
Second, most attempts to measure marketing effectiveness take the form of a scientific experiment: measure the ‘before’ (ideally keeping that ‘before’ in place via a control group), and then measure the ‘after’ to see what changes you have wrought. This is only scientific, however, if you assume that one side of the equation (the marketer) is unilaterally triggering a change on the other side – the customer, who is 100 per cent passive, just sitting there, waiting to be worked upon and changed by an outside force (as in a scientific experiment).
The model which marketers use to measure ‘effectiveness’ therefore rests on a deep assumption: that we are measuring a ‘change’ which ‘marketing’ alone has brought about. The model falls apart, however, if the other side is an active party doing its own thing – choosing which bits of marketing to pay attention to and respond to, for example. If the process does not take the form of ‘A unilaterally changing B’, then any attempt to measure it as if it were is not science at all. It’s pretend science. It’s adopting the form, not the content, of science. It’s a very good way of never learning.
An alternative model suggests that marketing is successful when it generates win-wins. Measuring ‘win-wins’ is very different to measuring unilaterally imposed ‘changes’. (We are back to the issue of customer metrics and unsolvable equations).
Third, most marketing activities have different effects which relate to how human minds work. For example, we now know that the ‘mere exposure’ effect means that people prefer things that are familiar to them, compared to things that are not. This has got nothing to do with the actual qualities of the things themselves; it’s to do with risk. The known is less risky than the unknown, and we prefer to reduce risk.
This means that the initial effects of awareness advertising can be huge, because it delivers a double whammy: A) you cannot choose a product you are not aware of, and B) you are more likely to prefer a product that is familiar than one that is not. However, it also means that once someone has been made aware of the product, the effects of more awareness advertising can be vanishingly small. So awareness advertising can have huge effects … but only within pretty narrow and tight boundaries.
We have identified at least a dozen different such reactions between marketing ‘stimuli’ and the way human minds work. They all have different dynamics. They all operate within different effect-ranges or boundaries. Some of them deliver win-wins. Some of them generate conflict and undermine trust. Often, the same marketing ‘stimulus’ – an advertising campaign for example – includes many of these different dynamics all at the same time. Sometimes they work together, in synergy. Sometimes they cancel each other out.
To try and measure the effectiveness of ‘advertising’ as a blanket concept, without understanding the effects of these sub-components, is a recipe for confusion. It’s like a chemist trying to understand the nature and dynamics of a chemical reaction while not being aware of the existence of the periodic table.
Together, these three issues:
- equations that are unsolvable in principle
- metrics models that try to measure the wrong thing
- the attempt to do ‘chemistry’ without an understanding of the periodic table
mean that traditional approaches to marketing metrics are doomed to disappoint. The only possible outcome is blind, suck-it-and-see empiricism. “Oh! That seemed to work. Let’s do more of that! Oh dear! It worked well last time – why didn’t it work this time?” On and on forever, without getting any closer to a real understanding.
It really is high time we moved beyond this!