Saturday 28 March 2009

Marketing Programme Measurement – Avoiding a crash

The latest edition of Wired had a great article on the recent financial meltdown and, typical of Wired, it looked at it from a different angle. The focus of their editorial was the underlying formula that allowed banks to forsake their normally risk-averse behaviour and instead pour billions into essentially high-risk investments, thinking they were triple-A rated.

I won't try to relay the whole article here (you can read an extract at Wired), but essentially a formula for helping to measure risk, known as the Gaussian copula function, was created a few years ago. It allowed banks to risk-assess investments very quickly by looking at the price of credit default swaps (essentially insurance against an investment defaulting – the higher the price, the more likely the default) rather than going through the lengthier and more complicated process of examining the underlying history of defaults. The problem was that credit default swaps had only been in existence for a decade or so – a period during which house prices kept rising – and so the formula didn't account for what happens when things go wrong, such as a large number of people losing their jobs and defaulting on their mortgages.

In essence this formula was using an aggregate past picture of performance to rate future performance without looking at the individual detail – and we all now know the consequences.

This is not however dissimilar to measures we also use within marketing programmes.

If you look at a measure such as a Customer Satisfaction Index or the more recently fashionable Net Promoter Score, they use people's past experience, and their reaction to it, to help predict what to change (or not) in the future. The problem with this is that they are essentially "lagging indicators" or "reflective measures" – they report what people wanted yesterday, but not what they will want tomorrow. The book "The Certainty Principle: How to Guarantee Brand Profits in the Consumer Engagement Marketplace" uses an interesting example to illustrate the problem with measuring brand engagement through lagging indicators, likening it to "driving down the Interstate at 75 miles an hour using only your rear view mirror to steer." The authors go on to point out that "it's worth remembering that two million iPhone owners were once someone's 'satisfied' customer [or Promoter]" – so a lagging indicator is no real predictor of future behaviour.

Although measures of past performance such as Net Promoter Score are useful and can indicate areas to improve, they become dangerous if treated as predictive indicators of success. Net Promoter Score simply tells you how people rated your product and service previously; it is not going to tell you how they will rate the same product and service in 12 months' time, and this is one of the issues with how many loyalty programmes are run.
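To make the "lagging indicator" point concrete, here is a minimal sketch of the standard Net Promoter Score calculation (respondents scoring 9–10 are promoters, 0–6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors). The survey data here is made up for illustration – note that the number it produces is entirely a summary of past sentiment:

```python
def net_promoter_score(scores):
    """Standard NPS from 0-10 survey responses.

    Promoters score 9-10, detractors 0-6, passives 7-8.
    NPS = % promoters - % detractors, so it ranges from -100 to +100.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical sample: 4 promoters, 3 passives, 3 detractors
responses = [10, 9, 9, 10, 8, 7, 8, 6, 5, 3]
print(net_promoter_score(responses))  # 10.0
```

Nothing in that calculation looks forward: a score of 10 today says only that yesterday's experience satisfied yesterday's expectations.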

When a brand is spending millions on an above-the-line campaign, essentially to raise awareness and acquire customers, it will spend money understanding its customers and what they want. It will then spend more money making sure the advertisement communicates the right message, using qualitative measures like focus groups or quantitative measures like copy testing. However, when looking to retain customers by running a loyalty scheme, very few brands actually take the time to keep asking what customers want, testing new rewards or communications and understanding how their needs or desires are changing.

They may do customer research upfront before launching the scheme, but once it is running it becomes business as usual – essentially like creating a TV advert today and then simply running it again in five years' time, hoping the world (and your customers) haven't moved on.

The other issue with reflective measures like Net Promoter Score is that they tend to be aggregate measures: they rely on customers responding to some form of survey and so will typically represent only a small percentage of the overall customer base. Worse still, they ask people what they "think" rather than looking at what they "do" – and we all know that people will typically say what they think you want to hear.

Predictive measures, on the other hand, rely on what customers are actually doing – their exhibited behaviour – and use this information to predict changes in the future. As they use customer data, predictive measures can be applied to all customers based on their transactions, with changes in behaviour being visible immediately.
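As one minimal sketch of what a behavioural leading indicator might look like, the hypothetical function below flags a customer whose recent purchase rate has dropped well below their own historical baseline. The window sizes, the 50% threshold, and the data are all illustrative assumptions, not any particular programme's method:

```python
from datetime import date, timedelta

def flag_lapsing(txn_dates, today, baseline_days=180, recent_days=30):
    """Flag a customer whose recent purchase rate has fallen below half
    their own baseline rate -- a simple behavioural early-warning signal.

    txn_dates: purchase dates for one customer (illustrative data only).
    """
    baseline_start = today - timedelta(days=baseline_days)
    recent_start = today - timedelta(days=recent_days)
    baseline = [d for d in txn_dates if baseline_start <= d < recent_start]
    recent = [d for d in txn_dates if recent_start <= d <= today]
    # Normalise each window to purchases per 30 days so rates are comparable
    baseline_rate = len(baseline) / ((baseline_days - recent_days) / 30)
    recent_rate = len(recent) / (recent_days / 30)
    return baseline_rate > 0 and recent_rate < 0.5 * baseline_rate
```

Because this runs off transaction records rather than survey responses, it can be applied to every customer as soon as their behaviour changes – which is exactly the contrast with survey-based aggregate measures.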

Using behavioural data will, however, only indicate that something has changed; it won't tell you why, and this is where customer research comes into its own.

Using regular customer research to understand exhibited changes in behaviour, as well as customers' changing desires, can help ensure that a loyalty programme remains competitive and attractive – continuing to retain existing customers and acquire new ones.

It is said that the decision-making process is 30% rational and 70% emotional. Understanding the underlying reasons why customers engage (or disengage) with your brand and any loyalty programme, and how those reasons change over time, allows you to make sure that the programme always meets their needs, even as those needs change.

Had the bankers buying groups of mortgages actually understood the underlying individuals – the people struggling to buy their houses – and had they spoken to them about their issues, they might have realised that their triple-A rated investment was just a little riskier than it appeared.

If you're constantly looking in the rear view mirror in order to move forward, you may be better off just stopping the car for a while and asking for directions – it may help you keep on the right track.
