Most ‘big data’ marketing is a waste of time, and here’s why
Using big data to look at past trends is not the best way to work out what your customers want, argues Peter Swan of the UNSW Australia Business School in this cross-posting from The Conversation.
A passer-by happens upon a drunk searching for a lost wallet under a streetlight. With nothing in plain sight, the passer-by asks “Where did you drop your wallet?”.
“Over there,” gestures the drunk across the street, “but I’m looking here because this is where the light is.”
We often look for answers in the easiest place and not necessarily where the answer is to be found. As marketing moves from subjective art toward objective, data-driven science, are we seeing the emergence of a streetlight effect?
Are even the very best big-data driven practises guilty of asking the wrong questions of the wrong data?
Wrong from the start
Most companies turn to analytics when early growth starts to slow. The familiar refrain, “Let’s make better use of our existing data”, heralds the onset of maturity, when the early days of triple- and double-digit growth are well and truly past.
Initial questions asked of big data are typically, “Who are our best customers?” and “Which products are most profitable?”
It soon becomes clear that performance differs by region, season and a host of other factors. So, it’s not long before we want to know, “How do quarterly sales in region A compare with region B, on products X, Y, and Z?”
Next comes propensity to respond (PTR) modelling, used to classify prospects for acquisition, cross sell, churn, or fraud. Where they exist, single customer views enable an entire family of PTR models used to determine next best actions.
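For readers unfamiliar with the jargon, a propensity-to-respond model is usually just a binary classifier: train it on who responded to past campaigns, then score everyone for the next one. A minimal sketch in Python follows; the data file, column names and features are hypothetical, not drawn from any particular practice described here.

# Minimal propensity-to-respond (PTR) sketch.
# The CSV file and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

customers = pd.read_csv("campaign_history.csv")
features = ["recency_days", "frequency", "monetary_value", "tenure_months"]

X_train, X_test, y_train, y_test = train_test_split(
    customers[features], customers["responded"], test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Score every customer and contact the highest-propensity decile first.
customers["ptr_score"] = model.predict_proba(customers[features])[:, 1]
top_decile = customers.nlargest(len(customers) // 10, "ptr_score")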
Competing marketing priorities soon warrant marketing mix modelling, to estimate the value of advertising spends across different channels. This naturally leads to attribution modelling, to estimate how each channel contributes to the final sale.
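In its simplest forms, attribution is nothing more than a rule for dividing credit for each converting journey across the channels it touched. A sketch of two common rules, using invented journeys:

# Two simple attribution rules over invented converting journeys,
# each journey being one customer's sequence of channel touchpoints.
from collections import Counter

journeys = [
    ["display", "search", "email"],
    ["social", "search"],
    ["email", "email", "search"],
]

last_touch = Counter(journey[-1] for journey in journeys)   # all credit to the final touch

linear = Counter()
for journey in journeys:
    for channel in journey:
        linear[channel] += 1.0 / len(journey)               # credit shared equally

print("last-touch:", dict(last_touch))
print("linear:", dict(linear))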
The current holy grail of big-data driven marketing is to offer in real time the most likely product, at the most likely price, to the most likely customer, at the most likely time, via the most likely channel.
The past doesn’t always help predict the future
But do big data and its analysis make sense in the first place?
Like the drunk under the streetlight, have we been seduced into looking for the answers where it is easiest? Namely, in the data we gathered from past sales to previous customers.
Is this relevant for understanding future sales to future customers?
Nothing in the customer data gathered, or in the way it is presently being analysed, addresses the fundamental consumer desire: to find the best available combination of price and product at the lowest search cost.
All that segmenting and clustering and PTR scoring leaves our future consumers cold, stranded, outnumbered – feeling besieged and beset upon.
Consumers are bounded rational humans optimised over generations for “fight or flight” and not for solving the multidimensional optimisation problem that is rational consumer choice.
Tasked with buying a car, my siblings, with common genetic and environmental influences, will likely arrive at different consumption choices to mine.
If those closest to me exhibit different preferences, why are these “previous customer” strangers, who share neither nature nor nurture with me, being used to suggest products for me?
Why model the choices of thousands of people I don’t know, and who don’t know me, in an effort to suggest products to me?
No consumer identifies with the clusters or segments thrown up by maximum likelihood models. In fact this type of modelling belies the constant state of flux wrought by Adam Smith’s invisible hand, and writ large in every single consumption choice.
It is a complex and rapidly changing world we inhabit with little known by these analytical models about a customer’s current preferences and circumstances.
The circumstances of markets, like those of individuals, can change in an instant. Products sell out, forcing consumers to choose from what’s available or to wait. Products stagnate.
Promotions and discounts alter the relative attractiveness of one product compared with another, stimulating sales of one and depressing sales of another.
Individual finances wax and wane as personal circumstances alter. Each and every purchase decision is a moveable feast. Even simple choices become rapidly complicated.
It is little wonder consumers throw their hands up and head for the safe harbour of brand, or convenience, or availability.
Focus on ‘small data’ instead
The data we should be analysing – small data – is product attributes and prices, which change over time. This is the data consumers – your customers and your competitors’ customers – are using when choosing.
To the extent of their ability, each consumer is assessing, comparing and evaluating the products and services on offer. These are bundles of attributes with their corresponding “shadow prices”.
They trade this attribute off against that, trying to identify the combination of attributes and shadow prices that best suits them, taking into account their own dynamically shifting preferences over the attributes and their own changeable circumstances.
What you should be doing is maximising the “willingness to pay”, and hence the “consumer surplus” (willingness to pay minus price), of your potential customers. They will then tend to choose your product in preference to your competitors’, depending on the bundle of attributes your product provides.
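To make that concrete, here is a rough sketch of scoring products by one shopper’s consumer surplus: willingness to pay for the attribute bundle, minus the listed price. The shadow prices, base value and products below are invented for illustration.

# Rank products by consumer surplus = willingness to pay - price.
# Shadow prices, base value and products are invented.
base_value = 1000.0
shadow_prices = {"battery_hours": 12.0, "weight_kg": -150.0, "storage_gb": 0.4}

products = [
    {"name": "A", "price": 900,  "battery_hours": 10, "weight_kg": 1.4, "storage_gb": 256},
    {"name": "B", "price": 1100, "battery_hours": 18, "weight_kg": 1.1, "storage_gb": 512},
    {"name": "C", "price": 700,  "battery_hours": 7,  "weight_kg": 1.8, "storage_gb": 128},
]

def willingness_to_pay(product):
    return base_value + sum(shadow_prices[a] * product[a] for a in shadow_prices)

# The product with the largest surplus is the one this shopper should prefer.
for p in sorted(products, key=lambda p: willingness_to_pay(p) - p["price"], reverse=True):
    print(p["name"], round(willingness_to_pay(p) - p["price"], 2))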
Analysing customer data to minimise estimation error isn’t helping your customers solve their problems – it is proliferating them. The manifold combinations and permutations are adding to the burden, not lightening the load.
Customers will pay you with their custom simply for reducing their search costs.
Faced as they are with overwhelming choice, customers want up-to-date, reliable, valid and trustworthy recommendations that embody their own personal preferences and budgets, and that are available instantly.
A version of this article first appeared on BusinessThink.
Peter Swan is a founder of Choice Engine and owns patent rights, and is a professor of finance at the UNSW Australia Business School.
This article was originally published on The Conversation.
Read the original article.
An interesting perspective on the ‘big data’ phenomenon. Many organisations burn cash to market or optimise their proprietary algorithms or data sets for the purpose of building confidence around their conversion metrics.
The delusion many of us fall under in programmatic marketing is to think that we run ‘physics engines’, based on predictable laws of nature, to determine future human behaviour. We build data stacks to the moon in the vain attempt to create an indisputable law of consumption for the SUV or baby bottle market, and just when we think we have it, we’re confounded by fast-diminishing returns several hours later.
However, without a scalable approach to qualitative research containing explicit questions about intent or price point (a la Google search), display programmatic will insist on more data points, not fewer, to ‘reset’ the clock when diminishing returns set in. That said, we need to be far more discriminating about the data employed, and be prepared to pay for data with high integrity.
User ID not verified.
Using large, accurate and granular data sets that tell you things about your target audience you did not know before, or worse assumed wrongly, is a good thing. Having that delivered in real-time, or close to? Even better.
And don’t target drunks.
User ID not verified.
Long read (but related).
Suggests much of the push for “big data” is to have a compelling story to tell investors, regardless of its veracity.
http://www.theatlantic.com/tec.....in/376041/
User ID not verified.
Decent read.
Marketers have overlooked residual value in their efforts for decades… and mostly based future plans and strategy on campaigns and tactics that didn’t work. Regardless of the maturity and use of Big Data or Technology, leading companies still need ‘thinkers’ to reflect and project what they want/need as a business and how to validate that against their customer base, regardless of channels, offers, tenure etc.
If it was easy everyone would be doing it.
Kinda reminds me of this emerging need for companies to build ‘Loyalty programs’, when in fact all they want is ‘Loyal behaviour’…
Thanks for the thread Peter.
User ID not verified.
“practices” not “practises”
User ID not verified.
As someone who works in data and analytics, when it comes to modelling there’s no such thing as big or small data – just data. ‘Big’ and ‘small’ data are buzzwords. Maybe it’s just me, but I also don’t understand Peter’s point. If historical data (big or not) cannot be trusted to predict the future, how can small data? Any data point is by its very nature historical.
Search costs definitely can be reduced through “big data” (using Peter’s definition of data sourced from previous customers), ask Amazon and the sales that come as a result of their “recommendations”.
User ID not verified.
I agree Alex. I don’t really understand the point that’s being driven at here. Clearly there are things that can be learnt from past data (toy sales shoot up in the lead-up to Christmas, there’s an increase in people going on holidays in January) that will help marketers plan and get ahead of the curve.
I’ve always hated big data as a buzzword (never heard small data, but I’m not going to take that out for dinner either). End of the day it is all just data – yeah, you might need a big database to store it all and sexy new technologies to organise and query it – but in the end you’re just gathering data points and trying to learn from them.
User ID not verified.
A great article Peter, thank you. We tell our clients that it doesn’t make sense to keep mining the past to predict the future. We tell them that not only is that not wise, it is expensive and takes a long time. Plus, you only know what your customers do with you, not what they are doing after they have left you or your brand. Which is why we tell our clients it is best instead to connect to real-time data with an intention to answer what is happening right now. We’re helping them do this by mapping Australian users on the social web – these are our clients’ prospects and customers, providing vital feedback which can inform marketing and sales programs. This is Social CRM at play – providing a more complete and up to date view of your customers and prospects. With this in mind, we would argue that big data marketing is far from a waste of time – it’s essential. The key to success is leveraging technology to make sense of the social web.
User ID not verified.
I work client side. I just got a call from a bimbo agency. Oh, I need visitors, bounce rate, mobile users… *rolls eyes*
You wouldn’t know what to do with the data if it hit you between the hipster glasses.
Don’t ask for metrics, please.
User ID not verified.
As a marketing student, I’ve been reading volumes on big data and its value, so it’s almost refreshing to have another point of view. As Peter suggests, circumstances can change in an instant, so regardless of a business’s marketing efforts and data analysis, a situational influence may have the final say in a purchase.
User ID not verified.
The best predictor of future behavior is past behavior 🙂
User ID not verified.
The value of any data to marketing is directly proportional to the marketing talent directing its use.
User ID not verified.
Yolo I say
User ID not verified.