The signal and the noise in advertising

The advertising and media industry needs to focus on better modelling if it’s to stand a chance of accurately predicting campaign outcomes, argues Simon Lawson.

If your experience is anything like mine, then your newsfeeds have been overflowing in recent days with advertising’s gurus making their predictions for the year ahead. I’ve noticed them more this year because they’re being made at the same time I’ve been reading Nate Silver’s book – The Signal and the Noise.

For those unfamiliar with Nate Silver, he’s a statistician who came to prominence after correctly predicting the winner in 49 out of 50 states in the 2008 US election and gained further stature after picking all 50 states in the 2012 US election. Prior to his involvement in political forecasting, he used his statistical ability to forecast the performance and changing value of Major League Baseball players. Think Moneyball.

In his book, the noise is the increasingly overwhelming volume of information in today’s era of ‘Big Data’, while the signal is the meaningful relationships within the data that can best help to make an accurate forecast.

There’s a lot of noise in advertising.

Consider some of the metrics we’re asked to navigate on a regular basis: sales, trial intention, market share, ad recognition, message take-out, brand linkage, net promoter score, TARPs, impressions, clicks, average weekly reach, average frequency, cookie windows, engagement, participation, views, likes, shares, click-through rates, search volumes, affiliate table rankings, share of voice and cost per acquisition, to name a few…

If that’s the noise, what’s the signal?

Back up for a second: if the signal is the data relationships that help make a forecast accurate, what is the subject of the forecast for which we most need the signal? In my opinion: What is the likely outcome of a campaign? Is it likely to achieve the objectives as briefed? Is it going to work?

We know this … right?

Yes and no. Advertising’s signal is a work in progress, but I believe it lies in the current efforts to improve the effectiveness of econometric and cross-channel attribution modelling. What is the demonstrable path to acquisition using your historical data sets?

Which of the touch points along the path have proven to be the most influential? What effect has your product’s position in market had on past results? What is the value of superior creative?
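To make the idea concrete: attribution modelling can take many forms, but one of the simplest rules splits each conversion’s credit equally across the touch points on the customer’s path. The sketch below is purely illustrative — the channel names and paths are made up, not drawn from any data in this article:

```python
from collections import defaultdict

# Illustrative assumption: each converting customer's path is the ordered
# list of channels they touched before purchase (hypothetical data).
paths = [
    ["search", "display", "social"],
    ["social", "search"],
    ["display", "search"],
    ["search"],
]

def linear_attribution(paths):
    """Split each conversion's credit equally across its touch points."""
    credit = defaultdict(float)
    for path in paths:
        share = 1.0 / len(path)  # one conversion, shared equally
        for channel in path:
            credit[channel] += share
    return dict(credit)

attribution = linear_attribution(paths)
print(attribution)
```

Real econometric models go much further — weighting touch points by position, recency or estimated influence — but even this equal-credit rule already gives a different picture from “last click wins”.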

Finding the signal amongst the noise has to be one of the most critical issues, if not the most critical, facing marketers and agencies today. Advertising options have exploded in number over the last five years, and our ability to optimise them to achieve our objectives can appear to be struggling to keep up with an increasingly complex system.

Here’s Nate Silver, writing about the period that followed the spread of the printing press through Europe in the 15th century:

“Meanwhile, exposure to so many ideas was producing mass confusion. The amount of information was increasing much more rapidly than our understanding of what to do with it, or our ability to differentiate the useful information from the mistruths.

“Paradoxically, the result of having so much more shared knowledge was increasing isolation along national and religious lines. The instinctual shortcut that we take when we have ‘too much information’ is to engage with it selectively, picking out the parts we like and ignoring the remainder, making allies with those who have made the same choices and enemies of the rest.”

Ok, so maybe it’s not as severe as that, but the parallels with the advertising world today appear fairly obvious. I’m reminded of the battles that regularly take place on Mumbrella between the social media disciples and the less ardent believers. Some divisions also still remain between online and offline, along with media and creative. It’s making things harder; it’s increasing the noise.

Accurate and actionable modelling may be our best chance to find the missing signal.

It’s a way for us to rise above the noise and get closer to an objective analysis of which communications tactics contribute most to our objectives, to recommend an optimum weighting between them, and to forecast advertising outcomes.

It should be said that Nate does point out examples where modelling has failed rather spectacularly: the 28% default rate of AAA-rated Collateralised Debt Obligations (CDOs) during the global financial crisis, when only 0.12% had been forecast to default. But he also points to the successes: meteorologists have significantly improved our ability to forecast the weather over the last 30 years or so.

The book cautions us about the dangers of over-confidence and asks us to avoid falling into the trap of making definitive predictions: “You are likely to sell 1,345 vehicles this month”. Instead, a probabilistic approach, to better reflect the uncertainty inherent in forecasting, is suggested: “With this plan, you have a 45% chance of selling 1,200-1,300 vehicles this month, a 30% chance of 1,300-1,450 and a 15% chance of selling <1,200.”
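That probabilistic framing can be sketched with a short simulation. The figures below are illustrative assumptions (a roughly normal sales distribution with a made-up mean and spread), not Silver’s numbers or anyone’s real forecast; the point is simply that the output is a set of probabilities over ranges rather than a single figure:

```python
import random

random.seed(42)

# Illustrative assumption: monthly vehicle sales are roughly normal
# around 1,290 units with a standard deviation of 110 (made-up figures).
N = 100_000
sales = [random.gauss(1290, 110) for _ in range(N)]

# Bucket the simulated outcomes into bands, as a forecaster might
# present them, instead of quoting one definitive number.
p_low = sum(s < 1200 for s in sales) / N           # < 1,200
p_mid = sum(1200 <= s < 1300 for s in sales) / N   # 1,200-1,300
p_high = sum(1300 <= s < 1450 for s in sales) / N  # 1,300-1,450

print(f"P(<1,200)      = {p_low:.0%}")
print(f"P(1,200-1,300) = {p_mid:.0%}")
print(f"P(1,300-1,450) = {p_high:.0%}")
```

The remaining probability mass (selling more than 1,450) is left implicit here, just as it is in Silver’s example.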

There are clear barriers to a world where advertising outcomes can be better forecast.

Today’s models are far from perfect, and the need to improve and update the assumptions they’re based on is constant. With an ever-changing communications landscape, our models need to be built in a way that leaves room to test innovation and the new tactics that come with it. The cost of modelling can be high, and the talent mix in agencies needs to keep changing to reflect the growing importance of advanced statistical skills. It won’t be easy, but it will be worth it.

Back to those predictions for the year ahead: I did find one that I like. Writing in Advertising Age, one commentator predicted that 2013 will be the year that agencies marry the creatives and the quants. Let’s hope he’s right.

  • Simon Lawson is a business director and communications strategist at PHD Melbourne. Follow him on Twitter: @simonislawson
