In defence of media agencies
Joe Frazer, managing partner, digital lead and co-founder of Half Dome writes in support of media agencies, following a recent article claiming they are too focused on price, rather than service.
A recent article about Dr. Karen Nelson-Field’s perspective on attention metrics caught my eye, and my LinkedIn post on the topic (ironically) garnered some attention itself. I thought it would be worthwhile to further elaborate on my perspective, especially regarding the competing factors that influence the adoption of attention as an industry-agreed norm, as well as the myriad of options media planners and buyers are faced with in their pursuit to maximise effectiveness for clients.
First and foremost, I respect Dr. Nelson-Field’s expertise and acknowledge the potential of attention metrics. However, I believe it’s somewhere between an oversimplification and a cheap shot to claim that media agencies’ focus on penny-pinching prevents them from understanding attention as a metric. Our top priority is always to provide the best service and deliver effective campaigns, and to do so, we must balance costs with the ability to plan, buy, and optimise for effectiveness. This entails considering various factors, including proven historical methodologies, when determining the best approach to meet our clients’ diverse goals.
I don’t know a single client, agency, publisher, or person who wouldn’t advocate for price to be an important lever in value, and I say that as someone on the front line talking to procurement teams daily about the value media can drive.
Attention metrics, while promising, are still relatively new, with smaller data sets across key channels in some cases and without the years of marketing-science research linking them to long-term effectiveness. It’s essential to recognise that attention metrics, like any other single solution, are not a silver bullet for all campaigns or for connecting media effectiveness to every marketing objective.
The reality is that as agencies and as an industry, we must strike the right balance between adopting innovative solutions and deeply understanding and relying on established best practice and research – an area that has advanced at light speed over the past couple of decades. This also means acknowledging the limitations of attention metrics as a new entrant into the space, and not relying on them as the sole determining factor for campaign effectiveness.
Building value into attention metrics (which are really just a number of different proprietary definitions of “attention” anyway) requires careful navigation. Dr Nelson-Field has supreme confidence in her methodology and process. I know of other companies that would say the same but have vastly different approaches. That’s fine. Over time some will rise, some will fall, the industry will unite on the ones that work and they will become valuable additions to our existing toolkit. Hopefully one day we can even transact on them. But until that happens it is okay to test and learn, be tactical in your uptake, and frankly, to be cynical in your belief that this specific measure is THE one.
To illustrate this point, I’ll share a story from my time at a large holding group, which at the time was the largest in the country, transacting around 26% of all media through its agencies. This holding group pushed a shift to planning based on a proprietary tool that aimed to maximise 1+ reach across all platforms, regardless of impact or quality (note that viewability targets were set at that time). In today’s world, we would say there was no element of “attention planning” factored in. At the time, we called it a lack of common sense.
Anyway, on the back of this new, shiny, widely accepted tool, the holding group suddenly shifted significant funds out of TV to invest in Facebook video ads, bought to a frequency cap of one.
As you can imagine, the results that followed were less than ideal.
I’m a supporter of impactful formats, media channels, and channels that garner the best attention. However, it’s reductive to claim that media agencies nowadays primarily make decisions because they are ‘penny pinchers’. It’s a 90s narrative, and it’s long past time we moved on from it.
The challenge for media agencies and the industry is to strike the right balance between embracing innovation and adhering to established best practice and research. As we navigate the evolving media landscape, it’s essential to keep our clients’ goals and the broader context in mind, avoiding the temptation to oversimplify complex decision-making processes. By doing so, we can continue to deliver exceptional results, maximising value, whilst being aware of cost.
Joe Frazer is managing partner, digital lead and co-founder of Half Dome.
My first thought reading the same article was that Karen doesn’t seem to understand that the focus on price doesn’t originate from media agencies; it originates from clients’ procurement teams.
There is a massive disconnect at the client end. They don’t seem to understand that there is an incontrovertible link between price and quality, just like in every other facet of life. The cheaper the price, the lower the quality. It’s not that hard to understand, is it?
User ID not verified.
Love this! Too many people are drinking the proprietary attention metrics Kool-Aid.
User ID not verified.
Great post. Attention-based metrics are just another arbitrary metric, developed on the assumption that all people encounter and experience media in exactly the same way.
User ID not verified.
Great article. Personally I find the whole attention-based metrics conversation completely baffling. Surely media has always been bought/planned/priced based on some version of attention? Otherwise pricing would be based purely on reach: 1,000 eyeballs on a TV spot would cost the same as 1,000 eyeballs on a magazine ad. The difference in the impact they deliver (which, in fairness, is not all about attention) is why this is not the case.
User ID not verified.
Great article. Thanks for articulating this so well. So many people like Karen are espousing “attention metrics” – the experienced marketers among us recognise it as just the next thing people are using to charge more or sell their product. #next
User ID not verified.
Dr. Karen Nelson-Field makes a big assumption (along with Joe) that when media agencies make decisions around channel selection, they are always neutral, with no internal bias.
The forgotten element is that procurement departments aren’t just pushing down the CPM, but also the agency fees. So when the agency’s Commercial Director sets an internal revenue target, the easiest way to hit it is through internal sales: load the budget into the “proprietary” planning tool, which spits out the optimised media plan. The problem emerges when the plan isn’t optimised to maximise client outcomes, but rather to allocate as much budget as possible to the channel that delivers the best margin return for the agency, to the detriment of the client’s business goals. We then get misguided on attention and impact versus perceived efficiency metrics.
But hey, who cares about that? We’ve made this month’s budget; that’s a problem to deal with next month.
User ID not verified.
A great article and comments.
Where I think things are getting a bit muddled with the recent focus on ‘attention’ et al. is that marketers are looking to find greater value from their campaigns by jumping on the attention bandwagon.
We have research metrics that enumerate ‘the audience’. These are estimates, and the best estimates we can provide with the given budget.
For example TV’s OzTAM provides the rating for a TV program. The data you see in the press is the ‘average minute’ for the duration of the program. That duration is based on the clock – e.g. 18:00-18:30 for a news bulletin.
OzTAM is not measuring the ads per se, but the ad ‘audience’ is incorporated in the ‘program audience’. Think of it this way: the ‘rating’ is a meld of ‘broadcaster content’ and ‘marketer content’ … program and ads.
As is to be expected, if you dig a bit deeper you will see peaks and troughs across the program duration. Much of that variation is caused by channel changing, a ‘nature-calling’ break in the program, turning off the TV, etc. Those behaviours are a sort of de facto indicator of attention levels.
In my experience it is almost a given that the ad break has a lower audience than the preceding and following program content. But I think we all knew that already! What the data does not provide is how each ad performs, i.e. ad attention. The TV ratings are not designed to provide individual ad data. When OzTAM was created some 20+ years ago, it was funded by the commercial broadcasters (who own OzTAM) along with funds from subscribing users such as the public broadcasters, media agencies, etc. A marketing information opportunity was lost.
And before I sign off, I’d like to add something that has always stuck in my memory. When metered TV ratings started, I recall an advertiser getting stuck into the sales director of one of the FTAs because the audience for his ad was lower than the program audience, so he argued he should get a 10% discount on the ad rates. The response was something along the lines of: we pour millions of dollars into our program, get a couple of million people watching (yes, that was common in the 1990s), you put your lousy ad on and you lose a chunk of our audience, and then you want a discount … I think I will need to surcharge your ads!
User ID not verified.
https://www.warc.com/newsandopinion/opinion/attention-applied-meaty-proof-in-the-field-of-attention-measurement/6211
I think this probably goes to demonstrate how ill-informed Joe’s article is.
User ID not verified.
My lord, that is an ill-informed and depressing perspective.
User ID not verified.
I was at WFA Global Marketer Week as a top-50 brand marketer, and to be clear, Karen suggested that the tooling they’re creating is for media agencies to be able to justify quality media versus the race to the bottom. She specifically suggested that procurement should be focused on the bottom line rather than pushing for lower CPMs, which forces agencies’ hands. I’m not sure where the author of this article has taken this from, but everyone deserves a point of view.
User ID not verified.
If you had actually read any of Amplified’s work, you would know that is not true. They show, using actual interaction data, that we all behave differently and that we need to take those factors into consideration in our channel and format selection.
I am not saying that they have all the answers, or a method to implement their findings – but they definitely can’t be accused of suggesting we are all the same.
User ID not verified.
Nice one, Nicky
User ID not verified.