THE ADDENDUM
Connor Rolain
From E018: Optimizing Your Ad Creative Strategy
Insight: What exactly is creative fatigue?
Creative fatigue is generally known to be bad for business and loosely tied to worse performance … but very few of us in DTC have a functional definition of it.
Here’s an example to bring it into focus.
You launch an ad or group of ads that hit targets.
Either they’re driving an in-platform ROAS or CAC that you’ve modeled for profitable, blended growth.
Or they’re performing marginally better than a comparable control metric — like a static creative test against all statics in the account or all statics in testing.
Then, those same ads fall below benchmarks.
Creative fatigue means profitable or high-performing ads become unprofitable or low-performing.
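One way to operationalize that definition: flag an ad as fatigued once its trailing average falls below a benchmark it previously cleared. Here's a minimal sketch in Python, assuming you can export daily in-platform ROAS per ad; the function name, the 7-day window, and the numbers are illustrative assumptions, not a standard metric.

```python
def is_fatigued(daily_roas: list[float], benchmark: float, window: int = 7) -> bool:
    """Flag creative fatigue: an ad that once cleared the benchmark
    but whose trailing-window average has since fallen below it."""
    if len(daily_roas) < window * 2:
        return False  # not enough history to call it either way
    ever_hit_target = max(daily_roas[:-window]) >= benchmark
    recent_avg = sum(daily_roas[-window:]) / window
    return ever_hit_target and recent_avg < benchmark

# Hypothetical ad: strong launch, then decay below a 2.0 ROAS benchmark.
roas_history = [2.8, 2.6, 2.7, 2.4, 2.2, 2.1, 1.9, 1.8, 1.7, 1.6, 1.5, 1.6, 1.4, 1.5]
print(is_fatigued(roas_history, benchmark=2.0))  # True
```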
Controversy: “We just can’t top them.”
That was Cody on JRB’s best-performing ads.
And this isn’t a hard disagree.
Instead, it’s a pushback against the media buying account tactics used to find new top performers.
Launching and scaling creative tests is nuanced.
Yes, finding new creative that eventually outperforms the scale and efficiency of legacy ads is nearly impossible.
But that shouldn’t be the goal.
The goal should be to find winning creative tests … and to turn those tests into winning scaled ads.
If you launch new winners from creative testing into the same ad set as legacy ads, they won't spend.
You can scale up the creative test ad set directly, but vertical scaling often hurts performance.
HexClad has campaigns for historical scalers and others for new creative testing winners.
This way, we can duplicate ads into a winners campaign that has much higher spend without them getting completely blocked out by ads we’ve been running for years.
We’re allowing ourselves to find new scaled winners while still getting legs from our legacy performers.
Regret: I wish we’d done this sooner.
Specifically, an ad account structure that allows for new winning creative tests to become winning scaled ads.
HexClad utilizes the following setup …
- Creative Testing Campaign
Where we launch each creative test in its own ad set with a minimum spend threshold.
- Scaled Winners Campaign
Where we duplicate all winning creative tests — inclusive of historical legacy ads.
- Scaled New Winners Campaign
Where we duplicate winning creative tests — excluding scaled winners from >6 months ago.
We do this to ensure we’re still finding winning ads at scale.
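To make that structure concrete, here's a minimal sketch of the routing logic the three campaigns imply. This is not HexClad's actual tooling: the Ad fields, the 182-day stand-in for ">6 months," and the route helper are all hypothetical.

```python
from dataclasses import dataclass
from datetime import date, timedelta

SIX_MONTHS = timedelta(days=182)  # stand-in for the ">6 months ago" cutoff

@dataclass
class Ad:
    name: str
    is_winner: bool  # won its creative test (or is a proven legacy ad)
    first_win: date  # when it first cleared the winning bar

def route(ad: Ad, today: date) -> list[str]:
    """Return which campaigns an ad belongs in under the three-campaign setup."""
    campaigns = ["Creative Testing"]  # every ad starts as a test
    if ad.is_winner:
        # Scaled Winners takes every winner, legacy ads included.
        campaigns.append("Scaled Winners")
        # Scaled New Winners excludes winners older than ~6 months,
        # so recent winners aren't crowded out by years-old scalers.
        if today - ad.first_win <= SIX_MONTHS:
            campaigns.append("Scaled New Winners")
    return campaigns

today = date(2024, 6, 1)
legacy = Ad("legacy-ugc-01", is_winner=True, first_win=date(2022, 3, 15))
fresh = Ad("static-test-42", is_winner=True, first_win=date(2024, 4, 20))
print(route(legacy, today))  # ['Creative Testing', 'Scaled Winners']
print(route(fresh, today))   # all three campaigns
```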
As Connor Mac says, “Creatives have local maximums.”
Every ad has one, but the more local maximums you tap into (i.e., the more creative diversity) … the better.
Shocker: More concepts, fewer variations!
When Connor Mac was at the Facebook creative event, reps told attendees they believed the best way to find creative wins was through more concepts with fewer variations.
We don’t do it this way at HexClad. Instead, we produce a lot of variants built around a single concept.
Of course, we also produce many different concepts.
But we separate concepts by ad set in our creative testing campaign with at least two variants — sometimes as many as six — from the jump.
Facebook claims it’s better to create one variant per concept and group those concepts together under one ad set.
The idea is to find the winning concept. Then, iterate to that concept’s local maximum.
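For contrast, here's what the two testing structures look like as plain data. The concept and variant names are made up for illustration:

```python
# HexClad-style: one ad set per concept, multiple variants per concept.
hexclad_testing = {
    "concept_ugc_testimonial": ["variant_a", "variant_b", "variant_c"],
    "concept_static_offer":    ["variant_a", "variant_b"],
}

# Facebook-recommended: one ad set mixing concepts, one variant each.
facebook_testing = {
    "mixed_concepts_adset": ["ugc_testimonial_v1", "static_offer_v1", "founder_story_v1"],
}
```

Same creative volume either way; the disagreement is only about which grouping finds winners at a higher rate.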
Question: So, which is better?
- Organizing creative tests by concept and having multiple variants per concept?
- Mixing different creative concepts under a single ad set with only one variant per concept?
Measuring would be fairly straightforward …
First, calculate your current “hit rate”: what % of the creative tests you launch do you consider wins?
Second, switch out your creative testing structure to the version Facebook is recommending.
Third, let it run for 1-2 months.
Fourth, record the new hit rate and compare it to the previous structure’s.
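If you run it, the comparison itself is one division per structure. A minimal sketch with hypothetical counts:

```python
def hit_rate(wins: int, tests: int) -> float:
    """Share of launched creative tests you'd call a win."""
    if tests == 0:
        raise ValueError("No tests launched yet")
    return wins / tests

# Hypothetical numbers: plug in your own counts from each period.
old = hit_rate(wins=9, tests=60)   # current structure: multiple variants per concept
new = hit_rate(wins=12, tests=55)  # Facebook-recommended: one variant per concept

print(f"Old structure hit rate: {old:.1%}")
print(f"New structure hit rate: {new:.1%}")
print(f"Change: {new - old:+.1%}")
# Note: at typical test volumes, 1-2 months is a small sample;
# treat close results as a tie rather than a verdict.
```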
Any brave souls want to test this out?
Hit reply to let me know.