It’s no secret that campaign management (on Facebook & Google, and even on challenger platforms like Snap & TikTok) is fast becoming a largely automated process. The app install ecosystem is the furthest ahead on the automation curve thanks to Google’s Universal App Campaigns and Facebook’s Automated App Ads (for those less familiar with the mobile app ecosystem, in both cases you essentially upload creatives and walk away).
Ad ops automation is, without a doubt, a positive trend for the industry. We often talk about these four pillars of successful, automation-supported paid acquisition efforts:
- Media buying
- Conversion rate optimization (often via landing pages)
In this post, we want to share a fifth pillar that can be a significant point of leverage: Signal Optimization.
What is signal optimization?
In layman’s terms, every event that you fire back to an ad network via a pixel/SDK/server-to-server integration is a signal. The ad networks use those signals to guide their algorithmic targeting.
For example, in the D2C ecosystem, most advertisers rely on the Purchase event as their key signal. Their ad sets/campaigns are set to Optimize for Purchase or Value and then Facebook/Google/TikTok etc. simply work their magic to find more Purchasers as efficiently as possible.
But, of course, not all purchasers are of equal value. Ideally, you’d minimize your campaigns’ payback periods/maximize your LTV/CAC ratios by acquiring higher-value customers at an equal or lower cost. To do so, you could project an individual purchaser’s lifetime value and pass that projected LTV (pLTV) back to the ad networks as a new signal. With that new signal in hand, you could experiment with using different optimization events at the ad set level (ex: Purchasers Whose pLTV is >$100) which hopefully would result in a better payback period for your campaigns.
With this approach, you are simply providing a better feedback loop to the algorithms in real time. With stronger feedback, they should be able to easily meet your stated objectives.
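To make the feedback loop concrete, here is a minimal sketch of turning a pLTV prediction into an optimization signal. The event names, the $100 threshold, and the `send_event` helper are all illustrative assumptions; a real integration would go through each network's server-to-server endpoint (such as a conversions API) rather than a print statement.

```python
# Illustrative sketch: map a projected LTV to the event an ad network
# optimizes against. Names and the threshold are invented for this example.

PLTV_THRESHOLD = 100.0  # assumed cutoff for a "high-value" purchaser


def choose_optimization_event(pltv: float) -> str:
    """Pick which conversion event to fire back to the ad network."""
    return "HighValuePurchase" if pltv > PLTV_THRESHOLD else "Purchase"


def send_event(user_id: str, event_name: str) -> None:
    # Placeholder for a server-to-server call to the ad network.
    print(f"firing {event_name} for {user_id}")


# A purchaser projected at $150 LTV fires the premium event,
# which an ad set could then be configured to optimize for.
send_event("user_123", choose_optimization_event(150.0))
```

The ad set would then optimize for `HighValuePurchase` instead of the generic `Purchase` event, steering delivery toward lookalikes of the higher-value cohort.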
How to build a pLTV model
This subject matter can become very complex, especially once data scientists are involved. But at the end of the day, the goal is to provide as accurate an LTV prediction as possible to the ad networks as quickly as possible. In my experience, something (ex: an imprecise pLTV used for optimization) is usually better than nothing (ex: optimizing for all Purchasers equally).
I’ve seen models factor in:
- First and third-party demographic data (ex: age)
- Third-party data purchased from the likes of Experian/Acxiom/TransUnion (ex: credit scores)
- Last click attributed source data (ex: utm_source=Facebook)
- Behavioral data (ex: completed level 1)
- Sign-up quiz data (ex: Do you subscribe to other meal kit services?)
- And much more….
You might even consider extremely rudimentary questions to identify high and low-value leads. For example, a real estate business might ask users: “Are you in the market for a home in the next 6 months?” It’s very likely that users who answer in the affirmative have a dramatically higher pLTV, and it’s also likely that Facebook’s targeting would benefit from that direction.
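In the spirit of "something is better than nothing," a first pLTV model can be as simple as a hand-tuned score over inputs like the ones listed above. The features, weights, and baseline below are invented for illustration; a production model would be fit on historical purchase data rather than hard-coded.

```python
# A deliberately rudimentary pLTV scorer. All weights and feature
# names here are assumptions for illustration, not a fitted model.

def predict_ltv(user: dict) -> float:
    pltv = 20.0  # assumed baseline value for any converter
    if user.get("utm_source") == "facebook":
        pltv += 10.0   # last-click attributed source signal
    if user.get("completed_level_1"):
        pltv += 15.0   # behavioral signal
    if user.get("in_market_next_6_months"):
        pltv *= 3.0    # sign-up quiz answer, strongly predictive
    return pltv


# An in-market lead who completed level 1 scores well above baseline.
print(predict_ltv({"completed_level_1": True,
                   "in_market_next_6_months": True}))  # → 105.0
```

Even a crude score like this gives the ad network a richer signal than treating every purchaser identically, and it can be replaced by a proper regression or survival model later without changing the integration.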
Who uses this approach?
For some businesses, it’s fundamentally easier to project LTV, and it’s in those industries where you find this sort of optimization most often. In my experience, subscription eComm and gaming apps are categories where you are most likely to run into pLTV optimization events. That said, the vast majority of subscription eComm businesses that I’ve seen in the last ~12 months are not using such an approach and simply optimize for subscription/trial starts.
Notably, this tactic can be game-changing for lead generation businesses. Facebook, in particular, does an incredible job at driving leads/form fills at a low cost but the lead quality can vary tremendously. By providing a better feedback loop to Facebook, you might drastically change the types of users they target to fill out your forms.
Optimizing for LTV on the back end through campaign structure + manual bidding
It’s still common practice to analyze campaigns and customer value, then use that analysis to restructure campaigns in a way that maximizes customer value. For example, you might notice that women are 50% more valuable than men and choose to exclude men from your targeting altogether. Or you might identify that different keywords drive different payback periods, so you use an Enhanced CPC strategy and set different bids on keywords based on the resulting user values.
I don’t think this approach is wrong (and in fact, it can work very well), but it certainly does not jibe with where these ad networks are headed. They want you to opt in to all placements. They want you to use auto bidding. They want you to consolidate your campaigns. I believe that attempting to optimize for LTV through account structure is akin to swimming upstream. pLTV optimization may be a happier path in that you can follow the networks’ best practices while ensuring that those networks are still aligning their delivery with your desired outcomes.
How signal optimization in digital marketing compares to similar tactics in other channels
Thinking about the direct mail ecosystem, pLTV optimization is the online version of conducting basic customer analysis and using that data to inform which addresses to mail. But, it is different from the approach you might take in direct mail/radio/OOH/TV simply because the targeting on these digital ad networks is effectively automated. With direct mail, we know exactly which addresses we're mailing when the campaign starts. With digital, we don't know whom Snapchat is targeting at the user level on any given day. It makes sense that I need to work a bit harder and in real-time to give Snapchat the feedback it requires to make my campaigns work.