Welcome back to The Practical Prospector Newsletter! If you haven’t joined yet, subscribe here to receive future posts:
Agenda:
Why Outbound Fails
How to A/B Test the Right Way
Focus on the Metrics That Matter
P.S. I’m offering sequence & copywriting coaching on Hard Skill Exchange. If you’re interested, you can book time here.
Why Outbound Fails
Tell me if this sounds familiar:
You come up with a campaign idea. You write the messaging, pull a list, and start sending emails. A few days later you decide to tweak the first email because it isn’t performing well.
Then you add more leads. You tweak the email again. Eventually, you give up and start a whole new campaign.
The cycle continues…
This was me (and often still is).
Problem is, you’ll never build a repeatable cold email strategy with this approach.
Today I’ll share how you can take a more scientific approach via A/B testing (the right way) so you can crack outbound.
Outbound is a science. If you treat it that way, you can figure it out.
But most people lack the organizational skills or patience to do it.
Or, unfortunately, you may be at a company where outbound simply won’t work, through no fault of your own.
70% of startups fail between years 2 and 5. We don’t acknowledge this fact enough.
Half the battle of outbound is choosing the right company to do it at. Some products are so in demand or have great brand awareness that it doesn’t take much to book a meeting.
On the other hand, there are companies where the Michael Jordan of sales can’t even get a reply (whoever that is).
So keep that in mind.
As Warren Buffett famously said: “It doesn’t matter how hard you row. It matters which boat you get in.”
With the negativity out of the way (sorry), here’s where you need to start :-)
How to A/B Test the Right Way
Before you begin outbound, you should talk to your customers, research your competitors, and understand your product’s value prop to come up with a hypothesis of who to target and what to offer them.
But you can only do so much hypothesizing. At a certain point, you need to actually test your hypothesis.
The best way to get data on what works (& what doesn’t) is from your market.
That’s why A/B testing is so key to figuring out outbound.
Most people know this. But they fail to make real progress for two reasons:
They’re A/B testing the wrong things
They’re A/B testing the wrong way
(1) How to A/B test the right things
You can A/B test literally anything. One of my SDRs once tested different email fonts.
The results? I can’t remember. Which is my point haha.
You only have so much time to A/B test. So focus on things that have the highest potential upside.
Put simply, you can A/B test different subject lines all you want but if you’re targeting the wrong person, it doesn’t matter… they won’t book with you anyway!
Here’s a list of things worth testing, ranked from highest upside to lowest (my opinion of course. Your list may look different):
High Upside A/B Tests
The personas you’re targeting
The accounts you’re targeting
Your offer/value proposition
Medium Upside A/B Tests
Your data source (i.e. where you get emails/phone numbers from)
The buying trigger you use to make your emails relevant
Follow-up/bump emails
Content/case studies
P.S. line
Low Upside A/B Tests
Time of day/day of the week
Follow up frequency
Your subject line
Your CTA
Start with the HIGHEST upside items. Then, work your way down the list once everything has been thoroughly tested one at a time (big fat emphasis on one at a time).
(2) How to A/B test the right way
As I mentioned earlier, outbound is a science.
When scientists run experiments, they have a control group and a test group. In the test group, only one factor is changed.
If the test group has too many different factors from the control group, then you won’t know which one impacted the outcome.
If you went to high school this should be obvious (I was homeschooled so I wouldn’t know).
But it’s a good reminder because the exact same concept applies to A/B testing.
Here’s how I approach it:
Step 1: Pick a minimum number of prospects you will test your hypothesis on.
I recommend at least 100 (but keep the # consistent for every test)
Step 2: Pick a minimum time period you will test it for.
I recommend at least one week (again, keep the time consistent for every test)
Step 3: Only test one thing. Keep everything else the same.
As mentioned earlier, test the highest upside items first.
Step 4: Keep track of the data
This part is critical for tracking progress. Here’s an example spreadsheet I use for this.
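If you prefer code to a spreadsheet, here’s a minimal sketch of what that tracking might look like. The variant names and all the numbers below are made up for illustration; the point is one row per variant, the same metrics for every test, and only one factor changed between control and test:

```python
# Minimal A/B test log: one record per variant, same metrics every test.
# All names and numbers here are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    prospects: int   # contacts sequenced (keep at least 100 per test)
    opens: int
    replies: int
    meetings: int

    @property
    def booking_rate(self) -> float:
        # booking rate = meetings booked / prospects contacted
        return self.meetings / self.prospects

# One factor changed between these two (e.g. the persona targeted).
control = Variant("control", prospects=100, opens=55, replies=6, meetings=1)
test    = Variant("test",    prospects=100, opens=62, replies=9, meetings=3)

for v in (control, test):
    print(f"{v.name}: open {v.opens / v.prospects:.0%}, "
          f"reply {v.replies / v.prospects:.0%}, "
          f"booking {v.booking_rate:.1%}")
```

Keeping the prospect count identical across variants (as in Step 1) is what makes the rates directly comparable.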
Focus on the Metrics That Matter
Here are the relevant cold email metrics:
Open rate
Reply rate
Booking rate
First, open rates matter. Some people say they’re a vanity metric. But if your emails aren’t getting opened, how do you expect to get a reply?
Second, cold email content seems to only talk about the first two metrics.
Which is funny because booking rate is by far the most important.
Booking rate = # of meetings booked / # of prospects contacted.
For example, say you put 100 contacts in a sequence and booked two meetings. That would give you a 2% booking rate.
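The arithmetic is simple, but as a one-line sketch (using the made-up numbers from the example above):

```python
def booking_rate(meetings_booked: int, prospects_contacted: int) -> float:
    """Booking rate = meetings booked / prospects contacted."""
    return meetings_booked / prospects_contacted

# 2 meetings from 100 contacts in a sequence:
print(f"{booking_rate(2, 100):.0%}")  # -> 2%
```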
1-3% booking rate is the sweet spot.
If you’re above that, increase volume. Your goal should be to see how high you can increase email volume without decreasing the booking rate by a significant amount.
This is where you should start testing AI tools to automate the process and see if the messaging really works at scale (I talked about this in a previous newsletter).
If you’re below 1%, you need to decrease volume. Your messaging isn’t working yet, and if you try to increase the volume you’ll eventually burn through your TAM (total addressable market) with crappy messaging.
Keep A/B testing the high upside items until you hit 1%. Then slowly scale from there.
Keep in mind: take these booking rate percentages with a grain of salt. They will vary depending on the market you sell to and if you sell to SMB, Mid-Market, or Enterprise.
Final Word:
I realize that following this process takes a lot of patience and intentionality. And I think that’s the real reason why cold email fails. There’s too much pressure to hit numbers this month so we take the easier path: more volume with less focus.
In my experience, this can work for a short period of time. But eventually, you burn out or you burn your TAM, and the metrics dip dramatically.
Thanks for reading,
Jed
P.S. As a paid subscriber, you have access to my private Substack community. If that’s you, join here & say hello once you’re in!