Apr 23, 2013

Sending Frequency: More Is Not Always Better!

Lately, I’ve heard a number of people saying something like "Send volume is the key to email success!" Basically, that means sending more email = engaging more, making more money, and doing better (whatever "better" means to you).

Their argument is simple:

  1. My data shows that the more people receive/open/click my email, the more money I make.
  2. Since I can’t magically conjure up new email addresses, I should therefore send more frequently to those email addresses I do have.

If you’ve got 10,000 addresses and every time you send to them you get 100 purchases, then by sending to these addresses twice a month instead of once, you’d expect to get 100 more purchases, right? That’s money in your pocket! Why not go for it?

And to a point, these folks are right. But it’s not that simple.

You may be able to increase your frequency to drive purchases, or you may not. Here’s why: Engagement is not independent of frequency. As you send more, engagement per campaign goes down. So in steady-state (we’ll go into this more) there’s a frequency sweet spot you need to hit. Let me explain:

Frequency and engagement are negatively correlated

Ok, the first thing we need to establish is that frequency and engagement are negatively correlated—meaning as you send more frequently, people tend to engage with each campaign less. For this study, I used click rate instead of open rate, because it’s a more honest metric, and it’s more closely tied to sales for online retailers.

MailChimp has a lot of users we can look at to study the effect of frequency on engagement, but not all have changed sending frequency a lot. Some send like clockwork, so their data is worthless in this discussion.

And it’d be wrong to combine different users’ data, since they send to different lists with different readers who have different expectations about the different content. We want to avoid these differences. So here’s the process I ended up using in my study:

  1. Pull all users from the past two years who’ve sent frequently (at least a couple of times a month) to at least 1k active addresses. Grab the click-through rate from each of their campaigns.
  2. Calculate the sending frequency for a user at each of these campaigns. Rather than the spot frequency, I used a three-send simple moving average (uncentered) to smooth out the readership’s "perceived" sending frequency, which I’m assuming lags reality just a bit.
  3. Eliminate outliers by applying Tukey’s fences, per user, to the (frequency, click rate) pairs. For example, a Black Friday campaign may produce a click spike regardless of the sending frequency around it, so that campaign should be dropped from the analysis.
  4. Keep only those users who have substantially varied their send frequency over time (my criterion: the interquartile range of their frequencies is at least as large as their average frequency).
  5. For each remaining user, run a linear regression of click-through rate against frequency, and study the regressions showing at least a weak, statistically significant relationship (R-squared > 0.2, F-test p-value < 0.05). I’ve sketched this whole pipeline in code just below.
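To make those steps concrete, here’s a rough sketch of the per-user pipeline in Python (pandas + SciPy). This isn’t the exact code I ran against our data: the DataFrame layout, the column names send_date and click_rate, and the standard 1.5 × IQR fence multiplier are assumptions for illustration, but the steps map one-to-one onto the list above.

```python
# Rough sketch of the per-user study, assuming one DataFrame per user with a
# row per campaign and (hypothetical) columns: send_date, click_rate.
import pandas as pd
from scipy import stats

def frequency_vs_engagement(campaigns: pd.DataFrame):
    df = campaigns.sort_values("send_date").copy()

    # Step 2: "perceived" send frequency in sends per month, smoothed with an
    # uncentered three-send simple moving average (it lags reality a bit).
    days_between_sends = df["send_date"].diff().dt.days
    df["frequency"] = (30.0 / days_between_sends).rolling(window=3).mean()
    df = df.dropna(subset=["frequency"])

    # Step 3: drop outliers with Tukey's fences (1.5 * IQR) on both variables,
    # so one-off spikes like Black Friday don't skew the fit.
    for col in ("frequency", "click_rate"):
        q1, q3 = df[col].quantile([0.25, 0.75])
        fence = 1.5 * (q3 - q1)
        df = df[df[col].between(q1 - fence, q3 + fence)]

    # Step 4: only study users who substantially varied their frequency
    # (IQR of frequency at least as large as the mean frequency).
    q1, q3 = df["frequency"].quantile([0.25, 0.75])
    if len(df) < 10 or (q3 - q1) < df["frequency"].mean():
        return None

    # Step 5: regress click-through rate on frequency; keep the fit only if the
    # relationship is at least weak and statistically significant.
    fit = stats.linregress(df["frequency"], df["click_rate"])
    if fit.rvalue ** 2 > 0.2 and fit.pvalue < 0.05:
        return fit.slope, fit.intercept  # slope was negative in every case we kept
    return None
```

The slope and intercept that come out of a fit like this are exactly what feed the optimization later in this post.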

Still with me?

The result of the study was stark: In every case where we got statistically significant fits, there was a negative slope to the regression line. In other words: For all users with good historical data, the more frequently they sent, the lower the individual campaign click-through rate got.

For instance, here are the results from two users:

[Figure: Send more, get less engagement]

[Figure: Here’s another example of declining engagement per campaign]

The Obama campaign doesn’t care about your pesky study!

Now, one may argue that this drop-off doesn’t really matter: even with degrading engagement, you still get the most revenue by flooding inboxes. And the 2012 Obama campaign, which sent at a famously relentless pace, would agree with you.

I would argue that this approach only works in one particular case: Your business is OK with a myopic view of success, because there’s a drop-dead date.

In the case of the Obama campaign, they sent like gangbusters, and the drop in engagement didn’t matter, because once they’d won the election, the profitability of their email list was no longer important. In their very unique case, they only needed to do this rapid-fire sending once; they didn’t need to contemplate the stats they’d see from their list post-election.

If you’re in that kind of business, then burn baby burn. But preferably not on our IP addresses ;-)

But for most businesses, it’s more valuable to think of your email marketing as an ongoing effort in steady-state. You’re not looking to maximize engagement by a drop-dead date, but rather to maximize engagement on an ongoing basis, measured periodically.

And given this thinking, we can use some techniques from the world of revenue management to figure out how often a user should send.

Total engagement over a period is a concave quadratic function, not an increasing line

In light of this negative correlation between send frequency and engagement, we can figure out how many times a month a user should send using a technique that a lot of hotels, airlines, car rental companies, and more use for pricing their inventory.

The graphs above for users X and Y are very similar to demand curves in economics, where demand falls as price increases. So we can think about concepts like the "frequency elasticity of engagement." We’ll take user Y above as an example—feel free to skip down to the bottom of this section if math isn’t your thing. For an individual campaign we have (see the trend line in User Y’s graph):

click-through rate = -0.08% * send frequency + 2.5%

So then, all other things being equal, the total clicks I’m going to get in a month based on my send frequency can be modeled as:

total clicks in a given month = list size * send frequency * (-0.08% * send frequency + 2.5%)
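If it’s easier to read as code than algebra, here’s that model in a few lines of Python. The 10,000-address list size is just the hypothetical list from the top of this post (not User Y’s real list), and the slope and intercept come from User Y’s trend line.

```python
# Monthly clicks = list size * frequency * (slope * frequency + intercept)
def total_monthly_clicks(frequency, list_size=10_000, slope=-0.0008, intercept=0.025):
    return list_size * frequency * (slope * frequency + intercept)

for sends in (4, 8, 15, 22, 30):
    print(f"{sends:2d} sends/month -> {total_monthly_clicks(sends):,.0f} clicks")
```

Run that and the totals climb, peak in the mid-teens, and then fall back off, which is the concave shape we’re about to take advantage of.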

We want to choose our send frequency to maximize total clicks in a period. This function is concave and quadratic. This means that by taking the first derivative and setting it equal to 0, we can find the optimal send frequency (AHHH!!!! Flashes of calculus…):

2 * list size * -0.08% * send frequency + list size * 2.5% = 0

implies…

optimal send frequency = -2.5% / (2 * -0.08%) ≈ 15.6, call it 15 sends per month

In general, then, the optimal send frequency can be calculated from the frequency versus engagement curve as:

-intercept / (2 * slope)

(Although it’s common practice in the hospitality industry to put strategic bounds on that calculation so you don’t drop prices through the floor…or in this case, send more than you care to, even if your data warrants it.)
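Here’s a minimal sketch of that rule as a function, strategic bounds included. The bounds (between 1 and 20 sends a month) are made up purely for illustration; the slope and intercept in the example call are User Y’s.

```python
def optimal_send_frequency(slope, intercept, min_sends=1.0, max_sends=20.0):
    """Vertex of the concave clicks-vs-frequency curve, -intercept / (2 * slope),
    clamped to strategic bounds so the data can't talk you into overdoing it."""
    if slope >= 0:
        # No measured engagement drop-off: the model has no interior optimum,
        # so just fall back to the upper bound you're comfortable with.
        return max_sends
    return max(min_sends, min(-intercept / (2 * slope), max_sends))

# User Y: click-through rate = -0.08% * frequency + 2.5%
print(optimal_send_frequency(slope=-0.0008, intercept=0.025))  # about 15.6 sends/month
```

Notice that list size never appears in that function; it cancels out when you take the derivative, which is exactly the point of the first takeaway below.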

For those of you frightened by the math here, all it’s saying is this: if you’re going after total clicks per month (or some other period), there’s a point where the marginal gains from upping your frequency are outweighed by the degradation in per-send engagement. Finding that balance gives you an optimum, rather than just turning the frequency volume up to 11.

Takeaways

Note that in the above calculation for the optimal send frequency, there’s one value that mysteriously drops out: list size. That makes the whole "You can spam people if your list is big enough to handle it" discussion irrelevant. No matter how big your list, there’s a more nuanced way to send than just blasting.

In the case of User Y, it’s interesting that this optimum works out to essentially sending every other day. There were times in their history when they had sent every day, but the frequency vs. click-through rate curve would recommend they pull back from that pace, given the elastic nature of their readership’s engagement. (For more reading on this approach in revenue management, you can hop over to my personal blog.)

In our findings, the optimal send frequency varied by user, because it depends on your audience and their expectations for your content. So for your own account, if you send enough to justify such a study, go for it. If not, I’d highly recommend ignoring calls to "send send send" and finding a comfortable middle ground between individual campaign engagement and overall periodic engagement.