[Update from Ben: 02/08/2011] We wrote this blog post to show that current self-reported “deliverability scores” and “inbox rates” are hard to believe. You have to take the ESP’s word for it that they get “99% to the inbox”. What we need is a truly independent scoring system that anybody can use to verify ESP deliverability claims. We thought we found that (or got pretty darn close) in ReturnPath’s SenderScore.
The first few comments we got were understandably furious. But eventually, the conversation changed. We think we were on the way to a very constructive discussion. I really enjoyed the dialogue I had with people offline as a result of all this, and I want to thank all the email companies who commented here — CritSend, CampaignMonitor, PostMarkApp, and Al Iverson’s A-1 Super Awesome Home DSL Email Service. :-) I mean, don’t get me wrong. We’re competitors. We’re not going to be singing Kumbaya around the campfire with each other any time soon. It’s just nice talking to people who know their stuff. I wish the discussion could continue.
But my patience has been worn down. ReturnPath is naggi—asking me politely to take this post down, because they “don’t want to arbitrate arguments between their partners.” [I didn't realize we were asking them to arbitrate] I suggested that, as an independent, unbiased scoring system, they should just do what I do: ignore the bastards. Actually, I suggested the complainers needed to “grow some” and that ReturnPath ought to tell them so. But that’s not how ReturnPath rolls (thankfully, I guess).
Anyway, some of the arguments we heard about our posted methodology seemed to go like this: “Your methodology is flawed, because SenderScore penalizes IP addresses that send very low volumes and that don’t have a high reputation. For example, I have an IP address that has GREAT inbox rates (um, trust me) but that has a low SenderScore.”
Well, yes. We know that low-volume, low-reputation IPs can get great deliverability.
Buuuut we happen to think that an ESP’s very job is to send high volumes of email while simultaneously maintaining a good IP reputation. We send out tons of email through our infrastructure, 24/7. If our infrastructure were a race car engine, SenderScore would be our tachometer. Does it show actual vehicle speed? No. But it’s extremely indicative of engine performance. If we see our SenderScore drop from 94 to 70, there’s a problem with the engine. It’s time to pull over and get that deliverability fixed.
So to people who say “SenderScore is a bogus number,” we respectfully disagree. It may not work for all senders, but it works for ESPs — the ones who send high volume and want to measure their reputation.
Obviously, I am not concerned with what other ESPs think, or how they respond. But personally, I think that ReturnPath’s naggi — um, polite requests for me to pull this blog post are actually going to work against them. I think they’re defending the very people who are disputing the validity of SenderScore. On the one hand, that concerns me. On the other hand, I’ve always appreciated irony. And I absolutely hate the blinking red voicemail light on my office phone.
So this awesome blog post, which attempted to highlight the usefulness of ReturnPath’s SenderScore, is now officially yanked — at the request of ReturnPath.