Tonight at midnight ET (March 5, 12:00am ET), the MailChimp.com public website will be down for roughly 45 minutes. This will not affect the MailChimp application or email campaigns in any way.
Our public website and our application servers are located in 3 different data centers. If the website is down, you can still access MailChimp and send campaigns.
However, if you haven’t already done so, you might want to bookmark the MailChimp login page so that you can access the app servers while the public website server is down. Details after the jump if you’re curious about what’s going on.
Other server news
As we’ve recently blogged (see: MailChimp Server Expansion), we’ve completely redesigned our website and moved it to a separate data center. That way, traffic spikes on the website won’t affect the MailChimp application.
Our little MailChimp.com website is hosted at MediaTemple. They’ve just sent us an alert that they’ll be doing some server upgrades tonight at midnight, and the site will be down for up to 45 minutes.
No big deal, because the MailChimp application is running on separate servers (we refer to them as our "app servers") and those app servers are spread across two different data centers (different location, different company, different everything).
On top of that, we have multiple database servers and multiple email delivery servers (MTAs), also spread across two different data centers. Load balancing, which distributes traffic appropriately to the app servers in the different data centers, is handled by yet another independent service.
We like to put our bananas in different baskets.
The idea is that if one data center has problems, we can go to the load balancing service and redirect traffic to the other one. One data center is in Virginia, the other in NY. The public website is in CA.
Right now, most of our traffic is being routed to our "older" data center. They’re quite modern and advanced, but we call them "the old one" because we’ve been with them for 10 years. We’re slowly distributing more traffic to the "new" data center every week. Currently, the new one is getting around 30% of our load as we warm it up.
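The weighted traffic split described above can be sketched roughly as follows. This is purely illustrative: the data center names, the exact 70/30 weights, and the function names are our assumptions for the example, not MailChimp's actual load balancer configuration.

```python
import random

# Illustrative weights matching the ~70/30 split described above
# (names are hypothetical placeholders, not real data center IDs).
DATACENTER_WEIGHTS = {
    "old-virginia": 0.70,
    "new-ny": 0.30,
}

def pick_datacenter(weights, rand=random.random):
    """Choose a data center by weighted random selection."""
    roll = rand() * sum(weights.values())
    cumulative = 0.0
    for name, weight in weights.items():
        cumulative += weight
        if roll < cumulative:
            return name
    return name  # fallback for floating-point edge cases

def failover(weights, failed):
    """Drop a failed data center so all traffic goes to the healthy ones."""
    return {name: w for name, w in weights.items() if name != failed}
```

In this sketch, "warming up" the new data center just means nudging its weight upward over time, and a failover is simply removing the broken entry so the remaining weights absorb all traffic.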
We’ve got more database servers and email delivery servers loaded up and ready to go online, but it’s been a slow and careful process getting them all in place, working together, and testing failover. It’s frustrating for us, because we really want to just "flip all the switches on," particularly as we watch our customer base grow so rapidly.
But we want to get things wired together just right.