Saturday, January 19, 2013

VPC Migration: Planning

I'm looking at moving all of Lucidchart's servers into Amazon's VPC. This is no small task, nor should it be approached without a good plan and the collective knowledge of those who have done it before.

I will be documenting the migration to VPC as it happens. To start, here are the advantages and disadvantages of moving to VPC, and my plan for doing it.


Advantages and Disadvantages

Lucidchart is built on Amazon's EC2. Having never dabbled in VPC, I've been hesitant to make the transition; however, the advantages now outweigh the drawbacks. As I see it, here are the advantages.
  1. Internal elastic load balancers. In EC2, an ELB accepts traffic from anywhere on a publicly accessible IP address. This is great for web servers, but it is unthinkable for private services like our font service, PDF service, mailing service, etc. In VPC, an ELB can be either internet-facing or internal. Currently, we run our own haproxy servers, which cost $400 a month and only scale manually. Their function could be replaced by two internal ELBs that would cost $36 a month and scale automatically (see the first sketch after this list).
  2. Virtual private network. VPC supports VPN connections using IPsec.
  3. Heightened security. Only the instances given elastic IP addresses are accessible from the internet. This is naturally more secure than giving every instance a public IP.
  4. Changing security groups on running instances. The process of switching security groups on running instances in EC2 is heinous and requires downtime. In VPC, an instance's security groups can be changed on the fly, while it is still running (see the second sketch after this list).
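To make the first point concrete, here is roughly what creating an internal load balancer looks like. This is a minimal sketch using boto3, the AWS SDK for Python; the load balancer name, subnet IDs, security group ID, and ports are all placeholders, not our actual configuration.

    import boto3

    elb = boto3.client('elb')

    # Scheme='internal' gives the load balancer a DNS name that resolves
    # to private addresses inside the VPC, so only internal traffic can
    # reach it.
    elb.create_load_balancer(
        LoadBalancerName='font-service-internal',   # placeholder name
        Listeners=[{
            'Protocol': 'HTTP',
            'LoadBalancerPort': 80,
            'InstanceProtocol': 'HTTP',
            'InstancePort': 8080,
        }],
        Subnets=['subnet-11111111', 'subnet-22222222'],  # placeholder IDs
        SecurityGroups=['sg-33333333'],                  # placeholder ID
        Scheme='internal',
    )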
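And here is the security group swap from the fourth point. Again a hedged boto3 sketch with placeholder IDs; note that Groups is a full replacement set, not an addition.

    import boto3

    ec2 = boto3.client('ec2')

    # On a running VPC instance, the set of security groups can be
    # replaced in place; the new rules take effect immediately, with
    # no restart and no downtime.
    ec2.modify_instance_attribute(
        InstanceId='i-0abc1234',                 # placeholder ID
        Groups=['sg-44444444', 'sg-55555555'],   # full replacement set
    )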
I've read blogs, documentation, how-tos, and post-mortems, and even spoken with a couple of AWS architects about moving to VPC. There are a number of hurdles to hop over and hoops to jump through, but I think I've dug through enough of other people's experiences to take a crack at it. To summarize what I've read, here are the issues to know beforehand.
  1. Network communication. Because instances in the VPC reside on their own VLAN without public IP addresses, EC2 instances cannot talk to VPC instances by default.
  2. Subnets, routes, and gateways. Running instances in VPC takes more networking knowledge than running them in EC2. Before launching any instances, one must configure the subnets, routes, and gateways (a sketch of this setup follows the list).
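To give a feel for the second point, here is the plumbing that has to exist before the first instance launches: a VPC, a subnet, an internet gateway, and a route table tying them together. A minimal boto3 sketch; the CIDR blocks are just examples.

    import boto3

    ec2 = boto3.client('ec2')

    # Create the VPC and one subnet inside it (example CIDR blocks).
    vpc_id = ec2.create_vpc(CidrBlock='10.0.0.0/16')['Vpc']['VpcId']
    subnet_id = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock='10.0.0.0/24')['Subnet']['SubnetId']

    # Attach an internet gateway so instances with elastic IPs can
    # reach the internet.
    igw = ec2.create_internet_gateway()
    igw_id = igw['InternetGateway']['InternetGatewayId']
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

    # Route all non-local traffic through the gateway, and associate
    # the route table with the subnet.
    rtb_id = ec2.create_route_table(VpcId=vpc_id)['RouteTable']['RouteTableId']
    ec2.create_route(RouteTableId=rtb_id,
                     DestinationCidrBlock='0.0.0.0/0',
                     GatewayId=igw_id)
    ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)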

The Plan

I have around 40 to 60 m1.large instances, 2 scale groups, 12 database servers, and 3 cache servers running in EC2 at any one time. All of them need to be moved over to VPC with no downtime.
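Rather than eyeballing the console, it's worth scripting the inventory before and during the move. A small boto3 sketch that tallies running instances by type:

    import boto3
    from collections import Counter

    ec2 = boto3.client('ec2')

    # Tally running instances by type across all reservations.
    counts = Counter()
    pages = ec2.get_paginator('describe_instances').paginate(
        Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
    for page in pages:
        for reservation in page['Reservations']:
            for instance in reservation['Instances']:
                counts[instance['InstanceType']] += 1

    for instance_type, count in sorted(counts.items()):
        print(instance_type, count)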

The plan is to move them, starting with the frontend instances, one by one or scale group by scale group, until finished. Starting with the frontend (web servers) and working backwards lets me maintain uptime during the transition, because VPC instances can talk to EC2 instances, but not the other way around.

One helpful feature that we won't be taking advantage of is the ability to attach existing EBS volumes to VPC instances. All of our servers are EBS backed, so we could stop each instance, detach its root volume, launch an instance in the VPC, and attach the same root volume to the new VPC instance. This would simplify the transition greatly. However, we also need to move from Ubuntu 11.10 to Ubuntu 12.04, which means rebuilding the root volumes anyway. I'm usually not a fan of changing multiple things at once, but this change seems prudent since it has been on the backlog for a year.
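For reference, here is roughly what that volume swap would have looked like. A hedged boto3 sketch; the instance and volume IDs are placeholders, and the device name depends on the AMI.

    import boto3

    ec2 = boto3.client('ec2')

    OLD_INSTANCE = 'i-0aaa0000'   # placeholder: EC2 instance being retired
    NEW_INSTANCE = 'i-0bbb0000'   # placeholder: stopped VPC instance whose
                                  # own root volume is already detached
    ROOT_VOLUME = 'vol-0ccc0000'  # placeholder: root volume to carry over

    # Stop the old instance so its root volume can be detached cleanly.
    ec2.stop_instances(InstanceIds=[OLD_INSTANCE])
    ec2.get_waiter('instance_stopped').wait(InstanceIds=[OLD_INSTANCE])

    # Move the root volume over to the new VPC instance.
    ec2.detach_volume(VolumeId=ROOT_VOLUME)
    ec2.get_waiter('volume_available').wait(VolumeIds=[ROOT_VOLUME])
    ec2.attach_volume(VolumeId=ROOT_VOLUME,
                      InstanceId=NEW_INSTANCE,
                      Device='/dev/sda1')  # device name depends on the AMI

    ec2.start_instances(InstanceIds=[NEW_INSTANCE])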

As for a VPN, I am not going to take the time to set it up. It does not provide enough value to justify the work.

Coming Up

I intend to make various posts about the actual transition: things I was right about, things I was wrong about, and things I didn't expect. So, stay tuned.

Updated: VPC Setup
