AWS Orchestration / Unemployed DevOps Professionals

Now that we live in the age of ~the cloud~, it strikes me that for many software projects the traditional role of the system administrator, and its more recently rebranded “DevOps” successor, is not strictly required.

I can’t speak much to other cloud hosting platforms, but Amazon Web Services really is the flyest shit. All of Amazon’s internal infrastructure has for many years been built with APIs to control everything, by decree of Jeff Bezos, the CEO. This was a brilliant mandate, because it allowed Amazon to become one of the first companies able to re-sell its spare server capacity and make its automated platform services available to everyday regular developers such as myself. They’ve been in the business of providing service-oriented infrastructure to anyone with a credit card longer than just about anyone else, and their platform is basically unmatched. Whereas before, setting up a fancy HTTPS load balancer or a highly available database cluster with automated backups was a time-consuming process, now a few clicks or API calls will set you up with nearly any infrastructure your application requires, with as much reliability and horsepower as you’re willing to pay for.

I’ve heard some people try to argue that AWS is expensive. This is true if you use it like a traditional datacenter, which it’s not. If you run all your own services on EC2 and pay an army of expensive “DevOps” workers to waste time playing with Puppet or Chef or some other nonsense, then perhaps it’s a bit costly. But compared with the power, bandwidth, datacenter, sysadmin, and hardware costs, plus the maintenance overhead, of running on your own metal, I still doubt AWS is going to cost you any more. In all likelihood your application really isn’t all that special. You probably have a webserver, a database, somewhere to store some files, maybe a little memcached cluster and a load balancer or two. All of this can be had for cheap in AWS, and any developer could set up a highly available production-ready cluster in a few hours.

Us software engineers, we write applications. These days a lot of them run on the internet and you wanna put them somewhere on some computers connected to the internet. Back in “the day” you might have put them on some servers in a datacenter (or your parents’ basement). Things are a little different today.

Some time ago I moved my hosting from a traditional datacenter to AWS. I really didn’t know a lot about it so I asked the advice of some smart and very experienced nerds. I thought it would be pretty much the same as what I was doing but using elastic compute instances instead of bare metal. They all told me “AWS is NOT a datacenter in the cloud. Stop thinking like that.”

For example, you could spin up some database server instances to run MySQL or PostgreSQL, or you could just have AWS do it for you. You could set up HAProxy or buy really expensive hardware load balancers, or simply use an Elastic Load Balancer. You could run a mail server if you’re into that, but I prefer SES. Memcached? Provided by ElastiCache. Thinking of setting up Nagios and Munin? CloudWatch is already integrated into everything.
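To give a flavor of what “just have AWS do it for you” looks like, here is a minimal Boto sketch of creating an Elastic Load Balancer instead of running HAProxy yourself. All the names, zones, and ports are placeholders, not a prescription.

```python
def elb_listeners():
    # (load balancer port, instance port, protocol); an HTTPS listener
    # would additionally need a certificate ARN
    return [(80, 8080, 'http')]

def create_web_elb(name='my-webapp-elb', region='us-east-1'):
    # Requires boto and AWS credentials; all names here are placeholders.
    import boto.ec2.elb
    conn = boto.ec2.elb.connect_to_region(region)
    return conn.create_load_balancer(name,
                                     zones=['us-east-1a', 'us-east-1b'],
                                     listeners=elb_listeners())

# Against a real account:
#   lb = create_web_elb()
#   print(lb.dns_name)
```

One API call and you have a load balancer with a DNS name, versus an afternoon of HAProxy config.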

Point being: all the infrastructure you need is provided by Amazon, and you don’t need to pay DevOps jokers to set it up for you. AWS engineers have already done all the work. Don’t let smooth-talking Cloud Consultants talk you into any sort of configuration-management time-wasters like Puppet. Those tools impose extra overhead to make your systems declaratively configured rather than imperatively, because they are designed for people who maintain systems. In EC2-land you can, and should, be able to kill off any instance at any time and have a new one pop up in its place, assuming you’re using autoscaling groups. You are using ASgroups, right? You will be soon!

When you can re-provision any instance at will, there is no longer any need to maintain and upgrade configuration. Just make a new instance and terminate the old ones. Provision your systems using bash. Or RPMs if you want to get really fancy. You really don’t need anything else.
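Here’s a sketch of what “provision your systems using bash” can look like in practice: render a cloud-init user-data shell script and bake it into a launch configuration with Boto. The package names, AMI ID, and service command are all hypothetical examples, not anything your app actually needs.

```python
def build_user_data(packages, start_cmd='service myapp start'):
    """Render a cloud-init user-data bash script (package names are examples)."""
    lines = ['#!/bin/bash', 'yum -y update']
    lines.append('yum -y install ' + ' '.join(packages))
    lines.append(start_cmd)
    return '\n'.join(lines) + '\n'

def create_launch_config(name, ami, user_data, region='us-east-1'):
    # Requires boto and AWS credentials; run only against a real account.
    import boto.ec2.autoscale
    from boto.ec2.autoscale import LaunchConfiguration
    conn = boto.ec2.autoscale.connect_to_region(region)
    lc = LaunchConfiguration(name=name, image_id=ami,
                             instance_type='m3.medium',
                             user_data=user_data)
    conn.create_launch_configuration(lc)
    return lc

# Usage (against a real account; AMI ID is a placeholder):
#   ud = build_user_data(['nginx', 'myapp'])
#   create_launch_config('webapp-lc-1', 'ami-xxxxxxxx', ud)
```

Every instance the ASgroup launches from this config runs the script on first boot, so “upgrading” is just cycling instances.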

I’m a fan of Amazon Linux, which is basically just CentOS. I use a nifty yum plugin that lets me store RPMs in the Simple Storage Service (S3) and have instances authenticate via IAM instance roles. This is a supremely delightful way of managing dependencies and provisioning instances.
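For the curious, a repo definition for an S3-backed yum setup looks roughly like this. I’m not naming the exact plugin here, and the S3-specific key names vary between the various S3/IAM yum plugins, so treat this as an illustrative fragment, not a drop-in file.

```ini
# /etc/yum.repos.d/myapp.repo -- bucket name and paths are placeholders
[myapp]
name=My private RPMs in S3
baseurl=https://my-rpm-bucket.s3.amazonaws.com/x86_64
enabled=1
gpgcheck=0
# S3/IAM authentication toggle; exact key name depends on the plugin
s3_enabled=1
```

With IAM instance roles attached to your launch config, instances get temporary credentials for the bucket automatically; no secrets baked into the AMI.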

The last piece of the puzzle is orchestration; once you have all of your infrastructure in place, you still need to perform tasks to maintain it: updating your launch configurations and autoscaling groups, deploying code to your servers, terminating and rebuilding clusters, declaring packages to be installed on EC2 with cloud-init, and so on. You could do all of this by hand, or script it yourself, except that you don’t have to, because I already did it for you!

To be totally honest, my AWS setup is pretty freaking sweet. The reason it is freaking sweet is because I listened to grumpy old AWS wizards and took notes and built their recommendations into a piece of software I call Udo – short for Unemployed DevOps.

Udo is a pretty straightforward application. It essentially provides a configuration-driven command-line interface to Boto, the Python library for interfacing with the AWS APIs. It is mostly centered around autoscaling groups, which are a very powerful tool not only for performing scaling tasks but also for logically grouping your instances. In your configuration file you can define multiple clusters to group your ASgroups, and then define “roles” within your clusters. I use this system to create clusters for development, QA, staging, and production, and then in each cluster I have “webapp” roles and “worker” roles, to designate instances which should handle web requests vs. asynchronous job-queue workers. You can of course structure your setup however you want.
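To make the cluster/role idea concrete, a udo.yml along these lines describes the setup above. The exact keys are illustrative, not necessarily Udo’s real schema; the shape is what matters: clusters contain roles, and each cluster/role pair becomes an ASgroup.

```yaml
# illustrative udo.yml sketch; key names may differ from Udo's actual schema
clusters:
  stage:
    roles:
      webapp:           # becomes the stage-webapp ASgroup
        instance_type: m3.medium
        scale: { min: 2, max: 4 }
      worker:           # asynchronous job-queue workers
        instance_type: m3.large
        scale: { min: 1, max: 2 }
  prod:
    roles:
      webapp:
        instance_type: m3.xlarge
        scale: { min: 4, max: 12 }
```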

Using Udo is as simple as it gets. It’s a python module you can install like any other (sudo easy_install udo). Once it’s installed you create a configuration file for your application’s setup and a Boto credentials file if you don’t already have one. Then you can take it for a spin.
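If you’ve never set up a Boto credentials file, it’s a two-line INI file in your home directory (the key values here are obviously placeholders):

```ini
# ~/.boto
[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```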

The cluster/role management feature is central to the design. It means you never have to keep track of individual instances or IP addresses, or run any sort of agents on your instances. Finding all of your stage webapp server IPs, for example, is as easy as looking up the instances in the stage-webapp autoscaling group. You can easily write tools to automate tasks with this information. We have a script that sends commands to an autoscaling group via SSH, which works by reading the external IPs of the instances in the group. This is so useful we plan on adding it to Udo in the near future, but it’s an example of the sort of automation that would normally require fancy tools and daemons, or keeping track of IPs in some database somewhere, but is totally simplified by making use of the tools Amazon already provides you.
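Here’s a minimal Boto sketch of that lookup (not Udo’s or our script’s actual code): resolve a cluster/role pair to its ASgroup name, pull the instance IDs out of the group, and ask EC2 for their public IPs.

```python
def group_name(cluster, role):
    """Cluster/role pairs map to ASgroup names like 'stage-webapp'."""
    return '%s-%s' % (cluster, role)

def group_public_ips(cluster, role, region='us-east-1'):
    # Requires boto and AWS credentials.
    import boto.ec2
    import boto.ec2.autoscale
    asg = boto.ec2.autoscale.connect_to_region(region)
    group = asg.get_all_groups(names=[group_name(cluster, role)])[0]
    ids = [i.instance_id for i in group.instances]
    ec2 = boto.ec2.connect_to_region(region)
    return [inst.ip_address
            for inst in ec2.get_only_instances(instance_ids=ids)]

# e.g. run a command on every stage webapp box:
#   for ip in group_public_ips('stage', 'webapp'):
#       subprocess.call(['ssh', 'ec2-user@' + ip, 'uptime'])
```

No agents, no inventory database; the ASgroup itself is the source of truth.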

Udo has a few nifty features on offer. One handy command is “updatelc” – update launch configuration. Normally you cannot modify a launch configuration attached to an autoscaling group, so Udo instead creates a copy of your existing launch config and then swaps it in on your ASgroup, allowing you to apply udo.yml configuration changes without terminating your ASgroup. Very handy for not having to bring down production to make changes.
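The copy-and-swap trick looks roughly like this in Boto. This is a sketch of the idea, not Udo’s actual implementation; the versioned-name helper is my own invention for illustration.

```python
import re

def next_lc_name(current):
    """'webapp-lc-3' -> 'webapp-lc-4'; appends '-1' if there is no counter yet."""
    m = re.match(r'^(.*)-(\d+)$', current)
    if m:
        return '%s-%d' % (m.group(1), int(m.group(2)) + 1)
    return current + '-1'

def swap_launch_config(group_name, new_lc, region='us-east-1'):
    # Launch configs are immutable once attached, so: register a fresh copy,
    # then point the ASgroup at it. Requires boto and AWS credentials.
    import boto.ec2.autoscale
    conn = boto.ec2.autoscale.connect_to_region(region)
    conn.create_launch_configuration(new_lc)
    group = conn.get_all_groups(names=[group_name])[0]
    group.launch_config_name = new_lc.name
    group.update()
```

Existing instances keep running on the old config; only newly launched instances pick up the new one, which is exactly what you want for a rolling change.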

Another powerful feature is tight integration with CodeDeploy, a recent addition to the AWS ops tool suite. As far as I’m aware, Udo is the first and only application to support CodeDeploy at this time, and I actually have an epic support ticket open with a sizable pile of feature requests and bug reports. Despite its rather alpha level of quality, it is extremely handy and we are already using it in production. CodeDeploy allows AWS to deploy a revision straight from GitHub or S3 to all instances in an autoscaling group, or with a particular tag, without any intervention on your part other than issuing an API call to create a deployment. You can add hooks to be run at various stages of the deployment, for tasks like making sure all your dependencies are installed or restarting your app. I’d honestly say it’s probably the final nail in the coffin for the DevOps industry.
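For a sense of what “issuing an API call to create a deployment” means, here’s a hedged sketch of kicking off a CodeDeploy deployment from a GitHub revision. The revision dict follows the CodeDeploy API’s camelCase shape; Boto’s CodeDeploy support is brand new as of this writing, so the exact call signature may differ from what’s shown, and every name here is a placeholder.

```python
def github_revision(repo, commit_id):
    """CodeDeploy revision spec for a GitHub source (API camelCase keys)."""
    return {
        'revisionType': 'GitHub',
        'gitHubLocation': {'repository': repo, 'commitId': commit_id},
    }

def deploy(app, group, revision, region='us-east-1'):
    # Hypothetical call; verify the signature against your boto version.
    # Requires boto and AWS credentials.
    import boto.codedeploy
    conn = boto.codedeploy.connect_to_region(region)
    return conn.create_deployment(application_name=app,
                                  deployment_group_name=group,
                                  revision=revision)

# deploy('myapp', 'stage-webapp',
#        github_revision('myuser/myapp', 'abc123deadbeef'))
```

One call, and CodeDeploy handles fanning the revision out to every instance in the deployment group, running your hooks along the way.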
