Maestro — Creating a Serverless Deployment Tool

Maxwell Moon · Published in Backstage · Apr 25, 2018

The name
In orchestral music the term maestro is synonymous with conductor: an accomplished musician capable of leading a large group of musicians in performing a piece (or pieces) of music. In the world of serverless infrastructure and the SDLC, Maestro conducts the deployment of lambdas that perform their parts in making your application(s) run.

The backstory
A few years ago TuneIn’s engineering teams decided to invest in microservices and AWS. Each team moved forward at its own pace, some adding lambdas as an integral piece of their application, others using them for utility tasks like cleanup processes. All in all we wound up with lambdas in .NET, Go, and Python handling a variety of responsibilities. Deployment was a mixed bag: some teams wrote scripts that wrapped AWS CLI commands, while others simply zipped up their code and uploaded it via the Lambda console. This process hummed along happily for a while, and lambdas were updated and ran successfully, but ultimately it was not a scalable approach.

The problems
There were four big issues we were facing:

  1. No unified deployment method from team to team
  2. Heavily dependent on specific people
  3. No real capability to roll back versions in case of issues
  4. We were creating and defining lambdas manually for the most part

The solutions
I’m a fan of simple tools that do a lot of heavy lifting. I want maximum bang for my buck. The ideal situation as a DevOps engineer is to have a toolchain so strong and easy to use that developers don’t even really need you. So when I set out to create a tool to solve the above four issues, I wanted to accomplish a few main things:

  1. It should use a format developers were already comfortable with (JSON)
  2. It should be easy to start using with existing lambdas or to provision new ones (a short, simple configuration file)
  3. Deploying changes locally should use the same steps as our CI/CD pipeline

JSON is optimal for configuration files because everyone is comfortable with it, and an easy-to-assemble configuration file means repeatability across environments, regions, or accounts (Infrastructure as Code). Simplicity also meant new lambdas weren’t going to get thrown over the wall to DevOps with a nice note saying “please configure and deploy”; creating a config file is a simple two-minute task you can knock out early in development. Lastly, by creating a command line tool, we could ensure that developers used the exact same commands to deploy from their local machines that were used in the CI/CD pipelines.

To address the issue of rolling back versions, we decided that moving forward all traffic/events should hit an alias, not just $LATEST. This would allow us to push up new code, publish a version, then promote that version to the alias; if we needed to roll back, it was two clicks away. Thanks to the way AWS handles triggers, this meant we could attach a trigger to an alias, and any version we assigned to the alias would use the trigger immediately.
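The alias-based rollback reduces to a single Boto3 call: repoint the alias at an earlier published version, and any triggers attached to the alias follow it automatically. A minimal sketch of the idea (the `rollback` helper and its arguments are hypothetical, not Maestro’s actual code; `client` is assumed to be a boto3 Lambda client):

```python
def rollback(client, function_name, alias, version):
    """Point an alias back at an earlier published version.

    Because triggers are attached to the alias rather than to a
    specific version, repointing the alias is the whole rollback.
    """
    return client.update_alias(
        FunctionName=function_name,
        Name=alias,
        FunctionVersion=str(version),  # Lambda versions are strings
    )
```

In production this would be called with `boto3.client("lambda")`; the function takes the client as an argument so it stays easy to test.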

Lastly, to address the provisioning issue, it was decided that the tool should implement a `create` command, which makes the config file the egg and the code the chicken.

The tool
Once we had the problems and how we wanted to solve them, building the tool was easy. I wanted the lowest possible barrier to entry; ease of use was equal in importance to performance. Most developers spend their day in their IDE, not clicking around in AWS looking for 64-character ARNs, so I wanted to abstract that complexity by doing the work for them. At the end of the day, the config file for a basic lambda with a trigger, running in a VPC, came out looking like this:

```json
{
    "initializers": {
        "name": "my-lambda",
        "description": "An awesome lambda",
        "region": "us-west-2",
        "role": "my-lambda-role",
        "handler": "handler.main",
        "alias": "LIVE-1"
    },
    "provisioners": {
        "runtime": "python3.6",
        "timeout": 150,
        "mem_size": 128
    },
    "vpc_setting": {
        "vpc_name": "my-vpc",
        "security_group_ids": ["my-lambda-sg"]
    },
    "variables": {
        "key1": "value1",
        "key2": "value2"
    },
    "dead_letter_config": {
        "type": "sns",
        "target_name": "my-awesome-sns-topic"
    },
    "tags": {
        "Name": "my-awesome-lambda",
        "environment": "stage"
    },
    "trigger": {
        "method": "cloudwatch",
        "source": "example-10-min-cron"
    }
}
```
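Loading a config like this needs nothing beyond the standard library. Here is a sketch of a loader; the set of required keys is an assumption for illustration, not Maestro’s actual validation logic:

```python
import json

# Keys the example config always carries; treating them as required
# is an assumption for this sketch, not Maestro's real rule set.
REQUIRED_INITIALIZERS = {"name", "region", "role", "handler", "alias"}

def load_config(text):
    """Parse a Maestro-style JSON config and sanity-check the initializers."""
    config = json.loads(text)
    missing = REQUIRED_INITIALIZERS - set(config.get("initializers", {}))
    if missing:
        raise ValueError(f"missing initializer keys: {sorted(missing)}")
    return config
```

Failing fast on missing keys means a typo in the config surfaces before any AWS API call is made.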

We decided to use Python for ease of use and because Boto3 is a fantastic, well-supported, and well-documented package. The underlying application would handle communicating with AWS APIs for all lambda operations, as well as retrieving things like VPC ARNs, Security Group IDs, and other information we wanted to abstract away for the sake of ease of use. Since Lambda expects a zipped archive containing the content for the lambda, we decided to add a function that packages your code for you as well. Just make sure everything you need ends up in a folder called `/dist` at the same level as your config files and you’re set.
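The packaging step itself is simple enough to sketch with the standard library: walk the `dist/` folder, zip its contents, and hand the bytes to Lambda. A minimal version (the function name is hypothetical) might look like:

```python
import io
import zipfile
from pathlib import Path

def package_dist(dist_dir):
    """Zip the contents of a dist/ folder into the archive bytes
    Lambda expects for its ZipFile parameter.

    Paths inside the archive are stored relative to dist/, so a
    handler like handler.main resolves at the archive root.
    """
    dist = Path(dist_dir)
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(dist.rglob("*")):
            if path.is_file():
                zf.write(path, path.relative_to(dist))
    return buf.getvalue()
```

Building the archive in memory avoids leaving a stray `.zip` next to the config files, and the resulting bytes can go straight into an `update_function_code` call.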

Finally, we needed commands to handle all of the actions we wanted to perform. We came up with the following list, which covered most of our needs:

  • create
  • update-config
  • update-code
  • publish
  • delete
  • create-alias
  • update-alias
  • delete-alias
  • invoke
  • init
  • import
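A dispatch layer for a command set like this could be built on `argparse` subcommands. The sketch below is illustrative only, assuming each action takes an optional config file path; it is not Maestro’s actual CLI:

```python
import argparse

ACTIONS = [
    "create", "update-config", "update-code", "publish", "delete",
    "create-alias", "update-alias", "delete-alias", "invoke",
    "init", "import",
]

def build_parser():
    """One subcommand per action; each takes an optional config path."""
    parser = argparse.ArgumentParser(prog="maestro")
    sub = parser.add_subparsers(dest="action", required=True)
    for action in ACTIONS:
        cmd = sub.add_parser(action)
        cmd.add_argument("config", nargs="?", help="path to a lambda config file")
    return parser
```

With subcommands, `maestro update-code your-awesome-lambda.json` parses into an action name plus a config path, which maps cleanly onto one handler function per action.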

The how

We utilize Maestro in a series of steps in our build and release pipeline.

  1. Build the code
  2. Update the configuration of the lambda
      - This allows us to prepare for any potential configuration changes we’ve added to the config file
      - Command: `maestro update-config your-awesome-lambda.json`
  3. Update the code
      - Here we push our new package up, where it will become available as $LATEST
      - Command: `maestro update-code --no_pub your-awesome-lambda.json`
      - Note the `--no_pub` flag here; this ensures our code goes ONLY to $LATEST
  4. Publish a new version of the lambda
      - Here we publish what is currently running as $LATEST as a numbered version; AWS assigns the number, and that package is unmodifiable moving forward
      - Command: `maestro publish your-awesome-lambda.json`
  5. Update the alias to use the newest published version
      - Here we point the alias to the version we published in step 4. AWS increments version numbers, so we don’t need to pass a version number into the command; we just needed a simple function to retrieve the largest version integer from AWS
      - Command: `maestro update-alias --publish your-awesome-lambda.json`
      - Note the `--publish` flag here; it skips the manual input you would normally provide. Without it, you get a printout of the version currently assigned to the alias and a list of other versions, and you are asked which version to assign. `--publish` automates the process and just assumes you want the newest version
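The “largest version integer” lookup mentioned in step 5 reduces to filtering out `$LATEST` and taking the numeric maximum. A sketch of such a function (the name is hypothetical, and the input list stands in for the version strings AWS returns):

```python
def latest_version(version_strings):
    """Return the highest published version number, ignoring $LATEST.

    Lambda reports versions as strings like "1", "2", ..., plus the
    special "$LATEST" entry, so we filter to digits before comparing.
    """
    numeric = [int(v) for v in version_strings if v.isdigit()]
    if not numeric:
        raise ValueError("no published versions yet")
    return str(max(numeric))
```

Comparing as integers matters: a plain string comparison would rank "9" above "10", which would promote the wrong version once a lambda passes ten releases.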

All in all, what we solved for was reduced time and complexity. Our lambda deployments, including build time, now take around 30–45 seconds. Our developers write their own Maestro deployment configurations and can work as fast as they want without depending on a release engineer to configure and deploy. The majority of our lambdas in our development environment are deployed automatically on merge into the development branch, so as a developer you’re seeing your code in action in under a minute: a huge improvement over the 10+ minutes it used to take to build, deploy, and validate. By getting the deployment process out of the way, our developers can do what they do best, write code to help deliver the world’s best listening experience.

https://github.com/tunein/Maestro
