Enabling bufferbot to do parallel deployments

Background

At Buffer, we still do a lot of our deployments via Slack. We are in the process of migrating to continuous deployment, where merging a branch to master on GitHub (merging pull requests) triggers a deployment to production. But for some existing services, like the main Buffer monolith itself, we still use a Slack bot command:

@bufferbot deploy web to push code to our Elastic Beanstalk environment.

We have a few Elastic Beanstalk environments, and one of them is for our API. We are also in the process of moving the API layer to Kubernetes, where it will take advantage of our newer deployment process. Until it fully moves over, however, I need to enable bufferbot to deploy to Kubernetes and Elastic Beanstalk in parallel when someone runs:

@bufferbot deploy api

Normally a microservice deployment via Slack happens using:

@bufferbot servicedeploy <name-of-service>.

The end result of this exercise will be as if someone had typed in both commands at once.

Stack

We currently host bufferbot's code on Heroku since we use Hubot. Hubot is useful, but we will be migrating to Botkit soon. In the meantime, though, I have had to break out CoffeeScript again to make the changes, which is simple enough. In pseudocode it looks like this:

# listen for the "bufferbot deploy ..." string
# store params in stringParams
targetedDeployment = stringParams[0] # this could be web, api, or any of our other environments
doStandardDeploymentStuff(targetedDeployment)

if targetedDeployment == 'api'
  # code to call the intermediate service responsible for the Kubernetes build and deploy steps
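
When targetedDeployment is 'api', that extra call might look roughly like the sketch below in CoffeeScript. The triggerKubernetesDeploy helper, the K8S_DEPLOY_SERVICE_URL environment variable, and the payload shape are all placeholders I'm making up for illustration; robot.http is Hubot's built-in HTTP client.

# rough sketch only: ask the intermediate service to run the Kubernetes
# build and deploy steps (helper name, URL, and payload are placeholders)
triggerKubernetesDeploy = (robot, msg, branch) ->
  payload = JSON.stringify {service: 'api', branch: branch}
  robot.http(process.env.K8S_DEPLOY_SERVICE_URL)
    .header('Content-Type', 'application/json')
    .post(payload) (err, res, body) ->
      if err
        msg.send "Kicking off the Kubernetes deploy failed: #{err}"
      else
        msg.send "Kubernetes deploy for the api triggered in parallel"

The real handler would just call something like this from inside the if targetedDeployment == 'api' branch, so the Elastic Beanstalk deploy and the Kubernetes deploy fire together.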

July 19th 2018

I'm now running almost all of my development on a Windows machine. I use the WSL layer to run things I would normally do on Ubuntu. So far it's been pretty solid. But today I'm running into trouble. Installing the Heroku CLI is proving to be quite the challenge, to the point that I might just use the provided Windows installer to manage things for me. I'll also need to install Git for Windows.

12:36 While I download Git, I'll just highlight the issue that I'm having installing the Heroku CLI in the Ubuntu layer on Windows (henceforth referred to as the WSL layer).

  • Heroku's instructions for Ubuntu state to use snap to install. You can check out everything about Snapcraft if you want to know more about that.
  • But it turns out that snap needs some services running in the background that have to be started by systemd, which apparently does not sit well with WSL. That's quite a dealbreaker for me.
  • So I'm downloading Heroku now, but I'm also going to download the binaries and add them to /usr/local/bin and /usr/local/lib to see if I can get things started.

12:47 Downloaded and installed on WSL using the binary download option. Time to log in and get things set up.

  • Done and dusted. So far so good. Now to set up my local repo to be connected to the Heroku remote as well, preferably without busting the deployment over there.

12:53 Annd the git remote is ready. Now I need to actually test my code to make sure I'm not uploading busted code. The only thing is, I'm not entirely sure how to test the code in this case; the problem is that I want to see if it actually reaches the production stuff correctly. So I might have to take a chance, do a deployment, and test in production. YOLO.

12:59 I do love VS Code. Not that Vim couldn't do this, but the ease of finding and installing a plugin to do CoffeeScript code checks, and then actually catching a problem before deploying a test branch, is awesome.

13:05 Deploying. Please work.

13:07 So far so good.

13:08 So I'm going to first deploy the master branch to the version of the API running on Kubernetes. I need to make sure that process is still working as intended. I'll be using bufferbot servicedeploy for that. Any subsequent builds will still be based off the same git hash for now (unless someone merges something in). We have a nifty thing in the deployment pipeline where, if something has been built already, it skips the Docker image build step and jumps straight to the step that does the deployment in Kubernetes. More on that in another blog post, I guess.

  • Build was a success. On to testing the parallel version.

13:18 Noooo. Something is wrong. For whatever reason, the request isn't making it to the deployment service.

13:42 After a short break we are back to it. I've added some logging.

  • Oki. Found the error in the logs. Hubot has multiple ways to make HTTP requests. You can pass in the object that contains the message; that has a method called message.http. Or you can pass in the robot object, which is responsible for listening; that too has a robot.http method. When writing the code under the if condition, I copy-pasted code from the servicedeploy command. That code uses robot to make its HTTP requests, but I wasn't passing robot into the method. (There's a small illustration of the two entry points after the code below.)

The code looks like this:

module.exports = (robot) ->

  robot.respond /deploy\s*([.\_a-zA-Z0-9-/']+\s)(to\s+)?(.+)/i, (msg) ->
    branch     = msg.match[1] or null
    target     = msg.match[3] or null
    branch     = branch.replace /^\s+|\s+$/g, ""
    startBuild(robot, msg, target, branch) # previously this line wasn't passing in robot

  robot.respond /deploy\s*(.+)/i, (msg) ->
    target     = msg.match[1] or null
    # only fire when the target is a single word; commands that include a branch are matched by the listener above
    if !/.+\s.+/i.test(target)
      startBuild(robot, msg, target) # previously this line wasn't passing in robot

  • These kinds of oddities are why we want to switch away to Botkit.
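
For anyone not familiar with Hubot, here's a tiny illustration of those two HTTP entry points. The URL is a placeholder; the point is just that msg.http is available on the response object inside a listener callback, while robot.http lives on the robot object, so any helper defined outside the callback needs robot passed in explicitly.

module.exports = (robot) ->
  robot.respond /ping deploy service/i, (msg) ->
    # option 1: the response object inside the listener has its own http client
    msg.http('https://deploy-service.example.com/health').get() (err, res, body) ->
      msg.send "msg.http says: #{body}"
    # option 2: the robot object has the same client, but helpers defined
    # outside this callback can only use it if robot is passed in as an argument
    robot.http('https://deploy-service.example.com/health').get() (err, res, body) ->
      msg.send "robot.http says: #{body}"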

13:48 One more missing reference error to fix. Somehow this wasn't picked up by the linter :(

13:53 And the parallel deployments are working! But somehow the deployments to Kubernetes themselves aren't going through. Scratches head. Something weird is going on. Will dig in.

  • Taking another break here for a bit

14:20 Back and digging into why the deployment to Kubernetes isn't working.

14:34 After looking up the status of the deployment using helm status <name-of-release>, I discovered that the load balancers aren't getting allocated. Looks like we may have hit our load balancer limit for the region. Darn.

14:36 Annnd we've done it! After deleting some test deployments that were sitting around using up load balancers, the deployment went through successfully. Need to test if subsequent updates will work as well.

  • It does! We are done here! :D

Posted on November 19, 2018 by Adnan Issadeen