Building an SDK for Robots

or, Building a platform agnostic cross-compiling toolchain for an ARM based robot.

In the last couple of months I’ve had the pleasure of working with a young and very interesting robotics company here in Beijing. They are called Vincross, and they are building a robot called HEXA.

What’s interesting about this startup is that they are providing HEXA owners with an SDK which they can use to build their own applications for the HEXA, called Skills.

HEXA owners can then publish their Skills to the Skill Store, where they can also download other developers’ Skills.

A Skill consists of two parts:

  • Remote - A web application running on a mobile device used to control the HEXA remotely.
  • Robot - A Golang application running on the robot, typically where the core Skill logic lives.

When I arrived at Vincross, they had just shipped their first batch of HEXAs to customers, together with an SDK and a command-line interface for Skill development.

The Skill development workflow looked like this:

  1. User runs mind init and a Skill project is scaffolded.
  2. User writes some Golang code and JavaScript code.
  3. User runs mind run and code is packaged into a .mpk file which is uploaded to the robot, compiled and then executed.

The reason it had to be compiled on the robot is that the robot uses an ARM processor, while the developer’s machine most likely uses an x86 processor.

Golang supports cross-compiling to the ARM architecture, but abstracting away the build process of Golang applications on all the platforms supported by MIND SDK (Windows, Linux and macOS) is not trivial. Add to that the compilation of C++ libraries like OpenCV, and bindings to these libraries using SWIG/CGO, and it’s easy to see why the decision of “let’s just compile on the robot instead” makes a lot of sense.

The benefits we would reap from cross-compiling are:

  • Ability to build third-party or non-Golang libs into Skills.
  • Skills can theoretically be developed in any language.
  • Shorter build times.
  • Skills source code can be proprietary and closed.

As we all know, Apple supports cross compilation of iOS applications both to the simulator running on x86 and to the actual phone, which runs on ARM. However, it’s easy to see why it’s not possible to develop iOS apps on Windows or Linux. Apple just doesn’t want to spend the time porting their own toolchain, dealing with the ins and outs of a 3rd party operating system, and keeping up with breaking changes when they already have their own hardware, operating system and Xcode.

So how can we build a cross-compiling toolchain that will support cross compiling to x86 and ARM and at the same time be platform agnostic?

We do virtualization where it’s needed. And who does that? Docker does.

As long as we can get the whole cross-compiling toolchain working in Linux, we can ship mind as a binary whose responsibilities are very simple:

  1. To make sure Docker is installed
  2. To download the latest mindcli image
  3. To forward mind subcommands into the mindcli docker container.

So as far as platform agnosticism goes, we trust Docker to provide us with that abstraction, and we pray that they do not mess up too often.
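The three responsibilities above can be sketched as a small shell wrapper. This is only an illustration of the idea, not the real CLI: the image name `vincross/mindcli`, the `/skill` mount point, and the `$DOCKER` indirection (which lets the sketch be dry-run with a stub) are all my own placeholders.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the mind wrapper; image and mount names are
# placeholders, not the real SDK's.
set -eu

DOCKER="${DOCKER:-docker}"
MINDCLI_IMAGE="${MINDCLI_IMAGE:-vincross/mindcli}"

mind() {
  # 1. Make sure Docker is installed.
  command -v "$DOCKER" >/dev/null || { echo "docker not found" >&2; return 1; }
  # 2. Download the latest mindcli image.
  "$DOCKER" pull "$MINDCLI_IMAGE"
  # 3. Forward the subcommand into the mindcli container with the
  #    current Skill directory mounted in.
  "$DOCKER" run --rm -v "$PWD":/skill "$MINDCLI_IMAGE" "$@"
}
```

Everything interesting then happens inside the container, and the wrapper itself never needs to know about cross compilers at all.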

Alright, let’s dig into the implementation details:

Cross compiling C/C++

The first goal was to cross-compile C/C++ applications; more specifically, we wanted to cross-compile OpenCV, since it has a lot of features that are useful when you are building a robot that is supposed to visually understand the world.

We decided that, if we manage to cross compile OpenCV and get Go bindings to OpenCV working, our users should be able to do the same for any other library of their choice.

To cross-compile C++ for ARM, all you need is the correct gcc cross-compiling toolchain for your ARM processor. In our case, the HEXA is equipped with an ARMv7 processor with support for hardware floating point calculations. Thus, we want the arm-linux-gnueabihf version of the cross-compiling tools.

FROM ubuntu:14.04
ENV CROSS arm-linux-gnueabihf
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
    unzip \
    wget \
    git \
    gcc-${CROSS} \
    g++-${CROSS} \
    cmake \
    pkg-config \
    && apt-get clean && apt-get autoremove --purge
# Setup cross compilers
ENV AS=/usr/bin/${CROSS}-as \
    AR=/usr/bin/${CROSS}-ar \
    CC=/usr/bin/${CROSS}-gcc \
    CPP=/usr/bin/${CROSS}-cpp \
    CXX=/usr/bin/${CROSS}-g++

We also want to install Go and set it up for cross compilation.

# Install Golang amd64
RUN wget https://storage.googleapis.com/golang/${GOVERSION}.linux-amd64.tar.gz && \
    tar -C /usr/local -xzf ${GOVERSION}.linux-amd64.tar.gz && \
    rm ${GOVERSION}.linux-amd64.tar.gz
# Install Golang armv6l
RUN wget https://storage.googleapis.com/golang/${GOVERSION}.linux-armv6l.tar.gz && \
    tar -xzf ${GOVERSION}.linux-armv6l.tar.gz && \
    cp -R go/pkg/linux_arm /usr/local/go/pkg/ && \
    rm -fr go && rm -frv ${GOVERSION}.linux-armv6l.tar.gz
# Configure Golang
ENV GOOS=linux \
    GOARCH=arm \
    GOARM=7
ENV PATH=${PATH}:${GOPATH}/bin:/usr/local/go/bin

Above is a snippet of the Dockerfile for our open-sourced cross-compiler image. The full version also ensures that packages installed later with apt-get will include the ARM architecture version of each package, which is required when installing the build dependencies of OpenCV.

With this Dockerfile in place and built, all we have to do is docker run it with the OpenCV source mounted into the container, install a few dependencies with apt-get, and execute cmake against the toolchain file that OpenCV provides, like so:

$ apt-get update && apt-get install -y libavcodec-dev ...
$ cmake \
	-DCMAKE_TOOLCHAIN_FILE=../arm-gnueabi.toolchain.cmake \
	../../.. && make && make install

After a pretty long compile time (luckily, we only have to compile once), the ${OPENCV_ARTIFACTS_DIR} now contains all of OpenCV’s dynamic libraries and header files.

Golang bindings

To generate Golang bindings for C libraries, one would typically use CGO, and when binding to C++ libraries, one would use SWIG. Writing Golang bindings can be a pretty mundane process, and in our case we were lucky: some cool people had already gone through the effort of writing Golang bindings for OpenCV using SWIG.

Now, to cross compile our Golang application using the Golang bindings to OpenCV, all we need to do is to docker run our container with our source mounted, tell Go how to find its dependencies…

export PKG_CONFIG_PATH="${OPENCV_ARTIFACTS_DIR}/lib/pkgconfig"

…and then compile our application as usual.

$ go build -o opencvexample opencvexample.go

Running it on the HEXA

If OpenCV were compiled as a static library, we would just have to upload the binary to the HEXA, execute it and be done with it.

However, since we are linking against a C++ library, we no longer have a statically linked executable, and at run time it will try to find the shared libraries it depends on.

(We could build OpenCV against musl-libc instead, but since the HEXA is running Ubuntu 14.04, we already have glibc anyway.)

But it’s an easy problem to solve:

  1. Pack the binary and the ${OPENCV_ARTIFACTS_DIR}/lib into a zip file.
  2. Upload the zip file to the robot and unzip it.
  3. On the robot, tell the runtime shared-library loader to look for libraries in our lib/ directory, then execute the application.
    LD_LIBRARY_PATH=`pwd`/lib ./opencvexample

Done! We can now cross-compile C/C++/Golang applications on our PC and pack them together for upload and execution on the robot.

However, we for sure don’t want our dear users to have to go through this whole process, so we need to provide them with some sweet abstractions:

Through the MIND command-line interface, users have everything they need to develop Skills for the HEXA.

MIND Software Development Kit

Let’s start by showing a very basic example of a Skill. All it does is make the HEXA stand up.

package StandUpSkill

import (
	"mind/core/framework/drivers/hexabody"
	"mind/core/framework/skill"
)

type StandUpSkill struct {
	skill.Base
}

func NewSkill() skill.Interface {
	return &StandUpSkill{}
}

func (d *StandUpSkill) OnStart() {
	hexabody.Start()
	hexabody.Stand()
}

func (d *StandUpSkill) OnClose() {
	hexabody.Close()
}

As seen above, we import skill and the hexabody driver, which we use to make the HEXA stand up on its six legs. These two packages are part of the MIND Binary Only Distribution Package, which previously came prebaked on the HEXA.

Since we are now compiling inside of a Docker container instead of on the HEXA, we no longer need to ship the package prebaked on the HEXA. Instead, we just put it inside the GOPATH of the cross-compiling container.

Let’s delete some code

All of the things that the previous CLI used to do, like entrypoint generation, packaging, uploading, installation, execution, log retrieval, communication with the HEXA over websockets, etc., can now be accomplished with Linux tools and shell scripts instead of thousands of lines of Golang code.

The key to this functionality is this Golang function.

func (mindcli *MindCli) execDocker(args []string) {
	cmd := exec.Command("docker", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.Stdin = os.Stdin
	err := cmd.Run()
	if err != nil {
		log.Fatal(err)
	}
}
We can implement mind build by doing a docker run on the container with the current folder mounted, injecting some environment variables, and executing the following shell script inside the container:

#!/usr/bin/env bash
set -eu
export PKG_CONFIG_PATH="/go/src/skill/robot/deps/lib/pkgconfig"
export CGO_CFLAGS="-I/go/src/skill/robot/deps/include"
export CGO_LDFLAGS="-L/go/src/skill/robot/deps/lib"
go build -o robot/skill skillexec
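The docker run that wraps the script above might look roughly like this. To be clear, this is a hypothetical sketch: the image name `mindcli`, the script path `/mindcli/build.sh`, and the `$DOCKER` indirection (for dry-running with a stub) are my own assumptions; only the mount path and environment variables follow from the script and Dockerfile shown earlier.

```shell
# Hypothetical `mind build` invocation; image and script path are assumptions.
DOCKER="${DOCKER:-docker}"

mind_build() {
  "$DOCKER" run --rm \
    -v "$PWD":/go/src/skill \
    -e GOOS=linux -e GOARCH=arm -e GOARM=7 \
    mindcli /mindcli/build.sh
}
```

The Skill source lands where the build script expects it (/go/src/skill), and the GOOS/GOARCH/GOARM variables steer the Go toolchain to ARM output.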

We can pack the Skill together as an .mpk file using zip like this:

zip -r -qq /tmp/skill.mpk manifest.json remote/ robot/skill robot/deps robot/assets

And then serve the .mpk file to the HEXA using Caddy:

#!/usr/bin/env bash
set -eu
cat >/tmp/Caddyfile <<EOL
:${SERVE_MPK_PORT}
root .
rewrite / {
        regexp .*
        to /${MPK}
}
EOL
caddy -quiet -conf="/tmp/Caddyfile"

In addition, all of the websocket logic was rewritten and simplified using wsta, the WebSocket Transfer Agent:

$ echo "hello hexa" | wsta ws://my.hexa

As I mentioned earlier, the only thing the MIND CLI has to do is forward subcommands into the docker container. The only exception is scanning the local network for HEXAs.

When scanning the network for HEXAs, the CLI sends UDP packets to the network’s multicast address and waits for a UDP packet to be sent back by the HEXA containing its name and serial number. When doing this, we cannot NAT through to the Docker container, since that would cause us to lose the packet’s source address. (Maybe we will run the container on the host network in the future.)

Wrapping it all up

The MIND SDK consists of the following parts:

  • XCompile Docker Image - An image preconfigured for cross compilation of C/C++ and Golang code to ARM architecture.
  • MIND Binary Only Distribution - Used by the Skill to interface with the HEXA hardware.
  • MIND JavaScript SDK - Used by the remote part of the Skill to talk to the HEXA.
  • Templates and shell scripts used to generate the Skill main entrypoint.
  • Makefiles and shell scripts used to compile and pack a Skill with its 3rd party dependencies and assets into an mpk file.
  • Scripts to upload, install and execute Skills on the HEXA.
  • Scripts to retrieve logs and communicate with the HEXA in realtime using websockets.

All of the parts listed above go through different build pipelines, to finally be packaged into a single Docker image published on Docker Hub.

In front of this Docker image stands the MIND command-line interface, abstracting away all of the docker commands.

Since Docker is providing the host operating system abstraction layer, we had, after getting it to run on macOS and Linux, close to zero issues getting the whole toolchain working on Windows, both with and without Hyper-V.

Here is an example of how a user would go about developing a new Skill for the HEXA using the SDK:

$ mind scan
Susan
Catherine
Andy
$ mind set-default-robot Andy
$ mind init HelloWorldSkill
$ cd HelloWorldSkill
$ vim robot/src/HelloWorldSkill.go
# do some coding
$ mind build && mind pack && mind run
Installation started
Uploading 0%
Uploading 21%
Uploading 42%
Uploading 65%
Installing 80%
Installation successful!
Point your browser to: http://localhost:7597
Connected!
Battery: 100% [Charging]

To use OpenCV inside a Skill, we can create a simple Makefile or shellscript for building OpenCV:

apt-get update && apt-get install -y libavcodec-dev ...
	-DCMAKE_TOOLCHAIN_FILE=../arm-gnueabi.toolchain.cmake \
	../../.. && make && make install

and build OpenCV inside the cross-compiling container by executing mind x make, then copy the generated libraries and headers into the robot/deps folder before building the Skill:

$ cd OpenCV
$ mind x make
$ cd ..
$ cp -R OpenCV/artifacts/lib OpenCV/artifacts/include robot/deps/ 
$ mind build 

And lastly, by executing mind upgrade, the latest version of the MIND SDK container will be pulled down from Docker Hub.

It’s open source!

We open-sourced the whole MIND Software Development Kit on GitHub and hope that it will be useful to HEXA owners, as well as to other robotics developers.

If you have any comments or suggestions please feel free to post them in the comment section below.

See you next time!

Monorepo, Shared Code and Isolated Fast Docker Builds

Docker does not make it easy for those who want to do isolated builds of separate applications using shared code in a monorepo.

There are probably many ways to solve it, but for me, finding an approach that works consistently for all of the projects and languages in our code base was not trivial. Here I’m going to present a solution that works for us at Traintracks.

This solution is agnostic to language, package manager, build system, project hierarchy and can be implemented in the same way throughout your whole stack. (Please do comment if you notice a case where it’s not)

So here it goes!

Cached dependencies

If you’ve ever used Scala and SBT, you probably know that you’ll have enough time to grow and cut your toenails (you might even start eating them) between builds if your build cache gets reset on each build.

The immutable nature of Docker, plus the fact that SBT does not have a package.json or requirements.txt file like npm/pip, means that we can’t cache our dependencies easily.

Every time we update some code we are back to zero, because the downloading of dependencies and the building of code happen in the same step.

Build containers to the rescue?

It goes pretty much like this.

  1. You create a container with all the tools to build your application.
  2. You run the container and tell it to build your application with your project folder mounted into a folder in the container.
  3. You execute your build inside of the container and everything is persisted on your host for your next build.
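The three steps above boil down to one docker run with the project mounted in. The sketch below is generic, not from our codebase: the image name `my-builder`, the `/workspace` mount and the `$DOCKER` indirection (so it can be dry-run with a stub) are illustrative.

```shell
# Sketch of a build-container invocation; image and paths are illustrative.
DOCKER="${DOCKER:-docker}"

build_in_container() {
  # Mount the project into the build container and compile there;
  # the artifact lands back on the host through the mount.
  "$DOCKER" run --rm -v "$PWD":/workspace -w /workspace my-builder \
    go build -o output/app ./...
}
```

Because the source tree is a mount rather than a COPY, nothing needs to be rebuilt into the image between edits.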

All good? Not really. Unless you also mounted your ~/.m2 or ~/.ivy2 folder (or redirected them somewhere else), and you also don’t mind keeping the same build artifacts shared between your host and the Docker container.

Adding to that, if you are in Vagrant, share your workspace volume with your host, and have not set up NFS, then be prepared for really slow build times.

Besides, you still want your static dependencies cached away and separate from your dynamic dependencies, so that your team’s code can be built by all engineers regardless of how broken the internet is at that point. This is particularly relevant if you are behind a corporate firewall or somewhere with internet connectivity issues.

That means that your build container needs to already come shipped with the third party dependencies required before we execute the build in it.

To summarize, we need to do an initial build of the application inside the container before it can act as a pre-cached build container. As dependencies update, the build container is rebuilt.

Let’s continue to the next requirement.

Shared code

Maybe you made a nice library with some transformations that you want to use both in your data ingestion app and in your query application. On top of that, maybe one of the engineers on your team enjoys sitting in IntelliJ with all the Scala projects open in the same workspace, modifying the shared library code and recompiling both projects from within the IDE.

How do we build individual applications in isolation when they have shared dependencies above them in the project hierarchy?

Let’s imagine a monorepo and try to figure out how to build coolapp and awesomeapp, which both share the dependencies lib1 and lib2. We are going to use Golang for this example instead of Scala (for simplicity), but the same concepts apply.

├── coolapp
│   ├── coolapp.builder.dockerfile
│   └── ...
├── awesomeapp
│   ├── awesomeapp.builder.dockerfile
│   └── ...
├── lib1
│   └── ...
├── lib2
│   └── ...
└── i_am_too_fat_for_your_build_context
    └── ...

We can’t just execute docker build -t coolapp . inside of coolapp, because lib1 and lib2 are outside of its context.

However, we can move the context up one directory and specify the dockerfile, like this:

$ docker build -t coolapp -f coolapp/Dockerfile . 

We are getting there. But wait: there is a folder that says it’s too fat for your build context, and we are not even depending on it.

What if we have so many projects in this repo that the size of the build context we send to docker ends up being a huge build time bottleneck?

Typically we would add a .dockerignore file that tells docker which files to ignore when uploading the context, but that won’t work here, since what we want to ignore is conditional (depending on which app we are building).

So what we need to do is cherry-pick our build context and send it to docker (note that we’re using GNU Tar, not BSD Tar):

$ tar -zcf - ../lib1 ../lib2 | docker build -t coolapp-builder -f coolapp/coolapp.builder.dockerfile -

GNU Tar also takes --exclude-from, where you can pass a .gitignore or a .dockerignore. Note that .gitignore has expansion rules not supported by Tar, so you are either going to have to tar dependencies individually and concatenate them, ask git for the relevant files, or align on a unified ignore pattern across your libraries.
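To make the cherry-picking reusable per app, a small helper can stream exactly the app plus its declared libraries to docker. This is a sketch run from the repo root (unlike the command above, which ran from inside coolapp); the function name and the `$DOCKER` indirection, which allows dry runs with a stub, are my own.

```shell
# Sketch: ship only the named directories as the build context.
DOCKER="${DOCKER:-docker}"

build_app() {
  # Usage: build_app coolapp lib1 lib2  (run from the repo root)
  local app="$1"
  # Only the listed directories enter the context; the fat unrelated
  # folder is never sent to the docker daemon.
  tar -zcf - "$@" \
    | "$DOCKER" build -t "$app-builder" -f "$app/$app.builder.dockerfile" -
}
```

Each app's Makefile can then declare its own dependency list and call the same helper.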

Let’s have a look at the Dockerfile in coolapp:

FROM golang:1.6
RUN apt-get update && apt-get install -y rsync
ADD . /go/src/traintracks/
WORKDIR /go/src/traintracks/coolapp
RUN go get ./...

Let’s build the container, jump into it, and look at the contents of our GOPATH:

$ docker run -it coolapp-builder bash
$ tree $GOPATH
|-- bin
|   |-- coolapp
|-- pkg
|   |-- linux_amd64
|       |-- github.com
|       |   |-- Sirupsen
|       |       |-- logrus.a
|       |-- traintracks
|           |-- lib1.a
|           |-- lib2.a
|-- src
    |-- github.com
    |   |-- Sirupsen
    |       |-- logrus
    |           |-- ...
    |-- traintracks
        |-- coolapp
        |   |-- coolapp.builder.dockerfile
        |   |-- coolapp.dockerfile
        |   |-- coolapp.go
        |-- lib1
        |   |-- lib1.go
        |-- lib2
            |-- lib2.go

It has downloaded our dependencies from the internet and built coolapp-builder with our cherry-picked dependencies.

Now suppose we have edited a line of code in lib1 and want to rebuild coolapp. We are going to execute the container with the build context mounted at /mount and tell it to rsync /mount into the corresponding folder in the GOPATH:

$ rsync -auiv --filter=\":- .gitignore\" /mount/ /go/src/traintracks/

Remember what I said about passing a .gitignore to Tar; the same applies here.

Now we just have to build the app again with go get ./..., and unless you have added new internet dependencies since the last build, the build will be as fast as your CPU and disk allow.

The final step is to copy our artifacts to somewhere in the mounted folder:

$ cp -v /go/bin/coolapp  /mount/coolapp/output/

Back on our host we can inspect the folder again:

├── coolapp.builder.dockerfile
├── coolapp.go
└── output
    └── coolapp

So there is your coolapp binary, ready for you to throw into a plain Linux container without any build tools or source code. This will keep your containers lean and avoid potential leakage of code.

coolapp.dockerfile might look something like this:

FROM ubuntu:14.04
ADD output/* /usr/local/bin/
CMD coolapp

Good ol’ Makefiles

That was a lot of steps, and it might seem like a very troublesome process, but we can actually wrap all of it in a Makefile and work our way towards a generalised solution that works for all of our projects.

I have created an example repository that you can clone and try out:

$ git clone
$ make builder   # Creates the builder container
$ make build     # Builds project using builder container
$ make runner    # Creates the runner container
$ make run       # Runs coolapp
$ make all       # Runs all of the previous steps
$ make           # Runs all targets except builder

To summarise what all of this gave us:

  • Pre-cached dependencies without a requirements file.
  • Separation between build/run containers.
  • No dirty artifacts on host.
  • Support for a project hierarchy of your choice.
  • Fast builds on shared disks in Vagrant.
  • A unified build system for all your applications.

If you think you have a better solution than the one I presented here, or have some cool improvements, please leave me a comment! I’m more than happy to learn how others have tackled these problems.

Building a Devbox with Packer, Vagrant and Ansible

In the previous article, Safeguarding your deployments with Packer, we explained in theory how to use Packer to achieve immutable server configurations.

At Traintracks, we not only use Packer for server deployments but also for our development environment.

There are many benefits to this, such as:

  • Every engineer’s development environment is the same.
  • New engineers can start being productive from day one.
  • What works on my machine will work on any other engineer’s machine.
  • What works on my machine will (probably) work in production.
  • The development environment is host operating system agnostic (it even works for Windows users).

When using Packer for server deployments you want to keep all of your server configurations as immutable as possible. However, for a development environment, it’s just not practical to throw away your devbox and build a new one every time something in the dev environment has been updated.

Instead of only optimising for immutability and consistency we also need to optimise for efficiency (developer hours cost more than computer hours).

This is why we are going to introduce two new concepts here:

  • Static dependencies (dependencies that do not get updated very often, e.g. the operating system, system packages, and third-party software like Docker, Ansible, git, curl, etc.)
  • Dynamic dependencies (in-house tooling and configuration files that are constantly iterated on)

We are going to use Packer to pack all of our static dependencies and Ansible to provision our dynamic dependencies inside of Vagrant.

A simple example to clarify what I mean:

At Traintracks we have a remote working culture but most of our engineers are in Beijing.

That means that everything that requires free and fast access to the greater internet goes into our static dependencies (Packer). Third-party installation scripts might be pulling from Amazon S3 (blocked in China), and Kubernetes is downloaded from Google servers, which means it is also blocked.

Due to internet connectivity and speed limitations we want these types of dependencies to be downloaded and configured once and then distributed to all the team members without anyone having to jump on a VPN to download software dependencies.

Of course we could host these dependencies on our own servers, and we very often do, but for dependencies that do not change a lot (our static dependencies) we prefer to grab them directly from the correct source once and distribute them everywhere, just like we do for our production servers.

So, enough talking and let’s get to it!


  • Packer 0.10 or above
  • Vagrant 1.8.1 or above.
  • Ansible 2.0 or above.

Assuming you’re on a Mac and use Homebrew:

$ brew cask install virtualbox
$ brew cask install vagrant
$ brew install packer
$ brew install ansible

Packer (Static dependencies)

We have prepared a boilerplate for a Packer configuration that is very similar to the one we use at Traintracks, which we will use as our base.

This boilerplate will give you a box containing:

  • Ubuntu 16.04
  • VirtualBox Guest Additions
  • Docker, kubectl and kargo
  • git, wget, curl, vim, zsh, htop, tmux, ntp

$ git clone
$ cd devbox

Let’s start by inspecting the packer folder:

├── ansible
│   └── playbook.yml
├── devbox.json
├── files
│   └── motd
├── http
│   └── preseed.cfg
└── scripts

devbox.json is the file that tells Packer how to build the devbox, which files to copy and which scripts to run. You can also add builders for other image types (EC2, VMware, etc.) in here. If you want to use another base operating system, you define that here too, providing a URL and checksum for the base image.

preseed.cfg will be fetched by the Ubuntu installer from a local web server that Packer spins up; it automates the Ubuntu installation by automatically providing answers to all of the installation prompts.

The scripts folder contains steps that make little sense to perform with Ansible, e.g. installing Ansible itself and doing the final cleanup before exporting the box.

playbook.yml is the Ansible playbook where you define packages to be installed and other configuration.

To customise the devbox to your needs you will mainly be interested in devbox.json and playbook.yml.
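As a rough sketch, the Packer template ties these pieces together along these lines. The builder fields are abbreviated and the script name is illustrative; devbox.json in the boilerplate is the authoritative version.

```json
{
  "builders": [{
    "type": "virtualbox-iso",
    "iso_url": "...",
    "iso_checksum": "...",
    "http_directory": "http",
    "boot_command": ["..."]
  }],
  "provisioners": [
    { "type": "shell", "scripts": ["scripts/install-ansible.sh"] },
    { "type": "ansible-local", "playbook_file": "ansible/playbook.yml" }
  ]
}
```

The http_directory entry is what serves preseed.cfg to the installer, and the shell provisioner runs before ansible-local so that Ansible exists inside the box when the playbook executes.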

Now we can go ahead and build the devbox with packer.

$ cd packer
$ packer build devbox.json

To see the installation progress you can either watch the preview screen in the VirtualBox UI (select Show from the Machine menu), or set headless to false in the devbox.json file.

Dynamic dependencies

As mentioned earlier, your team might have tooling or configuration that is updated more often than you want to build a new box with Packer, and which you want to propagate throughout your team.

One example could be a company-wide SSH config or a common zshrc file. The boilerplate contains a simple example of how this is done.

Let’s have a look inside the Vagrantfile:

$ cd ..
$ cat Vagrantfile

Check out the lines between # PROVISION START and # PROVISION END.

The first three lines copy your host machine’s default SSH keys into the devbox, so that you can access your remote machines from the devbox as you would from your host machine. We also copy your git config, so that you can make git commits from within the devbox.

After that, you can see that we call Ansible to do the rest of the provisioning using the ansible/playbook.yml file:

  - hosts: all
    tasks:
      - name: Copy zshrc
        copy: src=files/zshrc dest=/home/vagrant/.zshrc
      - name: Set shell to zsh
        become: yes
        user: name=vagrant shell=/bin/zsh

Currently all it does is set the default shell to zsh and copy a zshrc file into the vagrant home folder, but it serves as a template for you to add all of the other tools and configuration that go into the devbox.

For example, you can add a company-wide SSH config that is pushed to git; all your teammates then have to do to get the new config is a git pull followed by a vagrant provision.

Once you notice that a dynamic dependency is being updated less frequently, you can move it to the static dependencies instead (a mere copy-paste between two Ansible files).

Now let’s add the box to Vagrant, provision it and start it up!

$ cd ..
$ vagrant box add devbox packer/builds/
$ vagrant up

If everything went well you should be greeted with a shell looking like this.

Safeguarding your deployments with Packer

For me, one of the greatest challenges of building our solution was making sure we had the ability to deploy on-premise or on any cloud provider.

At the root of all the tools we use to make this possible is Packer.

“Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration.” - Hashicorp

By using Packer, we can now pack all of our applications and their dependencies into a deployable image, through a single configuration, that can be easily installed on our cloud clusters or on-premise bare-metal clusters.

Traditionally when deploying a cluster of machines you often do the provisioning through a configuration management tool like Ansible, Puppet or Chef.

But whether you are provisioning thousands of servers or only a dozen, not only will it take a considerable amount of time, but things can also fail at every step along the way, and they very often do, even with idempotent provisioning scripts.

That’s because even if it ran correctly last time, maybe links on the internet have changed, or an external software package was updated during provisioning and no longer works. You end up trusting a lot of the internet to be stable, which just does not happen in reality.

How one developer just broke Node, Babel and thousands of projects in 11 lines of JavaScript

By using Packer to pack your OS and dependencies into one image, you have defended against the instability of the outside world without sacrificing reproducibility. Throw Packer into your CI/CD pipeline and you can achieve an immutable server configuration and not have to worry about any of your cluster nodes ending up in an inconsistent state. When one gets ill you don’t nurse it; you throw it away and get a new one, aligning with the Pets vs Cattle analogy.

We have seen in theory how Packer can be applied to your production servers, but can the same concept be applied to your development environment?

The short answer is yes, so stay tuned (feel free to sign up for our mailing list), because in the next article we will get you familiar with Packer while setting up a “devbox” for you and your team. It’s been a great time saver for me, and I hope it will help you too.

Check out a follow-up post on how to build a devbox with Packer, Vagrant and Ansible.