In a world where CI pipelines (mostly) run in isolated, disposable VMs, where every new workload is containerized, and where supply chain attacks are increasingly frequent, I find it mind-boggling that a lot of dev/ops workstations are still a nightmarish melting pot of `brew`-, `npm`- and `pip`-installed tools. Those come with tens if not hundreds of dependencies (which invariably cause headaches a few months down the line), and this whole zoo runs in your `$HOME` with your own privileges. What’s to prevent any of these from going rogue and installing a backdoor somewhere in your `.zshrc` or anywhere else?
Two years ago, when I bought my current laptop and started fresh, I decided to try to keep it as clean as possible. The more minimalist the setup, the longer I would go without dealing with dependency hell, Python version discrepancies and, hopefully, security issues. It’s a Mac, so the ideas here are geared towards macOS, but they should work on any UNIX-like system.
I’m no security expert, so I won’t pretend I have all the answers: this piece is more of an RFC than a how-to, and I’d love to hear your thoughts and tips on the matter.
The idea
- Install as few third-party apps as possible
- No `pip`, no `npm`, no `brew`, and certainly no `curl | sh`
- Every tool gets its own Docker image, and runs in an ad-hoc disposable container
- Dev projects also run inside containers
I find containers ideal for dealing with dev tools, as they’re mostly isolated from the host by default and let the user define explicit interfaces: sharing specific environment variables, precise bind mounts (possibly read-only), port mappings, etc. They’re also disposable, so you can easily make them single-use to prevent any leak between projects or environments and avoid long-term compromise.
How it works
An example:
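A minimal sketch of that wrapper, assuming `craws` has already exported the standard `AWS_*` variables into the session (the exact image and flags may differ):

```sh
# Sketch: forwards the craws-vended credentials, mounts the config
# read-only, and scopes the container to the current directory.
aws() {
  docker run --rm -i \
    -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
    -e AWS_REGION -e AWS_DEFAULT_REGION \
    -v "$HOME/.aws/config":/root/.aws/config:ro \
    -v "$PWD":"$PWD" -w "$PWD" \
    amazon/aws-cli "$@"
}
```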
This function lives in my `~/.zshenv`. It’s a small wrapper around `docker run`, which runs the AWS CLI from the official image and mimics the host environment, except it’s scoped to the current directory (useful for `s3 cp`) and has read-only access to my `~/.aws/config` file. The credentials come from `craws`, another small tool I wrote that vends temporary ones from AWS IAM Identity Center (formerly SSO) and exports them into the current terminal session.
Every tool I use (`ansible`, `terraform`, `kubectl`…) has its own function, sometimes with a custom image, and always with as few privileges as possible. For those I use less frequently, there’s always `docker run --rm -it -v "$PWD:$PWD" -w "$PWD" ubuntu bash` to get a shell in a clean temporary Ubuntu container, `apt install` what I need, and run it.
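As a sketch of that per-tool pattern, a `terraform` wrapper could look like this (the official `hashicorp/terraform` image uses `terraform` as its entrypoint; the forwarded variables are illustrative):

```sh
# Sketch: project files come from a $PWD bind mount, credentials from
# the environment; nothing else from the host is visible.
terraform() {
  docker run --rm -it \
    -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
    -v "$PWD":"$PWD" -w "$PWD" \
    hashicorp/terraform "$@"
}
```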
The UNIX philosophy of composability still applies as `stdout` and `stderr` are forwarded: I can pipe one tool’s output to another, and to that end, `jq` also has its own function:
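Along these lines (a sketch, using the locally built image described below):

```sh
jq() {
  # -i but no -t: allocating a TTY would mangle piped output.
  docker run --rm -i localhost:5000/jq "$@"
}
```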
and Dockerfile:
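Possibly as simple as this (the Alpine base is an assumption):

```dockerfile
FROM alpine:3
RUN apk add --no-cache jq
ENTRYPOINT ["jq"]
```

Built once with `docker build -t localhost:5000/jq .`.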
The image’s tag is prefixed with `localhost:5000`, a registry which doesn’t exist, to ensure I never pull it from Docker Hub and only use the locally built one.
Things sometimes get funky, for example with port mappings: tools must bind to 0.0.0.0 inside the container for Docker to be able to forward traffic to them. An example with `kubectl`:
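A sketch, assuming a locally built image tagged `localhost:5000/kubectl`, a read-only kubeconfig mount, and an illustrative port and service name:

```sh
kubectl() {
  docker run --rm -it \
    -v "$HOME/.kube":/root/.kube:ro \
    -p 8080:8080 \
    localhost:5000/kubectl "$@"
}

# port-forward listens on 127.0.0.1 by default, which is unreachable
# through Docker's -p mapping; --address 0.0.0.0 fixes that:
kubectl port-forward --address 0.0.0.0 svc/my-service 8080:80
```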
What's next
This has been working pretty well for me so far, but it’s highly customized to my own workflow and not very portable. While writing this post, it occurred to me that this could become a project in its own right, with a central repository of “recipes” (Dockerfiles or images plus wrapper code) and some form of CLI to manage them.
So I’ve streamlined what I had a bit, and here it is: Toolship is now on GitHub!