A (More) Secure Workstation

In a world where CI pipelines are (mostly) run in isolated, disposable VMs, where every new workload is containerized and where supply chain attacks are increasingly frequent, I find it mind-boggling that a lot of dev/ops workstations are still a nightmarish melting pot of brew-, npm- and pip-installed tools. Those come with tens if not hundreds of dependencies (which invariably cause headaches a few months down the line), and this whole zoo runs in your $HOME with your own privileges. What’s to prevent any of these from going rogue and installing a backdoor in your .zshrc or anywhere else?

Two years ago, when I bought my current laptop and started fresh, I decided to try to keep it as clean as possible. The more minimalist the setup, the longer I would go without dealing with dependency hell, Python version discrepancies and, hopefully, security issues. It’s a Mac, so the ideas here are geared towards macOS, but they should work on any UNIX-like system.

I’m no security expert, so I won’t pretend I have all the answers: this piece is more of an RFC than a how-to, and I’d love to hear your thoughts and tips on the matter.

The idea

  • Install as few third-party apps as possible
  • No pip, no npm, no brew, and certainly no curl | sh
  • Every tool gets its own Docker image, and runs in an ad-hoc disposable container
  • Dev projects also run inside containers

I find containers ideal for dev tools, as they’re mostly isolated from the host by default and let you define explicit interfaces: share specific environment variables, precise (possibly read-only) bind mounts, port mappings, etc. They’re also disposable, so you can easily make them single-use to prevent any leak between projects or environments and avoid long-term compromise.
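
In practice this boils down to one shell function per tool. As a generic template (the name, image and variable below are placeholders; real examples follow), the container sees nothing from the host except what’s explicitly shared on the command line:

function safetool () {
    # --rm: single-use container, destroyed on exit.
    # Only the working directory, one read-only config file and one
    # named environment variable are shared with the host.
    docker run --rm -it \
    -v "$PWD:$PWD" -w "$PWD" \
    -v "$HOME/.safetool.conf:/root/.safetool.conf:ro" \
    -e SAFETOOL_TOKEN \
    example/safetool:1.0 "$@"
}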

How it works

An example:

function aws () {
    docker run --rm -it \
    -v "$PWD:$PWD" -w "$PWD" \
    -v "$HOME/.aws/config:/root/.aws/config:ro" \
    -e AWS_ACCESS_KEY_ID \
    -e AWS_SECRET_ACCESS_KEY \
    -e AWS_SESSION_TOKEN \
    -e AWS_PAGER \
    --entrypoint "/usr/local/bin/aws" \
    amazon/aws-cli:2.13.13 "$@"
}

This function lives in my ~/.zshenv. It’s a small wrapper around docker run, which runs the AWS CLI from the official image and mimics the host environment, except it’s scoped to the current directory (useful for s3 cp) and has read-only access to my ~/.aws/config file. The credentials come from craws, another small tool I wrote that fetches temporary credentials vended by AWS IAM Identity Center (formerly AWS SSO) and exports them into the current terminal session.
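
A hypothetical session, once craws has exported the temporary credentials into the shell (the bucket name is made up):

$ aws sts get-caller-identity                  # runs in a fresh container, gone when done
$ aws s3 cp ./report.csv s3://example-bucket/  # works because $PWD is mounted at the same path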

Every tool I use (ansible, terraform, kubectl…) has its own function, sometimes with a custom image, and always with as few privileges as possible. For those I use less frequently, there’s always docker run --rm -it -v "$PWD:$PWD" -w "$PWD" ubuntu bash to get a shell in a clean temporary Ubuntu container, apt install what I need, and run it.
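
That one-liner can itself become a small helper; a minimal sketch (the name and image tag here are arbitrary):

function scratch () {
    # Throwaway Ubuntu shell, scoped to the current directory.
    # Anything installed inside disappears on exit (--rm).
    docker run --rm -it \
    -v "$PWD:$PWD" -w "$PWD" \
    ubuntu:22.04 bash
}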

The UNIX philosophy of composability still applies, as stdin, stdout and stderr are forwarded: I can pipe one tool’s output into another, and to that end, jq also has its own function:

function jq () {
    # -i but no -t: allocating a TTY would break piping into stdin
    docker run --rm -i --pull never localhost:5000/sw/jq "$@"
}

and Dockerfile:

FROM alpine:3.15
RUN apk add --no-cache jq
ENTRYPOINT ["/usr/bin/jq"]

The image’s tag is prefixed with localhost:5000, a registry that doesn’t exist, to ensure it’s never pulled from Docker Hub and only the locally built one is used.
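
Building and composing it then looks like this (the pipe is a hypothetical example):

$ docker build -t localhost:5000/sw/jq .
$ aws ec2 describe-instances | jq -r '.Reservations[].Instances[].InstanceId'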

Things sometimes get funky, for example with port mappings: inside the container, tools must bind to 0.0.0.0 (not 127.0.0.1) for Docker to be able to forward traffic to them. Example with kubectl:

function kubectl () {
    # If the command is 'kubectl proxy', bind to 0.0.0.0 and publish port 8001
    local args=("$@")
    local docker_args=()
    if [ "$1" = "proxy" ]; then
        args+=("--address=0.0.0.0")
        docker_args+=("-p" "127.0.0.1:8001:8001")
    fi

    docker run --rm -it \
    -v "$HOME/.kube:/root/.kube" \
    -v "$PWD:$PWD" -w "$PWD" \
    "${docker_args[@]}" \
    localhost:5000/sw/k8s-tools:latest kubectl "${args[@]}"
}
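
With this, kubectl proxy behaves much like it would on the host: it binds to 0.0.0.0 inside the container, and Docker publishes it on loopback only. A hypothetical check, using two terminals:

$ kubectl proxy                           # terminal 1: the proxy runs in a container
$ curl -s http://127.0.0.1:8001/version   # terminal 2: reaches it through the published port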

What’s next

This has been working pretty well for me so far, but it’s highly customized to my own workflow and not very portable. While writing this post, I realized it could become a project in its own right, with a central repository of “recipes” (Dockerfiles or images + wrapper code) and some form of CLI to manage them.

So I’ve streamlined what I had a bit, and here it is: Toolship is now on GitHub!