My Developer Environment Blueprint

I treat my local environment as a product. If it's brittle, slow, or full of little friction points, I can't build complex systems with any real focus. Over the years I've moved away from endlessly tweaking my setup — there was a period where I spent more time configuring Emacs than writing code in it — toward building a stable, portable platform that just works when I sit down to do serious work: distributed tracing, AI/ML model probing, infrastructure prototyping.

The core philosophy

Three things I've landed on after enough rebuilds:

Portability over customization. I should be able to provision a new machine in about 15 minutes — not because I switch machines constantly, but because "it works on my specific machine with its specific history" is a bad foundation. I've had to set up fresh environments in tight timeframes before and the ones where it took hours were painful in ways that were entirely avoidable.

Text over GUIs wherever it makes sense. If I can do it in a terminal, I can script it. If I can script it, I can automate it. This isn't a philosophy about purity — I use plenty of GUI tools — but for anything I do repeatedly, a CLI is just more composable.

Local should mirror production constraints. My local stack runs in containers, with the same resource limits and network topology I'd expect in production. The "it works on my machine" problem is real and I've been on the receiving end of it enough times that I just removed the excuse entirely.

The actual setup

The source of truth is my dotfiles repository. It's not just config files — it's an idempotent installation script that I can run on a fresh machine and walk away from.
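The core of the script is a pair of idempotent primitives: link a config only if the symlink is missing or stale, and install a tool only if it isn't already on the PATH. A minimal sketch of that pattern (paths and names here are illustrative, not the real install.sh, which also handles Homebrew, nvm, and the rest):

```shell
#!/usr/bin/env bash
# Sketch of the idempotent-install pattern. Uses a throwaway HOME so it
# can be run safely anywhere.
set -eu

tmp_home="$(mktemp -d)"
mkdir -p "$tmp_home/dotfiles"
echo 'export EDITOR=code' > "$tmp_home/dotfiles/zshrc"

link_config() {
  # Symlink only when the link is missing or points elsewhere,
  # so re-running the script is a no-op.
  local src="$1" dest="$2"
  [ "$(readlink "$dest" 2>/dev/null || true)" = "$src" ] || ln -sfn "$src" "$dest"
}

install_if_missing() {
  # Skip anything already on PATH instead of reinstalling it.
  command -v "$1" >/dev/null 2>&1 || echo "installing $1"
}

# Running twice produces the same end state.
link_config "$tmp_home/dotfiles/zshrc" "$tmp_home/.zshrc"
link_config "$tmp_home/dotfiles/zshrc" "$tmp_home/.zshrc"
install_if_missing bash   # already present, so nothing happens
```

Every step in the real script follows this check-then-act shape, which is what makes "run it again to sync" safe.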

Shell and multiplexer

I live in zsh + tmux. The combination lets me persist sessions across reboots and network disconnects, which matters when you're connected to a remote dev environment over an occasionally flaky connection. I have autojump set up to learn my most-visited directories — I type j backend and I'm there, no path memorization required. Tmux sessions are organized by context: client work, open source contributions, infrastructure. I never mix them. Context-switching between projects is disorienting enough without your terminal also being a mess.
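The per-context sessions come down to a tiny helper in my shell rc file. A sketch (the helper name and session names are illustrative):

```shell
# ~/.zshrc fragment: attach to a named tmux session, creating it
# detached first if it doesn't exist yet.
work() {
  tmux has-session -t "$1" 2>/dev/null || tmux new-session -d -s "$1"
  tmux attach -t "$1"
}
# Usage: work client / work oss / work infra
```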

Aliases for high-frequency commands: k for kubectl, g for git, a handful of others. These save a few seconds each time. Over a year that actually adds up to something meaningful, though honestly I mostly keep them because typing kubectl forty times a day feels unnecessary.
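The relevant rc-file lines are as boring as they sound; the "handful of others" below is illustrative:

```shell
# ~/.zshrc fragment: short names for high-frequency commands.
alias k='kubectl'
alias g='git'
alias d='docker'
```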

The editor

I moved from Emacs to VS Code a few years ago, primarily for the ecosystem. I still miss a few things about Emacs — the composability, some of the navigation shortcuts — but I've made peace with the trade-off. I bring a keyboard-centric workflow with me: minimal mouse use, keyboard shortcuts for everything I do regularly.

I enforce .editorconfig in every project root. Tabs vs. spaces is solved by a config file, not a conversation.
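A representative .editorconfig, roughly what I drop into a project root (the specific values are illustrative, not a recommendation):

```ini
# .editorconfig
root = true

[*]
indent_style = space
indent_size = 2
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true

[Makefile]
indent_style = tab
```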

The thing I find most valuable: VS Code Remote Containers. I develop inside Docker containers that mirror production images. This means I never hit "it only happens in CI" issues because my local environment is the same environment CI is using. It adds a small overhead to startup, but it's eliminated an entire class of debugging sessions that I don't miss at all.
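The setup is driven by a devcontainer.json in the repo. A sketch, with the image name as a placeholder for whatever CI actually builds from:

```jsonc
// .devcontainer/devcontainer.json (sketch; image name is illustrative)
{
  "name": "backend",
  "image": "ghcr.io/example/backend-ci:latest",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  },
  "postCreateCommand": "make deps"
}
```

Because the container is built from the CI image, the editor, the tests, and the pipeline all see the same toolchain.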

Infrastructure and prototyping

Building distributed systems properly requires being able to simulate a cluster locally, which sounds more exotic than it is.

Docker and Kind for lightweight local Kubernetes clusters. I test Helm charts and operators locally before they touch a shared environment. This has caught misconfigured resource limits and malformed RBAC policies before they became other people's problems.
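A multi-node Kind config is worth the extra few lines over the default single-node cluster, because scheduling and anti-affinity bugs only show up with real workers. A sketch:

```yaml
# kind-config.yaml: a three-node local cluster so scheduling behaves
# more like a real multi-node environment.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Then `kind create cluster --config kind-config.yaml` and the Helm charts get exercised against something honest.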

LocalStack for mocking AWS services — S3, SQS, Lambda — without accumulating a cloud bill during development. This one I couldn't imagine working without now. Prototyping against real AWS during early development is expensive and slow; LocalStack is neither.
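LocalStack exposes everything through a single edge port, so the only change application code needs is an endpoint override. A minimal compose fragment (the service list is illustrative):

```yaml
# docker-compose.yml fragment: LocalStack on its default edge port.
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"          # single edge endpoint for all services
    environment:
      - SERVICES=s3,sqs,lambda
```

Pointing the AWS CLI at it is just `aws --endpoint-url=http://localhost:4566 s3 mb s3://scratch`.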

A local Jaeger and Prometheus stack for observability. I refuse to debug distributed transactions by grepping through logs. I want to see the trace, the spans, the timing. Setting this up locally was a small investment that has paid back many times over in debugging sessions that would otherwise have taken much longer.
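The whole stack is two containers. A sketch using the standard default ports:

```yaml
# docker-compose.yml fragment: local tracing and metrics.
services:
  jaeger:
    image: jaegertracing/all-in-one
    ports:
      - "16686:16686"   # Jaeger UI
      - "4317:4317"     # OTLP gRPC ingest
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"     # Prometheus UI and API
```

Services emit OTLP traces to localhost:4317 and the trace waterfall is one browser tab away.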

AI/ML workbench

When I'm probing LLMs or training small models, I need a Python environment that doesn't interact badly with my system tools. pyenv + poetry for this — every project gets an isolated virtualenv, no global pip installs, no dependency conflicts between projects. Jupyter Lab runs inside a container with GPU passthrough. This keeps CUDA drivers isolated from host OS updates, which has saved me from at least one scenario where an OS update would have broken a training environment mid-experiment.
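The per-project flow looks roughly like this (versions and package names illustrative):

```shell
# New ML project: pin the interpreter, keep the venv in-project.
pyenv install --skip-existing 3.11.9
pyenv local 3.11.9                       # writes .python-version
poetry config virtualenvs.in-project true
poetry init -n
poetry add jupyterlab
poetry run jupyter lab                   # runs inside the isolated venv
```

Nothing touches the system Python, and deleting the project directory deletes the environment with it.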

Setting up in 15 minutes

When I get a new machine, I run one command:

git clone https://github.com/pranavcode/dotfiles.git && cd dotfiles && ./install.sh

This bootstraps Homebrew, installs nvm, pyenv, tmux, and symlinks all my configs. By the time I've made coffee, I'm ready to work. The script is idempotent, so running it again on an existing machine just updates things to the current state. I run it occasionally even on machines I've had for a while, partly as a sync mechanism and partly as a sanity check that the script still works.
