Docker: the mental model
I've been using Docker for a while now, and I still regularly run into something that shifts my thinking and clarifies how it all fits together.
Before diving deep, here's what you need to keep in mind as a foundation:
Docker has two distinct phases, and mixing them up causes a lot of confusion early on.
Build time

This is when you run docker build. Docker reads your Dockerfile top to bottom, executes each instruction, and creates a snapshot (layer) after each one. If you rebuild and nothing changed on a particular line, Docker skips it and uses the cached layer — this is what makes rebuilds fast.
Commands you'll use at build time:
| Command | What it does |
|---|---|
| FROM | Sets the base image |
| RUN | Executes a shell command (installs packages, compiles code) |
| COPY / ADD | Copies files into the image |
| ARG | Defines a variable available only during build |
| ENV | Sets environment variables (also available at runtime) |
| EXPOSE | Documents which port the app uses (informational only) |
| USER | Sets which user runs the process |
Key distinction:
ARG disappears after the build. ENV sticks around and is available when the container runs.
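To make that concrete, here's a minimal Dockerfile sketch that uses these instructions (the base image, app setup, and port are placeholder choices, not a recommendation):

```dockerfile
# Every instruction below produces a cached layer at build time
FROM node:20-alpine             # base image (placeholder choice)

ARG BUILD_VERSION=dev           # build-only variable: gone after docker build
ENV APP_ENV=production          # baked into the image, visible at runtime

WORKDIR /app
COPY package*.json ./           # copy manifests first so this layer caches
RUN npm install                 # re-runs only when the manifests change
COPY . .                        # source edits invalidate layers from here down

EXPOSE 3000                     # documentation only; doesn't open the port
USER node                       # the process runs as this user, not root
```

Build it with docker build --build-arg BUILD_VERSION=1.2.3 . and BUILD_VERSION exists only while the image is being built. APP_ENV, by contrast, shows up in the running container's environment.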
Runtime

This is when you run docker run. Docker takes your image (the blueprint) and spins up a live container from it — adding a writable layer on top of all the read-only image layers. This is where your application actually runs.
Commands that execute at runtime:
| Command | What it does |
|---|---|
| CMD | Default command when the container starts |
| ENTRYPOINT | The main process of the container |
| VOLUME | Declares a mount point for persistent data |
| HEALTHCHECK | Tells Docker how to test if the container is healthy |
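The interplay between CMD and ENTRYPOINT trips people up, so here's a small sketch (myimage is a hypothetical image built with the two lines shown in the comments):

```bash
# Suppose the image's Dockerfile ends with:
#   ENTRYPOINT ["ping"]
#   CMD ["-c", "3", "localhost"]

docker run myimage                       # runs: ping -c 3 localhost
docker run myimage -c 1 8.8.8.8          # args replace CMD: ping -c 1 8.8.8.8
docker run --entrypoint sh -it myimage   # overriding ENTRYPOINT itself
```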
You can also interact with a running container using:
- docker exec — run a command inside the container
- docker attach — attach to the container's main process

This is the part most people skip, but it's what separates someone who uses Docker from someone who understands Docker.
Docker's power comes from three Linux kernel features working together. Individually, each one solves a piece of the puzzle. Together, they give you a container.
If you've ever done an Arch Linux install, you've used chroot. It changes the root directory (/) for a process, so that process thinks a specific folder is the entire filesystem. It can't navigate above it.
```bash
chroot /myapp /bin/bash
# This process now sees /myapp as /
# It cannot access anything above /myapp
```
This gives us filesystem isolation — the process can't touch files it shouldn't. But that's all it does. The process can still see every other process on the host, share the network, and consume as much memory as it wants. The walls exist on the sides, but the ceiling and floor are wide open.
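You can see this for yourself with a throwaway chroot built out of busybox. This is a sketch: the /tmp/jail path is arbitrary, and it assumes a statically linked busybox on the host.

```bash
mkdir -p /tmp/jail/bin
cp /bin/busybox /tmp/jail/bin/       # assumes a static busybox binary
ln -s busybox /tmp/jail/bin/sh       # busybox dispatches on the name it's called by
ln -s busybox /tmp/jail/bin/ls

sudo chroot /tmp/jail /bin/sh
# ls /               -> only /bin exists; the host filesystem is gone
# busybox hostname   -> still the host's hostname: only the filesystem moved
```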
Namespaces close the rest of the gaps. Each container gets its own set of namespaces, which means each container lives in its own little reality:

- pid: its own process tree (the container's main process sees itself as PID 1)
- net: its own network interfaces, IP addresses, and ports
- mnt: its own set of mount points
- uts: its own hostname
- ipc: its own inter-process communication channels
- user: its own user and group IDs
With namespaces in place, containers are genuinely unaware of each other. They're isolated environments, not just isolated folders.
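You don't need Docker to try this; the unshare tool from util-linux creates namespaces directly:

```bash
# Give a shell its own PID, network, mount, and UTS namespaces
sudo unshare --pid --net --mount --uts --fork --mount-proc /bin/bash

# Inside that shell:
#   echo $$       -> 1         (it believes it's the first process on the machine)
#   ip link       -> only lo   (a brand-new, empty network stack)
#   hostname box  -> renames the host only inside this namespace
```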
Control groups (cgroups) are the final piece. They don't deal with isolation — they deal with limits. cgroups let the kernel enforce how much of the host's resources any given container can consume:

- CPU time and cores
- memory (RAM and swap)
- disk I/O bandwidth
- number of processes
Without cgroups, a single runaway container could starve the entire host. With cgroups, you can say "this container gets 512MB of RAM and 0.5 CPU cores, and not a byte more."
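That sentence maps directly onto docker run flags (the image name is a placeholder):

```bash
# cgroups enforce these limits; the kernel throttles or kills offenders
docker run --memory=512m --cpus=0.5 myimage

# docker stats shows usage against the enforced limits in real time
docker stats
```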
```
┌─────────────────────────────────────┐
│            Container                │
│                                     │
│  chroot     → isolated filesystem   │
│  namespaces → isolated environment  │
│  cgroups    → limited resources     │
└─────────────────────────────────────┘
```
Think of it like building a room:

- chroot puts up the walls: the process only sees what's inside the room
- namespaces seal the ceiling and floor: it can't see or hear its neighbors
- cgroups install the meter: it can only draw so much power and water
Docker's job is to wire all three together automatically every time you run docker run. You never call these kernel primitives directly — Docker handles all of it under the hood.
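You can see the wiring on any running container. Here's a sketch using an nginx container as a stand-in:

```bash
docker run -d --name web nginx

# Find the container's main process on the host, then list its namespaces
pid=$(docker inspect --format '{{.State.Pid}}' web)
sudo ls -l /proc/$pid/ns     # its own pid, net, mnt, uts, and ipc entries

# lsns (util-linux) lists every namespace on the system
sudo lsns | grep $pid
```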
This is also why containers are fundamentally different from VMs. A VM virtualizes the hardware and runs a completely separate OS kernel. A container shares the host kernel and uses these three primitives to enforce isolation. That's why containers start in milliseconds while VMs take seconds to minutes.