dmai/blog


1. Docker to Kubernetes

Mar 26, 2026 · 15 min read
[Diagram: Containers vs. VMs, and the Docker workflow. A VM bundles a full guest OS and Linux kernel per app (heavy, slow to start); containers share the host kernel (light, instant start). Docker workflow: Dockerfile → build → image → run → container; the same image gives the same behavior everywhere. Docker runs one container well, but how do you run a distributed system?]

Containers: A Quick Refresher

A container is a lightweight, isolated process that bundles your application code with everything it needs to run: runtime, libraries, system tools, and config. Unlike a virtual machine, it doesn't carry an entire operating system. It shares the host kernel and uses Linux primitives like namespaces and cgroups for isolation.

FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]

This Dockerfile produces an image. That image runs identically on your laptop, in staging, and in production. That's the core promise of containers: consistency across environments.

Docker made this workflow accessible. Build an image, push it to a registry, pull it anywhere, run it. No more "works on my machine." No more dependency hell. That alone was revolutionary.

But containers are a packaging and runtime tool. They answer the question: "How do I run this one process reliably?" They don't answer: "How do I run a distributed system?"

The Problem: Containers at Scale

Let's say you're running a web application with three services: an API, a worker that processes background jobs, and a frontend. Each is containerized. In development, you run them with docker compose up and everything's fine.
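For that development setup, a minimal Compose file might look like the following sketch (image names and ports are illustrative, not from the original):

```yaml
# compose.yaml (illustrative): the three services from the example above.
# Compose handles the single-host development case well; the production
# questions start where a single host ends.
services:
  api:
    image: myregistry/api:v1
    ports:
      - "3000:3000"
  worker:
    image: myregistry/worker:v1
    depends_on:
      - api
  frontend:
    image: myregistry/frontend:v1
    ports:
      - "8080:80"
```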

Now you need to go to production. And production has questions that Docker can't answer on its own:

Scheduling: You have 12 servers. Where does each container go? Which machines have enough CPU and memory? Doing this manually with docker run on individual machines doesn't scale.

Networking: Your API container on Server 3 needs to talk to your worker container on Server 7. How do they find each other? IP addresses change when containers restart. You need service discovery.

Failure Recovery: Containers crash. Servers go down. A memory leak kills your API at 3 AM. With Docker alone, a crashed container stays crashed until someone SSHs in and runs docker start again.

Scaling: Traffic spikes at 9 AM when users log in. You need 20 API containers, not 3. At 2 AM, you're wasting money running 20. You need auto-scaling.

Deployments: You have v2 ready. Stopping all v1 containers and starting v2 means downtime. You need rolling updates and fast rollback.

Configuration and Secrets: Your API needs a database password. How do you inject it at runtime across 40 containers on 12 servers? And how do you rotate it without redeploying everything?

The Restaurant Analogy

Think of Docker as a really good recipe. It guarantees that your dish comes out the same every time, no matter which kitchen you're in. Consistent ingredients, consistent steps, consistent result.

But a recipe doesn't run a restaurant.

A restaurant needs a head chef (scheduler) deciding which cook handles which order. It needs a floor plan (networking) so waiters know which table to deliver to. It needs a manager (controller) who notices when a cook calls in sick and reassigns their station. It needs a system for scaling up during the dinner rush and scaling down on a Tuesday afternoon.

Kubernetes is the restaurant management system. Docker is the recipe. You need both, but they solve fundamentally different problems.

How Kubernetes Solves Each Problem

Kubernetes (K8s) is a container orchestration platform. You describe what you want — "run 5 copies of my API, each with 512MB of memory" — and Kubernetes figures out how to make it happen and keep it that way.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 5
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myregistry/api:v2
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"

The key concept is declarative configuration. You don't tell Kubernetes how to do something. You tell it what you want, and it continuously works to make reality match your declaration.

→ A Deployment declares how many replicas and what image to run
→ A Service provides stable DNS and load balancing across pods
→ An HPA watches metrics and adjusts replica count automatically
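The other two objects can be sketched alongside the Deployment above. Names and thresholds here are illustrative: a Service that gives the API pods a stable DNS name, and an HPA that scales them on CPU utilization.

```yaml
# Service (illustrative): a stable virtual IP and DNS name "api",
# load-balancing across all pods labeled app: api.
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 3000
---
# HorizontalPodAutoscaler (illustrative): keep average CPU near 70%,
# scaling the Deployment between 3 and 20 replicas.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Pods come and go, but the Service name stays put, which is what makes service discovery work.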

Declarative: Say What, Not How

This is the most important mental shift. With scripts, you write how to get to the desired state. With Kubernetes, you write what the desired state is, and K8s figures out how to get there.

# Deploy v2 to 12 servers manually
for server in server{1..12}; do
  ssh "$server" "docker pull myapp:v2"
  ssh "$server" "docker stop myapp && docker rm myapp"
  ssh "$server" "docker run -d --name myapp myapp:v2"
done
# What if server5 is down?
# What if the pull fails on server8?
# What if v2 crashes on startup?

Kubernetes runs a continuous reconciliation loop: it compares the desired state (your YAML) with the actual state (what's running), and takes action to close the gap. If a pod crashes, K8s starts a new one. If a node goes down, K8s reschedules its pods elsewhere.

You don't write scripts. You don't SSH into servers. You declare intent and let the system reconcile.
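The reconciliation idea itself is conceptually tiny. As a toy sketch in shell (no relation to the real controller code): observe the actual state, compare it with the desired state, and take one corrective action at a time until they match.

```shell
#!/bin/sh
# Toy reconciliation loop: desired state is 5 replicas, actual state
# is 2. Each iteration closes the gap by one step, mimicking the
# observe-compare-act cycle a Kubernetes controller runs continuously.
desired=5
actual=2
while [ "$actual" -ne "$desired" ]; do
  if [ "$actual" -lt "$desired" ]; then
    actual=$((actual + 1))   # "start a pod"
  else
    actual=$((actual - 1))   # "stop a pod"
  fi
done
echo "reconciled: $actual replicas"
```

The real loop never exits: controllers keep watching, so a crashed pod is simply a new gap between desired and actual state to close.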

What You Don't Need Kubernetes For

Let's be honest: Kubernetes is complex. If you're running a single service on a single server, it's overkill. Docker Compose or a managed service like Cloud Run, ECS, or Fly.io will serve you better with a fraction of the operational burden.

Kubernetes earns its complexity when you have:

✓ Multiple services that need to discover and communicate with each other
✓ Workloads that need to scale independently based on different metrics
✓ Strict requirements for zero-downtime deployments and automatic rollback
✓ Teams that need self-service deployment without infrastructure tickets

The threshold isn't a specific number of containers. It's when the operational problems of running containers start consuming more engineering time than the application problems you're trying to solve.

What We're Building in This Series

Over the next 9 parts, we'll build a production-grade Kubernetes setup from scratch. Here's the roadmap:

1. From Docker to Kubernetes — you are here ←
2. Kubernetes Architecture — control plane, nodes
3. Pods, Deployments, and ReplicaSets — core primitives
4. Services and Networking — discovery, DNS
5. ConfigMaps, Secrets, and Environment Management — config
6. Storage and Persistence — volumes, PVCs
7. Ingress and Traffic Management — external access
8. Health Checks, Resource Limits, and PDBs — hardening
9. Helm and Kustomize — templating
10. CI/CD with Kubernetes — GitOps

Each part will include real manifests you can apply to a local cluster (we'll use kind or minikube), real debugging scenarios, and the gotchas that documentation glosses over.

By the end, you'll have a working mental model of Kubernetes and the confidence to deploy, debug, and operate containerized applications in production. Not just follow tutorials — actually understand what's happening and why.

Let's get started. Part 2 drops next week.

© 2026 dmai/blog Engineer Notes. All rights reserved.