🐳 Docker Up and Running
Docker helps developers build, share, run, and verify applications anywhere — without tedious environment configuration or management.
Introduction
When you install Docker, you get two major components:
- The Docker client
- The Docker engine (sometimes called the "Docker daemon")
The Docker engine is the core software that runs and manages containers. It implements the runtime, API, and everything else required to run containers. Once installed, you can use the docker version command to test that the client and daemon (server) are running and talking to each other.
docker version
# Client:
# ...
# Server:
# ...
If you get a response back from both the Client and Server, you're good to go. If not, check that the Docker daemon is running (sudo systemctl status docker on Linux).
Images
Image, Docker image, container image, and OCI (Open Container Initiative) image all mean the same thing. A container image is a read-only package that contains everything you need to run an application: application code, dependencies, a minimal OS, and metadata. A single image can be used to start one or more containers.
You get container images by pulling them from a registry. The most common registry is Docker Hub, but others exist (e.g., GitHub Container Registry, Google Artifact Registry, private registries). The pull operation downloads an image to your local Docker host where Docker can use it to start containers.
Pulling Images
docker images # List all local images; add --digests to see SHA256 digests.
docker pull <repository>:<tag> # Pull an image from a registry
docker pull mcr.microsoft.com/powershell:latest # Pull from a Microsoft registry
Note: the tag is optional and defaults to latest. The latest tag does not guarantee it is the most recent image in the repository.
Filtering and Formatting
docker images --filter dangling=true # Show only dangling (untagged) images
docker images --filter=reference="*:latest" # Show only images tagged latest
docker images --format "{{.Repository}}: {{.Tag}}: {{.Size}}" # Custom output using a Go template
Dangling images are untagged images (<none>:<none>) that are safe to remove.
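To clean them up in one go, docker image prune removes all dangling images (it asks for confirmation unless you pass -f):
docker image prune -f # Remove all dangling images without prompting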
Deleting Images
docker rmi alpine:latest
Deleting an image removes it and its layers from your Docker host, unless those layers are shared by other images.
Images and Layers
A Docker image is a stack of read-only layers. Each layer represents a set of file changes or additions. All images start with a base layer (e.g., Ubuntu, Alpine). Adding dependencies or source code creates new layers. Layers are cached, so rebuilding similar images is fast.
Inspect image details:
docker inspect ubuntu:latest
docker history ubuntu:latest
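As a quick illustration of the layer stack, you can print the layer digests an image is built from (the exact digests will differ on your machine):
docker inspect --format '{{json .RootFS.Layers}}' ubuntu:latest # Print the image's layer digests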
Containers
A container is a runtime instance of an image. It includes the image plus a thin writable layer and its own process tree, networking, and mounts.
Running Containers
docker run -it ubuntu:latest /bin/bash
- -it: Interactive terminal
- -d: Detached mode (run in background)
- -p: Port mapping (host:container)
- --name: Name your container
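For example, the following starts a container from the nginx image (used here purely as an illustration) in the background, names it web, and maps host port 8080 to container port 80:
docker run -d --name web -p 8080:80 nginx:latest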
Common Commands
- docker ps: List running containers (-a for all)
- docker exec -it <container> bash: Run a shell in a running container
- docker stop <container>: Stop a running container
- docker start <container>: Start a stopped container
- docker rm <container>: Remove a stopped container
- docker inspect <container>: Show detailed info
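Putting those together, a typical lifecycle for the web container from the previous example might look like this:
docker ps # Confirm the container is running
docker exec -it web sh # Open a shell inside the running container
docker stop web # Stop it
docker start web # Start it again
docker stop web && docker rm web # Stop and remove it when finished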
Containerizing an App
The typical workflow:
- Write your application code and dependencies.
- Create a Dockerfile describing how to build and run your app.
- Build the image:
docker build -t myapp:latest .
- (Optional) Push the image to a registry.
- Run a container from the image.
Example: Node.js App
.dockerignore (excludes files from the build context):
node_modules
.git
.gitignore
*.md
dist
Dockerfile:
FROM alpine
LABEL maintainer="foo@bar.com"
RUN apk add --update nodejs npm
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
ENV PORT=8080
EXPOSE 8080
ENTRYPOINT ["node", "./index.js"]
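Assuming the Dockerfile and .dockerignore above sit next to your application code, building and running the app could look like this (the image name node-app is just an example):
docker build -t node-app:1.0 . # Build the image from the Dockerfile in the current directory
docker run -d --name node-app -p 8080:8080 node-app:1.0 # Map host port 8080 to the PORT the app listens on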
Multi-stage Builds for Production
Multi-stage builds reduce image size by separating build and runtime dependencies.
FROM node:20-slim AS base
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
COPY . /app
WORKDIR /app
FROM base AS prod-deps
RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --prod --frozen-lockfile
FROM base AS build
RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --frozen-lockfile
RUN pnpm run build
FROM base
COPY --from=prod-deps /app/node_modules /app/node_modules
COPY --from=build /app/dist /app/dist
EXPOSE 8000
CMD [ "pnpm", "start" ]
Build with:
docker build -t multi:stage .
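To verify the effect, compare the image size against a single-stage build; the multi-stage image is typically much smaller:
docker images --format "{{.Repository}}:{{.Tag}} {{.Size}}" multi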
Volumes and Persistent Data
Containers are ephemeral by default. To persist data, use volumes.
- Volumes are managed by Docker and exist outside the container lifecycle.
- Volumes can be shared between containers and mapped to external storage.
- Data in a volume persists even if the container is deleted.
Creating and Using Volumes
docker volume create myvol
docker run -it --name voltainer --mount source=myvol,target=/vol alpine
- If you specify an existing volume, Docker will use the existing volume
- If you specify a volume that doesn’t exist, Docker will create it for you
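A quick way to see the persistence in action (the file name data.txt is just an example): write into the volume from one throwaway container, then read it back from another:
docker run --rm --mount source=myvol,target=/vol alpine sh -c "echo hello > /vol/data.txt" # Container is removed, data stays in the volume
docker run --rm --mount source=myvol,target=/vol alpine cat /vol/data.txt # A new container still sees the data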
Volume Commands
- docker volume ls: List volumes
- docker volume inspect <volume>: Inspect a volume
- docker volume prune: Remove unused volumes (dangerous)
- docker volume rm <volume>: Remove a specific volume
Deploying Apps with Compose
Docker Compose lets you define and run multi-container applications using YAML.
compose.yaml (or docker-compose.yml):
services:
  web:
    build: .
    command: python app.py
    ports:
      - target: 8080
        published: 5000
    networks:
      - counter-net
    volumes:
      - type: volume
        source: counter-vol
        target: /app
  redis:
    image: 'redis:alpine'
    networks:
      - counter-net
networks:
  counter-net:
volumes:
  counter-vol:
- build: Build from the Dockerfile in the current directory
- command: Command to run
- ports: Map container ports to host ports
- networks: Attach to a network
- volumes: Mount a volume
Start the app:
docker compose up -d
Compose Commands
- docker compose stop: Stop all containers
- docker compose rm: Remove stopped service containers
- docker compose restart: Restart containers
- docker compose ps: List containers and their status
- docker compose down: Stop and remove containers and networks
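For example, a typical session with the Compose file above might look like this:
docker compose up -d # Build (if needed) and start all services in the background
docker compose ps # Check the status of the services
docker compose logs web # Show the logs of the web service
docker compose down # Stop and remove the containers and networks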
Best Practices & Tips
- Use .dockerignore to avoid copying unnecessary files into images.
- Keep images small by using multi-stage builds and minimal base images (e.g., Alpine).
- Tag images with meaningful versions, not just latest.
- Use environment variables for configuration (ENV in Dockerfile, -e in docker run).
- Regularly prune unused images, containers, and volumes to save disk space.
- Use health checks (HEALTHCHECK in Dockerfile) for production containers.
- Monitor resource usage with docker stats.
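As a concrete starting point for the pruning and monitoring tips, these commands are handy (prune deletes data, so review what it reports before confirming):
docker system prune # Remove stopped containers, unused networks, dangling images, and build cache
docker stats --no-stream # One-shot snapshot of CPU and memory usage per container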
Summary
Docker revolutionized application development by making it easy to build, ship, and run applications anywhere. Containers are lightweight, portable, and efficient compared to traditional virtual machines. By mastering images, containers, volumes, and Compose, you can streamline development, testing, and deployment workflows.
For more details, see the official Docker documentation.