Docker Networking
- Containers are not exposed to the outside world by default. Each container gets its own network namespace with a private IP address that is only reachable within Docker’s virtual network.
- To expose a container to the host or internet, you must explicitly publish ports:
```sh
docker run -p 8080:80 nginx
```
Network Drivers
Docker ships with several built-in network drivers:
| Driver | Use Case |
|---|---|
| bridge | Default. Containers on the same host communicate via a virtual bridge. Isolated from host network. |
| host | Container shares the host’s network stack directly. No port mapping needed. Loses network isolation. |
| none | Disables all networking. Container has only a loopback interface. |
| overlay | Multi-host networking for Docker Swarm. Creates a virtual network spanning multiple Docker hosts. |
| macvlan | Assigns a real MAC address to the container. Container appears as a physical device on the LAN. |
| ipvlan | Like macvlan but shares the host’s MAC. Useful where MAC proliferation is a problem. |
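As a quick illustration of selecting a driver, a hedged sketch (the network names, subnet, gateway, and `parent=eth0` interface are assumptions for this example; macvlan requires a real host NIC):

```sh
# User-defined bridge: the common single-host case
docker network create --driver bridge demo-bridge

# none: the container gets only a loopback interface
docker run --rm --network none alpine ip addr

# macvlan: parent must be a real host interface (eth0 assumed here)
docker network create --driver macvlan \
  --subnet 192.168.1.0/24 --gateway 192.168.1.1 \
  -o parent=eth0 demo-macvlan
```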
Bridge Network (default)
```sh
# Containers on the default bridge can reach each other by IP, NOT by name
docker run -d --name app1 nginx
docker run -d --name app2 nginx

# User-defined bridge networks support DNS resolution by name
docker network create my-net
docker run -d --name app1 --network my-net nginx
docker run -d --name app2 --network my-net nginx
# Now app2 can ping app1 by name: ping app1
```
- Always use user-defined bridge networks in production. The default bridge lacks DNS name resolution between containers.
Host Network
```sh
# Container uses host networking - no port mapping
docker run -d --network host nginx
# nginx is now accessible on host port 80 directly
```
- Useful for performance-sensitive applications or when the container needs to manage host interfaces.
- Historically not available on Docker Desktop (macOS/Windows) - the VM boundary prevents direct host network access. Recent Docker Desktop releases offer an opt-in host networking setting.
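On a Linux host this is easy to verify; a sketch (assumes host port 80 is free and the `host-nginx` name is unused):

```sh
# Start nginx on the host's network stack - note: no -p flag at all
docker run -d --name host-nginx --network host nginx

# The server answers on the host's own port 80
curl -sI http://localhost:80 | head -n 1

docker rm -f host-nginx   # clean up
```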
How Docker Implements Networking
Under the hood, Docker uses three Linux primitives:
Network Namespaces
- Each container gets its own network namespace with its own interfaces, IP address, routing table, and iptables rules.
- The namespace is created by the Docker daemon when the container starts.
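One way to see a container's namespace from the host is to link it into `ip netns` via `/proc`; a sketch for Linux with root (`app1` is an assumed container name, and Docker does not register its namespaces with `ip netns` by default):

```sh
# Find the container's init PID, then expose its netns to `ip netns`
pid=$(docker inspect -f '{{.State.Pid}}' app1)
sudo mkdir -p /var/run/netns
sudo ln -sf "/proc/$pid/ns/net" /var/run/netns/app1

# Now run host tools inside the container's network namespace
sudo ip netns exec app1 ip addr   # shows the container's lo and eth0
```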
veth Pairs (Virtual Ethernet)
- Docker creates a veth pair - two virtual interfaces linked together at the kernel level.
- One end goes inside the container namespace (appears as `eth0` inside the container).
- The other end attaches to a virtual bridge on the host (`docker0` for the default bridge).
- Traffic flows: container → veth → bridge → host/internet.

```sh
# See the docker0 bridge and its connected veth interfaces
ip addr show docker0
ip link show | grep veth
```
iptables Rules
- Docker manages iptables rules on the host to handle port publishing, NAT, and inter-container firewall policies.
- `docker run -p 8080:80` adds an iptables DNAT rule: traffic arriving on host port 8080 gets forwarded to the container's port 80.
- nftables gotcha (Debian 12+, RHEL 9+): These distros default to nftables. Docker still writes iptables rules (via the `iptables-nft` compatibility layer). If you mix raw `nft` rules with Docker, verify rules aren't silently dropped. Check with `iptables -L -n` and `nft list ruleset` to see both views.
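You can see these rules in the `DOCKER` chain of the nat table; a sketch (needs root, and the sample rule line below is illustrative, not captured output):

```sh
# Docker's published-port rules live in the nat table's DOCKER chain
sudo iptables -t nat -L DOCKER -n

# A -p 8080:80 publish shows up roughly as (illustrative line):
rule='DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 to:172.17.0.2:80'
echo "${rule##* }"   # the rewrite target: to:172.17.0.2:80
```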
Network Commands
```sh
# List networks
docker network ls

# Create a custom bridge network
docker network create my-net

# Create with a specific subnet
docker network create --driver bridge --subnet 192.168.50.0/24 my-net

# Connect a running container to a network
docker network connect my-net my-container

# Disconnect a container from a network
docker network disconnect my-net my-container

# Inspect network details (connected containers, IPs, etc.)
docker network inspect my-net

# Remove unused networks
docker network prune
```
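A subnet you control also lets you pin a static container address; a sketch (`demo-net`, the subnet, and the addresses are assumptions; `--ip` requires a user-defined network created with an explicit subnet):

```sh
# Create a network with a known subnet, then assign a fixed IP
docker network create --subnet 192.168.50.0/24 demo-net
docker run -d --name web --network demo-net --ip 192.168.50.10 nginx
```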
Port Publishing
```sh
# Publish a single port: host:container
docker run -p 8080:80 nginx

# Publish on a specific host interface
docker run -p 127.0.0.1:8080:80 nginx  # Only accessible from localhost

# Publish all EXPOSE'd ports to random host ports
docker run -P nginx

# View published ports for a container
docker port my-nginx
```
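`docker port` prints one mapping per line in the form `80/tcp -> 0.0.0.0:8080`; the host port can be peeled off in plain shell (the sample line is illustrative, not captured output):

```sh
# Sample `docker port` output line (illustrative)
line='80/tcp -> 0.0.0.0:8080'
host_port=${line##*:}   # strip everything through the last colon
echo "$host_port"       # 8080
```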
Sidecar Network Pattern
```sh
# Two containers sharing the same network namespace (used in service mesh sidecars)
docker run -d --name app my-app
docker run -d --network container:app \
  --name sidecar envoy:latest
# "sidecar" sees the same network interfaces, IP, and ports as "app"
```
This is how Envoy/Istio sidecars intercept traffic without modifying the application container.
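A quick way to confirm the shared namespace; a sketch (uses a throwaway alpine container rather than a real Envoy sidecar, and `app` is an assumed name):

```sh
docker run -d --name app nginx

# Any container joined with --network container:app sees app's interfaces
docker run --rm --network container:app alpine ip addr
# The eth0/IP printed here matches what `docker exec app ip addr` shows
```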
Built-in DNS and Load Balancing
- User-defined networks include a built-in DNS server (127.0.0.11) that resolves container names and service names to their current IP.
- In Docker Swarm, the built-in DNS also provides VIP-based load balancing - a single service name resolves to a virtual IP, and Swarm distributes connections across all healthy replicas.
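The embedded DNS can be watched from inside a container; a sketch (`dns-demo` and `web` are assumed names):

```sh
docker network create dns-demo
docker run -d --name web --network dns-demo nginx

# resolv.conf points at 127.0.0.11, and the name `web` resolves to web's IP
docker run --rm --network dns-demo alpine sh -c \
  'cat /etc/resolv.conf && nslookup web'
```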