
Docker Networking

  • Containers are not exposed to the outside world by default. Each container gets its own network namespace with a private IP address that is only reachable within Docker’s virtual network.
  • To expose a container to the host or internet, you must explicitly publish ports: docker run -p 8080:80 nginx
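Publishing can be verified from the host. A quick sketch, assuming curl is available and the container name "web" (an example name, not from the text above) is free:

```shell
# Publish container port 80 on host port 8080 ("web" is an example name)
docker run -d --name web -p 8080:80 nginx

# Once nginx is up, the published port should answer with HTTP 200
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080

# Clean up the example container
docker rm -f web
```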

Docker ships with several built-in network drivers:

Driver  | Use Case
--------|---------
bridge  | Default. Containers on the same host communicate via a virtual bridge. Isolated from the host network.
host    | Container shares the host's network stack directly. No port mapping needed. Loses network isolation.
none    | Disables all networking. Container has only a loopback interface.
overlay | Multi-host networking for Docker Swarm. Creates a virtual network spanning multiple Docker hosts.
macvlan | Assigns a real MAC address to the container. Container appears as a physical device on the LAN.
ipvlan  | Like macvlan but shares the host's MAC. Useful where MAC proliferation is a problem.
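The driver is selected with `-d` when creating a network. A sketch covering a few of these; the macvlan `parent` interface name `eth0` is an assumption about the host:

```shell
# bridge is the default driver when -d is omitted
docker network create -d bridge demo-bridge

# macvlan needs a parent interface on the host; eth0 is an assumption here
docker network create -d macvlan \
  --subnet 192.168.1.0/24 --gateway 192.168.1.1 \
  -o parent=eth0 demo-macvlan

# host and none are predefined networks: you attach to them, you don't create them
docker run --rm --network none alpine ip addr   # only the loopback interface appears
```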
```shell
# Containers on the default bridge can reach each other by IP, NOT by name
docker run -d --name app1 nginx
docker run -d --name app2 nginx

# Remove them before reusing the names below
docker rm -f app1 app2

# User-defined bridge networks support DNS resolution by name
docker network create my-net
docker run -d --name app1 --network my-net nginx
docker run -d --name app2 --network my-net nginx
# Now app2 can ping app1 by name: ping app1
```
  • Always use user-defined bridge networks in production. The default bridge lacks DNS name resolution between containers.
```shell
# Container uses host networking - no port mapping
docker run -d --network host nginx
# nginx is now accessible on host port 80 directly
```
  • Useful for performance-sensitive applications or when the container needs to manage host interfaces.
  • Historically not available on Docker Desktop (macOS/Windows) because the VM boundary prevents direct host network access; newer Docker Desktop releases offer host networking only as an opt-in feature.

Under the hood, Docker uses three Linux primitives:

  • Network namespaces: each container gets its own network namespace with its own interfaces, IP address, routing table, and iptables rules.
  • The namespace is created by the Docker daemon when the container starts.
  • veth pairs: Docker creates a veth pair - two virtual interfaces linked together at the kernel level.

  • One end goes inside the container namespace (appears as eth0 inside the container).

  • The other end attaches to a virtual bridge on the host (docker0 for the default bridge).

  • Traffic flows: container → veth → bridge → host/internet.

```shell
# See the docker0 bridge and its connected veth interfaces
ip addr show docker0
ip link show | grep veth
```
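The two ends of a veth pair can be matched up by interface index. A sketch, assuming a running container named app1 attached to the default bridge:

```shell
# eth0 inside the container records the index of its host-side peer in iflink
PEER_INDEX=$(docker exec app1 cat /sys/class/net/eth0/iflink)

# That index picks out the matching veth interface on the host
ip -o link | grep "^${PEER_INDEX}:"
```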
  • iptables: Docker manages iptables rules on the host to handle port publishing, NAT, and inter-container firewall policies.
  • docker run -p 8080:80 adds an iptables DNAT rule: traffic arriving on host port 8080 gets forwarded to the container’s port 80.
  • nftables gotcha (Debian 12+, RHEL 9+): These distros default to nftables. Docker still writes iptables rules (via iptables-nft compatibility layer). If you mix raw nft rules with Docker, verify rules aren’t silently dropped. Check with iptables -L -n and nft list ruleset to see both views.
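The DNAT rule can be seen directly in the nat table. A quick check, assuming a container was started with -p 8080:80 and root privileges on the host:

```shell
# Docker's port-publishing rules live in the DOCKER chain of the nat table
sudo iptables -t nat -L DOCKER -n
# Look for a DNAT entry mapping dpt:8080 to port 80 on the container's IP

# The same rules as seen from the nftables side
sudo nft list ruleset | grep -i -A 2 dnat
```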
```shell
# List networks
docker network ls

# Create a custom bridge network
docker network create my-net

# Create with a specific subnet
docker network create --driver bridge --subnet 192.168.50.0/24 my-net

# Connect a running container to a network
docker network connect my-net my-container

# Disconnect a container from a network
docker network disconnect my-net my-container

# Inspect network details (connected containers, IPs, etc.)
docker network inspect my-net

# Remove unused networks
docker network prune
```
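The inspect output can be trimmed with a Go template; for example, listing just the name and IP of each attached container (assuming my-net exists and has containers on it):

```shell
# Print name -> IPv4 for every container attached to my-net
docker network inspect my-net \
  --format '{{range .Containers}}{{.Name}} -> {{.IPv4Address}}{{"\n"}}{{end}}'
```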
```shell
# Publish a single port: host:container
docker run -p 8080:80 nginx

# Publish on a specific host interface (only accessible from localhost)
docker run -p 127.0.0.1:8080:80 nginx

# Publish all EXPOSE'd ports to random host ports
docker run -P nginx

# View published ports for a container
docker port my-nginx
```
```shell
# Two containers sharing the same network namespace (used in service-mesh sidecars)
docker run -d --name app my-app
docker run -d --network container:app \
  --name sidecar envoy:latest
# "sidecar" sees the same network interfaces, IP, and ports as "app"
```

This is how Envoy/Istio sidecars intercept traffic without modifying the application container.
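The shared namespace can be confirmed by comparing interfaces from both containers. A sketch, assuming the app/sidecar pair above and that both images ship iproute2:

```shell
# Both commands report identical interfaces and IPs - there is only one namespace
docker exec app ip -o addr
docker exec sidecar ip -o addr

# Ports are shared too: a listener opened in either container is visible to both
```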

  • User-defined networks include a built-in DNS server (127.0.0.11) that resolves container names and service names to their current IP.
  • In Docker Swarm, the built-in DNS also provides VIP-based load balancing - a single service name resolves to a virtual IP, and Swarm distributes connections across all healthy replicas.
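The embedded resolver is visible from inside any container attached to a user-defined network. A quick check, assuming the my-net network from earlier:

```shell
# resolv.conf inside the container points at Docker's embedded DNS server
docker run --rm --network my-net alpine cat /etc/resolv.conf
# nameserver 127.0.0.11

# Other containers on my-net resolve by name through that server
docker run --rm --network my-net alpine nslookup app1
```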