All notes


# Docker requires a 64-bit OS and version 3.10 or higher of the Linux kernel.
uname -r

# Install on Arch.
# 1. Install the community package:
sudo pacman -S docker
# Or install the AUR package:
# yaourt -S docker-git
# 2. Start and enable service:
sudo systemctl start docker
sudo systemctl enable docker
# 3. Edit the file in /etc/systemd/network/ on your Docker host and add the following block:
# [Network]
# ...
# IPForward=kernel
# Uninstall
sudo pacman -R docker
# Uninstall including dependencies
sudo pacman -Rns docker
# Delete all images, containers, and volumes:
rm -rf /var/lib/docker

# Error: Error initializing network controller...
# Solution: a reboot might be required if you have upgraded your kernel recently without rebooting and the bridge module was built for the more recent kernel.

# Error: docker info says "cannot connect to the Docker daemon"
# Solution: sudo docker info.
# "docker version" does not need sudo.
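To avoid sudo entirely, a common approach (a sketch; assumes the package created a "docker" group, and requires logging out and back in) is to add your user to that group:

```shell
# Create the group if missing and add the current user to it.
sudo groupadd -f docker
sudo usermod -aG docker "$USER"
# Log out and back in (or run "newgrp docker"), then verify:
docker info
```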

# Verify installation
docker version
docker run hello-world
# List all containers (including stopped ones):
docker ps -a

Play around. See the docker docs.

# ubuntu is the image you run.
docker run ubuntu /bin/echo 'Hello world'
# Hello world

# -t: allocates a pseudo-TTY.
# -i: interactive; keeps standard in (STDIN) open.
docker run -ti ubuntu /bin/bash

# -d: detached mode (run in background). Returns the container ID.
docker run -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
# 1e5535038e285177d5214659a068137486f96ee5c2e85a4ac52dc83f2ebe4147

# List all containers (running and stopped):
docker ps -a

# Look inside the container:
docker logs 1e5535
# hello world
# ...

docker inspect 1e

docker stop 1e5535
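docker inspect output can be narrowed with a Go template via --format; a small sketch using the container ID from the example above:

```shell
# Print only the running state and the container's IP address.
docker inspect --format '{{.State.Running}}' 1e5535
docker inspect --format '{{.NetworkSettings.IPAddress}}' 1e5535
```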

Work with Docker Hub

docker login -u me
docker login --username=me
# Token stored in ~/.docker/config.json

# Repository names must be lowercase.
docker push me/myproj:haha

Normal operations

Remove all unused containers

SO: remove all docker containers.

# Show all containers.
docker ps -a
docker rm $(docker ps --no-trunc -q -f status=exited)

# Remove all containers since the container 40e0, but don't remove 40e0 itself.
docker rm $(docker ps -f since=40e028726ad4 -aq)

# Find all containers exited with status 0.
docker ps -f 'exited=0' -aq

Good examples


Docker doc: docker volumes.

########## Anonymous volume

# Create a volume inside the new container:
docker run -d -P --name containerName -v /webapp repo/someImage someCommand
# Inspect to see the low-level info:
docker inspect containerName
# "Mounts": [
#     {
#         "Name": "fac362...80535",
#         "Source": "/var/lib/docker/volumes/fac362...80535/_data",
#         "Destination": "/webapp",
#         "Driver": "local",
#         "Mode": "",
#         "RW": true,
#         "Propagation": ""
#     }
# ]
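The Mounts info above can be extracted directly instead of reading the full inspect output; a sketch:

```shell
# Print only the Mounts section as JSON.
docker inspect --format '{{ json .Mounts }}' containerName
```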

# Mount a host directory as a data volume

# Mounts the host directory /src/webapp, into the container at /webapp.
docker run -d -P --name containerName -v /src/webapp:/webapp repo/someImage someCommand
# On Windows (Windows-style host and container paths):
docker run -v c:\path:c:\containerPath ...

# Docker creates a named volume "foo":
docker run -d -P --name containerName -v foo:/webapp repo/someImage someCommand

# Use the flocker plugin in order to use shared volumes such as NFS.

# Use volume command to create a volume first.
docker volume create -d flocker --opt o=size=20GB my-named-volume
# Then use the volume:
docker run -d -P -v my-named-volume:/webapp --name web training/webapp python

########## Data volume container
# Best for: share between containers, or want to use from non-persistent containers.

# Reuses the training/postgres image so that all containers are using layers in common, saving disk space.
docker create -v /dbdata --name dbstore training/postgres /bin/true
# use the --volumes-from flag to mount the /dbdata volume in another container.
docker run -d --volumes-from dbstore --name db1 training/postgres
docker run -d --volumes-from dbstore --name db2 training/postgres
# The same:
docker run -d --name db3 --volumes-from db1 training/postgres

# backup the contents of the dbdata volume to a backup.tar file inside our /backup directory.
docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
# Restore it to a new container:
docker run -v /dbdata --name dbstore2 ubuntu /bin/bash
docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
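A quick sanity check that the restore worked (assumes the container names from the example above):

```shell
# List the restored files inside dbstore2's volume.
docker run --rm --volumes-from dbstore2 ubuntu ls -l /dbdata
```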

########## Other

# Mount Read-only.
docker run -d -P --name web -v /src/webapp:/webapp:ro training/webapp python

# Shared volume labels ":z" allow all containers to read/write content.
docker run -d -P --name web -v /src/webapp:/webapp:z training/webapp python
# Conversely, ":Z" is private unshared label.

# Record bash history from container to host.
docker run --rm -it -v ~/.bash_history:/root/.bash_history ubuntu /bin/bash

# Find dangling volumes, which have no container reference:
docker volume ls -f dangling=true

# Anonymous /foo volume will be removed afterwards. But "awesome" will persist.
docker run --rm -v /foo -v awesome:/bar busybox top

Note: you can't mount a host directory from a Dockerfile.

Handle with registry

# Start your registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Get any image from the hub and tag it to point to your registry:
docker pull ubuntu && docker tag ubuntu localhost:5000/ubuntu
# Push it to your registry:
docker push localhost:5000/ubuntu
# Pull it back from your registry:
docker pull localhost:5000/ubuntu

# To stop your registry, you would:
docker stop registry && docker rm -v registry

Use plain HTTP for private registry

docker Docs: insecure registry.

Add DOCKER_OPTS="--insecure-registry" to the /etc/default/docker or /etc/sysconfig/docker file, then restart the docker daemon.

Mac OS X

SO: set the insecure registry flag on mac OS.

Directly add the IP:port on: "Settings - Daemon - Basic - Insecure Registries".



# Attach to a container
docker attach nonenetcontainer
# You can detach from the container and leave it running with CTRL-p CTRL-q.
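The detach sequence can be changed per container with --detach-keys; a sketch:

```shell
# Use Ctrl-x to detach instead of the default CTRL-p CTRL-q.
docker attach --detach-keys="ctrl-x" nonenetcontainer
```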


-q, --quiet                      Suppress the build output and print image ID on success

-t, --tag list                   Name and optionally a tag in the 'name:tag' format (default [])
-f, --file string                Name of the Dockerfile (Default is 'PATH/Dockerfile')
    --label list                 Set metadata for an image (default [])

    --squash                     Squash newly built layers into a single new layer

    --no-cache                   Do not use cache when building the image
    --pull                       Always attempt to pull a newer version of the image
    --force-rm                   Always remove intermediate containers
    --rm                         Remove intermediate containers after a successful build (default true)

    --compress                   Compress the build context using gzip
-c, --cpu-shares int             CPU shares (relative weight)
    --disable-content-trust      Skip image verification (default true)

    --isolation string           Container isolation technology
-m, --memory string              Memory limit
    --network string             Set the networking mode for the RUN instructions during build (default "default")

    --shm-size string            Size of /dev/shm, default value is 64MB

docker build -t me:imageName -f path/Dockerfile --force-rm=true --no-cache=true .

docker build --build-arg HTTP_PROXY= .
    Set build-time variables (--build-arg)
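--build-arg only sets variables that the Dockerfile declares with ARG; a minimal sketch (image name and proxy URL are made up):

```shell
# Hypothetical Dockerfile:
#   FROM ubuntu
#   ARG HTTP_PROXY
#   RUN echo "building behind: $HTTP_PROXY"
docker build --build-arg HTTP_PROXY=http://proxy.example.com:3128 -t demo/argtest .
```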

########## Env.

# Image names must be lowercase.
docker build -t cron/ubuntu \
            --build-arg http_proxy="" \
            --build-arg https_proxy="" \
            .

Slow "Sending build context to Docker daemon": see issue 19.


I fixed it by moving my Dockerfile and docker-compose.yml into a subfolder called docker and it worked. Apparently docker sends the current folder to the daemon and my folder was 9 gigs.

Best practice: add a .dockerignore like the following:

# Comments are allowed here.
# Use ! to add needed files back, e.g.:
!src
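A .dockerignore along those lines could look like this (the whitelisted entries are illustrative):

```shell
# Write a .dockerignore that excludes everything, then whitelists what the build needs.
cat > .dockerignore <<'EOF'
# Comments are allowed.
*
!src
!Dockerfile
EOF
```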

Cache: see "understanding the docker cache for faster builds".

At each occurrence of a RUN command in the Dockerfile, Docker will create and commit a new layer to the image (located in /var/lib/docker).


docker commit -m "Add python env, to be done." containerName me/imageName:versionNum


SO: copying files.

# From host to container:
docker cp foo.txt mycontainer:/foo.txt
# From container to host:
docker cp mycontainer:/foo.txt foo.txt


docker run --log-opt max-size=10m --log-opt max-file=3 ...

docker logs containerName

--details	false	Show extra details provided to logs
--follow, -f	false	Follow log output
--since	 	Show logs since timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g. 42m for 42 minutes)
--tail	[all]	Number of lines to show from the end of the logs
--timestamps, -t	false	Show timestamps

# Example: retrieve logs before a specific point in time
docker run --name test -d busybox sh -c "while true; do $(echo date); sleep 1; done"
# Tue 14 Nov 2017 16:40:00 CET
docker logs -f --until=2s test
# Tue 14 Nov 2017 16:40:00 CET
# Tue 14 Nov 2017 16:40:01 CET
# Tue 14 Nov 2017 16:40:02 CET


docker login

Error: unauthorized: incorrect username or password

Log in with your Docker ID (shown at the top right on, not your email address.


Return low-level information on Docker objects.

docker inspect imageName/volumeName


docker network ls
# bridge, none, host
# The 'bridge' network represents the docker0 network. It is the default.
# The 'none' network adds a container to a container-specific network stack.
# The 'host' network adds a container on the host’s network stack. If you run a container that runs a web server on port 80 using host networking, the web server is available on port 80 of the host machine.

User-defined networks

Docker does not support automatic service discovery on the default bridge network. If you want containers to be able to resolve IP addresses by container name, you should use user-defined networks instead.

docker network create --driver bridge isolated_nw
docker network inspect isolated_nw
docker network rm isolated_nw

Embedded DNS server

It is used so these containers can resolve container names to IP addresses.

When the container is created, only the embedded DNS server reachable at will be listed in the container’s resolv.conf file.


docker Docs: pull.

Docker Engine uses the :latest tag as a default.

Use the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables for proxy.

# Pull from a different registry. Docker uses https:// by default.
# Registry credentials are managed by "docker login".
docker pull myregistry.local:5000/testing/test-image

docker pull elasticsearch:1.7.6-alpine
docker pull [OPTIONS] NAME[:TAG|@DIGEST]

By default the Docker daemon will pull three layers of an image at a time. If you are on a low bandwidth connection this may cause timeout issues and you may want to lower this via the --max-concurrent-downloads daemon option.
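The same limit can be set persistently in the daemon config; a sketch (the value 1 is illustrative):

```shell
# /etc/docker/daemon.json:
#   { "max-concurrent-downloads": 1 }
# Or pass the flag when starting the daemon manually:
dockerd --max-concurrent-downloads 1
```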

List all tags from Docker Hub

wget -q -O -  | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n'  | awk -F: '{print $3}'


if [ $# -lt 1 ]
then
cat << HELP

dockertags  --  list all tags for a Docker image on a remote registry.

    - list all tags for ubuntu:
       dockertags ubuntu

    - list all php tags containing apache:
       dockertags php apache

HELP
exit
fi

# The image name is the first argument.
image="$1"
tags=`wget -q${image}/tags -O -  | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n'  | awk -F: '{print $3}'`

if [ -n "$2" ]
then
    tags=` echo "${tags}" | grep "$2" `
fi

echo "${tags}"


# One-time run
docker run --rm --name cntName -P frolvlad/alpine-python3 python3 -c 'print("Hello World")'

# Run in background
docker run -dti --restart unless-stopped -P --name cntName imgName cmdStr

# Host Nginx
docker run -d -p 8080:80 -p 8081:443 -v ~/nginx/config/nginx.conf:/etc/nginx/nginx.conf -v ~/nginx/html:/usr/share/nginx/html/ -v ~/nginx/log/:/var/log/nginx/ --name nginx nginx
# Be sure to include daemon off; in your custom configuration to ensure that Nginx stays in the foreground so that Docker can track the process properly (otherwise your container will stop immediately after starting)! See
# The nginx image has: "CMD": ["nginx", "-g", "daemon off;"]. See

# Copy conf from a running container
docker cp some-nginx:/etc/nginx/nginx.conf /some/nginx.conf

########## Link between containers.
# wcfNote: --link is legacy. Use --net instead for dynamic link.
# Reference:

docker network create middle_earth
# 2533f0e8688170dae6a7524d517229b2944b8ae97a90f127811d4026237d9af7
docker run --name shire --detach --net middle_earth httpd:2.4
# 857986782e9cc040c6b421d703e9f1522ed767b57e11c81f8352e07036f4ea16
docker run --name mordor --detach --net middle_earth httpd:2.4
# 5f798add4b8df14c5d8ae7aeecbda85d69eb731a6bbcf01be118c5dced7f1a7c

docker exec -t mordor ping -c 2 shire
docker inspect --format '{{.Config.Hostname}}' shire
# 857986782e9c
docker exec -t mordor ping -c 2 857986782e9c

# Link with containers.
docker run -it --link redis_containerName:redis --name redisclient1 busybox

########## Expose and publish

# Exposing ports is a way of documenting which ports are used, but does not actually map or open any ports. Exposing ports is optional.
# When a port is published, it is mapped to an available high-order port (higher than 30000) on the host machine, unless you specify the port to map to on the host machine at runtime.

# -p: publish
docker run -it -d -p 8080:80 nginx
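With -P, the host ports that Docker picked can be listed afterwards with docker port; a sketch (the container name is made up):

```shell
docker run -d -P --name web nginx
# Show the port mappings Docker picked:
docker port web
```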

########## Env.

docker run -e "http_proxy=" \
                -e "https_proxy=" \
                -e "POSTGRES_IP=" \
                -d cron/ubuntu

########## Volumes.

# --volumes-from="": Mount all volumes from the given container(s)

#---------- Others.

# wcfNote: -i keeps STDIN open (input); -t allocates a pseudo-TTY (output).
docker run -a stdin -a stdout -it ubuntu /bin/bash

# -t is forbidden when the client standard output is redirected or piped, such as in:
# echo test | docker run -i busybox cat

# -p 1234-1236:1234-1236/tcp
# Format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort

docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
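A sketch of what ./env.list might contain (variable names are made up); the format is one VAR=value per line, with # starting a comment:

```shell
cat > env.list <<'EOF'
# Comment lines and blank lines are ignored.
MYVAR2=foo
MYVAR3=bar
EOF
```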

-a, --attach value                Attach to STDIN, STDOUT or STDERR (default [])
-d, --detach                      Run container in background and print container ID
-i, --interactive                 Keep STDIN open even if not attached
-t, --tty                         Allocate a pseudo-TTY

-e, --env value                   Set environment variables (default [])
    --env-file value              Read in a file of environment variables (default [])

    --expose value                Expose a port or a range of ports (default [])
-p, --publish value               Publish a container's port(s) to the host (default [])
-P, --publish-all                 Publish all exposed ports to random ports

    --read-only                   Mount the container's root filesystem as read only
    --restart string              Restart policy to apply when a container exits (default "no". Other options: "always", "on-failure[:max-retries]" - Restart only if the container exits with a non-zero exit status, "unless-stopped")
    --rm                          Automatically remove the container when it exits

-l, --label value                 Set meta data on a container (default [])
    --name string                 Assign a name to the container

    --health-cmd string           Command to run to check health
    --health-interval duration    Time between running the check
    --health-retries int          Consecutive failures needed to report unhealthy
    --health-timeout duration     Maximum time to allow one check to run
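The health flags combine like this; a sketch (image and check command are illustrative):

```shell
docker run -d --name web \
    --health-cmd='curl -f http://localhost/ || exit 1' \
    --health-interval=30s --health-timeout=5s --health-retries=3 \
    nginx
# Check the current health status:
docker inspect --format '{{.State.Health.Status}}' web
```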

Disable auto-restart

You can use the "--restart=unless-stopped" option, or update the restart policy (requires Docker 1.11 or newer): docker update --restart=no my-container. SO: how to disable auto restart.

Run volume container

SO: how to edit code in a docker container in dev.

Structure is:
Container for app data
  docker run -d -v /data --name data
Container for app binaries
  docker run -d --volumes-from data --name app1
Container for editors and utilities for development
  docker run -d --volumes-from data --name editor

Links: see docker links.

Each variable has a unique prefix in the form: <name>_PORT_<port>_<protocol>

List all defined environments:

docker run --rm --name web2 --link db:db training/webapp env

# DB_NAME=/web2/db
# DB_PORT=tcp://
# DB_PORT_5432_TCP=tcp://
# DB_PORT_5432_TCP_PROTO=tcp
# DB_PORT_5432_TCP_PORT=5432


Remove containers.

# -l, --link      Remove the specified link
# -v, --volumes   Remove the volumes associated with the container
# -f, --force     Force-remove a running container (sends SIGKILL first).

# Remove a container and its volumes.
docker rm -v container_name

# Find dangling images which have no association with a tagged image.
docker images -f dangling=true
docker rmi $(docker images -f dangling=true -q)

docker rm $(docker ps -a -f status=exited -q)

docker run --link someothercontainer:alias --name foobar myimage
# I can remove just the link with
docker rm --link alias foobar


Stop and remove containers

SO: single command to stop and remove docker container.

docker rm -f containerName
docker stop CONTAINER_ID | xargs docker rm


Tags are just human-readable aliases for the full image name (d583c3ac45fd...).

# Rename docker images.
docker tag server:latest myname/server:latest
docker tag d583c3ac45fd myname/server:latest


docker volume ls
docker volume rm volume_name1 volume_name2

# Remove dangling volumes
docker volume rm $(docker volume ls -f dangling=true -q)

Base images


docker run -dti -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=test --name odoo_db postgres
# POSTGRES_DB. This optional environment variable can be used to define a different name for the default database that is created when the image is first started. If it is not specified, then the value of POSTGRES_USER will be used.

docker run -dti --name odoo --link odoo_db:db -P odoo

Slim Debian

suite-slim variants: These tags are an experiment in providing a slimmer base (removing some extra files that are normally not necessary within containers, such as man pages and documentation), and are definitely subject to change.


Toolbox mirror:

docker-machine ssh default
sudo sed -i "s|EXTRA_ARGS='|EXTRA_ARGS='--registry-mirror= |g" /var/lib/boot2docker/profile
docker-machine restart default

Linux mirror:

curl -sSL | sh -s

CentOS 7:



Docker for mac vs Docker toolbox

SO. In Docker for Mac, the docker daemon is running inside an Alpine linux vm controlled by a small hypervisor (Xhyve). In Docker Toolbox, the docker daemon runs inside a boot2docker vm controlled by VirtualBox.

Here is the official claim on Docker toolbox: docker blog.

docker Doc. Toolbox includes these Docker tools:

Docker Machine for running docker-machine commands
Docker Engine for running the docker commands
Docker Compose for running the docker-compose commands
Kitematic, the Docker GUI
a shell preconfigured for a Docker command-line environment
Oracle VirtualBox

Docker run cmd always restarting

SO: run shell script on docker from shared volume.

wcfNote: a good method to check why the script causes restarting is:

1. Use plain "sh" as the CMD
2. "docker exec -ti cntName shellScriptName". Run the script and see what causes error.

Usually the error is caused by relative paths that are not resolved inside the container. Changing the relative paths in the scripts to absolute paths resolves the problem.

Error: can't find pipe on Windows/Mac

wcf: Just reboot the computer, with fast startup (fastboot) disabled.

Error: dial tcp: lookup on too many redirects

# SSH into the VM
docker-machine ssh default

# Use Google DNS (or OpenDNS at
sudo su
echo "nameserver" > /etc/resolv.conf
# Alternatively, point it at the VirtualBox DNS server.

File mount does not update with changes from host

Because VirtualBox shared folders don't support inotify: Virtualbox ticket.

wcfSolution: use rsync to sync code into the VirtualBox VM. See this github repo.

Set up proxy: see "using docker behind a proxy".

Simply edit the /etc/default/docker file and set "http_proxy" there, or configure Docker to use the system proxy.

Set up private registry: see "insecure registry".

Deploying a plain HTTP registry

  1. Open the /etc/default/docker file or /etc/sysconfig/docker for editing.
  2. Edit (or add) the DOCKER_OPTS line and add the --insecure-registry flag. For example: DOCKER_OPTS="--insecure-registry"
  3. Close and save the configuration file.
  4. Restart your Docker daemon.
  5. Repeat this configuration on every Engine host that wants to access your registry.
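Steps 2 and 4 could look like this on a systemd host (the registry address is a made-up example):

```shell
# In /etc/default/docker (Debian/Ubuntu) or /etc/sysconfig/docker (CentOS):
#   DOCKER_OPTS="--insecure-registry myregistry.local:5000"
sudo systemctl restart docker
```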

Status restarting forever

SO: docker status restarting for ever.

# To reproduce:
docker run --name=test --restart=always debian /bin/bash

# Solution: You need to run /bin/bash interactively. Otherwise, it will exit and restart forever.
docker run -ti --name=test --restart=always debian /bin/bash