Saturday, April 18, 2020

Docker and Docker-Compose - Knowledge Base

Docker Concepts

Docker 20.10.14


  • Image and Container

    • For a simple analogy, think of a Docker image as the recipe for a cake, and a container as a cake you baked from it.
    • A Docker image is a blueprint or a template for creating containers, and a Docker container is a running instance of that image.

  • Docker Image

    • It is a set of instructions that defines what should run inside a container.
      • A Docker image is made up of a series of read-only layers. Each layer is created by executing one or more instructions from the Dockerfile.
      • When you build an image, Docker reads the Dockerfile and executes the instructions one by one. Each instruction creates a new layer on top of the previous one.
      • Once all the instructions have been executed, the final image is created. This image includes all of the layers that were created during the build process, but it does not include the Dockerfile itself.
    • A Docker image typically specifies:
      • Which external image to use as the basis, unless the image is written from scratch;
      • Commands to run when the container starts;
      • How to set up the file system within the container;
      • Additional instructions, such as which ports to open on the container, and how to import data from the host system.
    • In most cases, the information described above is written in a Dockerfile. There are alternative, more complex methods to build a Docker image without a Dockerfile, such as Ansible playbooks.
    • This image provides a blueprint to deploy an executable container.
    • Distroless Container Images contain only your application and its runtime dependencies. They do not contain package managers, shells, or any other programs you would expect to find in a standard Linux distribution.
      • Pros:
        • Distroless images are lighter, which means faster pulling and pushing.
        • Security also improves, because a smaller image reduces your attack surface. You shouldn't have tools like sudo or ping in your container if you are not going to use them.
      • Cons:
        • If you want to debug your application inside the container you would benefit from a shell and some other installed tools, but distroless images don't include any of them.
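
A minimal multi-stage Dockerfile sketch that produces a distroless image. The Go application, its paths, and the `gcr.io/distroless/static` base are illustrative assumptions, not something this document prescribes:

```dockerfile
# Build stage: the full toolchain exists only here.
FROM golang:1.19 AS build
WORKDIR /src
COPY . .
# Static binary, so it runs without a libc in the final image.
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: no shell, no package manager, just the binary.
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Note that because the final image has no shell, `docker exec -it <container> sh` will fail; debugging usually means temporarily rebuilding from a regular base image or attaching a debug sidecar.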

  • Docker Container

    • A Docker image provides the blueprint from which executable containers are deployed. Multiple containers can spin up from one image.
    • Containers are the way to execute that package of instructions in a runtime environment.
    • Containers run until they crash or are told to stop. A container does not change the image it is based on. If you update an image, already-created or running containers are not affected; you need to stop, remove, and recreate them from the updated image.
    • The docker run command first creates a writeable container layer over the specified image, and then starts it using the specified command. That is, `docker run` is equivalent to `docker create` and then `docker start`.
    • A stopped container can be restarted with all its previous changes intact using `docker start`.
    • Docker Container Lifecycle: created -> running -> (paused) -> stopped/exited -> removed.

    • Docker provides restart policies to control whether your containers start automatically when they exit, or when Docker restarts. See the "Automatically start containers" section below for more information.

  • Docker Registry

    • The Docker Registry is a server application that stores and lets you distribute Docker images.
    • A Docker registry is organized into repositories, where a repository holds all the versions of a specific image. The registry allows docker clients to pull images locally, as well as push new images to the registry.


Docker - Building Images

  • Build an image from the Dockerfile in the current directory (.) and tag the image:
    • docker build -f Dockerfile -t <image-name>:<tag> .
  • Build a multi-architecture image from the Dockerfile in the current directory, tag the images and push them to the DockerHub:
    • Preparing to build a multi-architecture image:
      • export DOCKER_CLI_EXPERIMENTAL=enabled
      • docker buildx create --name multiarch-builder
      • docker buildx use multiarch-builder
      • docker buildx ls
    • Then run the buildx command:
      • docker buildx build -t marcusveloso/aws-kubectl:latest --platform linux/amd64,linux/arm64 [--push] -f Dockerfile .
    • Clean up:
      • docker buildx use default
      • docker buildx stop multiarch-builder
      • docker buildx rm multiarch-builder
  • Using host environment variable values to set ARGs and/or ENVs:
    • docker build --build-arg GITHUB_TOKEN=${HOST_ENV_VAR_NAME} -t test:v1 .
    • OR, without the environment variable name when it has the same name as the ARG/ENV variable:
      • docker build --build-arg GITHUB_TOKEN -t test:v1 .
    • Dockerfile example with ARGs and ENVs. A Dockerfile can use ARGs alone, but an ENV that takes its value from the build must be paired with an ARG:
      • ARG GITHUB_TOKEN
      • ENV GITHUB_TOKEN $GITHUB_TOKEN
    • ARG is only available during the build of a Docker image (RUN etc), not after the image is created and containers are started from it (ENTRYPOINT, CMD). You can use ARG values to set ENV values to work around that.
    • ENV values are available to containers, and also to RUN instructions during the build, starting from the line where they are introduced.
    • Defining default values for host environment variables:
      • ARG GITHUB_TOKEN=default-value
      • ENV GITHUB_TOKEN $GITHUB_TOKEN
      • However, we can override these values during the build process, like this:
        • docker build --build-arg GITHUB_TOKEN=another-value -t test:v1 .
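
Putting the pieces above together in one sketch (the token name is a placeholder; don't bake real secrets this way, since ARG and ENV values are visible via `docker history` and `docker inspect`):

```dockerfile
# Build-time default; can be overridden with --build-arg GITHUB_TOKEN=...
ARG GITHUB_TOKEN=default-value
# Promote the build arg to an ENV so it also exists at runtime.
ENV GITHUB_TOKEN=$GITHUB_TOKEN
# Both ARG and ENV are visible to RUN during the build...
RUN echo "building with token: $GITHUB_TOKEN"
# ...but only the ENV survives into the running container.
CMD ["sh", "-c", "echo runtime token: $GITHUB_TOKEN"]
```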
  • The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the  -p  flag on  docker run  to publish and map one or more ports, or the  -P  flag to publish all exposed ports and map them to high-order host ports.
    • Exposing multiple ports in the same Dockerfile:
      • EXPOSE 80
      • EXPOSE 8080
  • How to prevent dialog during apt-get install:
    • ENV DEBIAN_FRONTEND noninteractive
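
A sketch combining the two points above, with nginx as a hypothetical package. Using ARG rather than ENV for DEBIAN_FRONTEND keeps the setting build-only, so it does not leak into the final image's environment; EXPOSE also accepts several ports on one line:

```dockerfile
FROM ubuntu:22.04
# Suppress interactive prompts from apt during the build only.
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y --no-install-recommends nginx \
    && rm -rf /var/lib/apt/lists/*
# Documents the listening ports; publishing still requires -p/-P at run time.
EXPOSE 80 8080
CMD ["nginx", "-g", "daemon off;"]
```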

Docker - Creating an image from a container

  • First, start a new container:
    • docker run --name <base-container-name> --entrypoint bash -it <image-name>:<tag>
      • Customize the container by installing software, creating files, etc...
  • Create a new image from a container’s changes:
    • docker commit <base-container-name> <docker-registry>/<image-name>:<tag>
    • docker push <docker-registry>/<image-name>:<tag>
  • Create (only once) a new container based on the new image created:
    • docker create --name <new-container-name> -p 8080:8080 -t -i -v /Users/marcus/shared/:/opt/shared <docker-registry>/<image-name>:<tag>
  • Start and access the new container:
    • docker start -i -a <new-container-name>
  • Start and access a new container using an image for a specific platform:
    • docker run --platform linux/arm64 --entrypoint sh -it alpine:latest

Docker Container - Assigning a Port Mapping to a Running Container

  • Port mapping is used to access services running inside a Docker container. All requests made to the host port will be redirected to the Docker container. Sometimes, we may start a container without mapping a port that we need later on. In this case, we need to modify the existing docker container "in flight".
    • 1 - Get the docker container ID
      • docker inspect --format="{{.Id}}" <container-name>
    • 2 - Stop the Docker container
      • docker stop <container-name>
    • 3 - Stop the Docker service
      • sudo systemctl stop docker
      • OR
      • sudo snap stop docker
    • 4 - Go to the folder where docker saved the config files for that particular container
      • cd /var/lib/docker/containers/<container-id>
      • OR
      • cd /var/snap/docker/common/var-lib-docker/containers/<container-id>
      • OR find the folder location if it is not in one of the previous paths
      • sudo find / -name <container-id>
    • 5 - Update the hostconfig.json file
      • { ... "PortBindings": {"80/tcp":[{"HostIp":"","HostPort":"8080"}]}, ... }
    • 6 - Update the config.v2.json file
      • { ... "ExposedPorts": {"80/tcp":{}}, ... }
    • 7 - Start the Docker service
      • sudo systemctl start docker
      • OR
      • sudo snap start docker
    • 8 - Start the Docker container
      • docker start <container-name>

Docker - Networks

  • Docker creates virtual networks which let the containers talk to each other. An application running in one Docker container can create a network connection to a port on another container.
  • The simplest network in Docker is the  bridge  network, which allows simple container-to-container communication by IP address between containers on the same host, and it is created by default.
  • Creating a user-defined network will allow containers to communicate with each other by their container names  or  network aliases. In a user-defined bridge network, we can be more explicit about who joins the network.
  • When using user-defined networks, we need to explicitly connect a Docker container to the created network by using the  --network <network-name>  option when running/creating the container.
  • We can define a network alias for each container using the  --network-alias <alias>  option when running/creating the container. 
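
The options above combined into a short command sketch. The network name, container names, and images are arbitrary examples; this assumes a running Docker daemon:

```shell
# Create a user-defined bridge network.
docker network create app-net

# Start a container on it; it is reachable by its --name and its alias.
docker run -d --name db --network app-net --network-alias database redis:7

# A second container on the same network can resolve it by alias.
docker run -it --rm --network app-net alpine:3.18 ping -c 1 database
```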

Docker Management

  • Docker version:
    • docker -v 
    • docker version
  • Docker system-wide information:
    • docker info
  • Docker disk space usage:
    • docker system df
  • Managing Docker Service:
    • sudo systemctl stop docker.service
    • sudo systemctl restart docker.service
    • sudo systemctl start docker.service
  • List the Docker networks:
    • docker network ls
  • Create a Docker network:
    • docker network create <network-name>
  • Removing a network:
    • docker network ls
    • docker network inspect <network-name>
    • docker network disconnect -f <network-id> <endpoint-name>
    • docker network rm <network-id>
    • sudo service docker restart
  • Listing and Removing dangling images:
    • docker image ls -f dangling=true
    • docker image prune
      • WARNING! This will remove all dangling images.
      • Are you sure you want to continue? [y/N]
      • Total reclaimed space: X GB
  • Listing and Removing dangling volumes:
    • docker volume ls -f dangling=true
    • docker volume prune
      • WARNING! This will remove all local volumes not used by at least one container.
      • Are you sure you want to continue? [y/N]
      • Total reclaimed space: X GB
  • Removing stopped containers:
    • docker container prune
      • WARNING! This will remove all stopped containers.
      • Are you sure you want to continue? [y/N]
      • Total reclaimed space: X GB
  • Removing build caches:
    • docker builder prune
      • WARNING! This will remove all dangling build cache. Are you sure you want to continue? [y/N]
      • Total reclaimed space: X GB
  • Cleaning Everything at Once. Removing all stopped containers, all networks not used by at least one container, all dangling images, and all dangling build caches
    • docker system prune
      • WARNING! This will remove:
      •   - all stopped containers
      •   - all networks not used by at least one container
      •   - all dangling images
      •   - all dangling build cache
      • Are you sure you want to continue? [y/N]
      • Total reclaimed space: X GB
  • Removing an Image:
    • docker rmi <image-id>
    • docker rmi <namespace/image-name>:<tag>
  • Tag an image with a new name:tag:
    • docker tag localhost:5000/<image-name>:<tag> cloud.canister.io:5000/<namespace>/<image-name>:<tag>
  • Push an image to a Registry:
    • docker push cloud.canister.io:5000/<namespace>/<image-name>:<tag>
  • Pull an image from a Registry:
    • docker pull <container-registry>/<namespace>/<image-name>:<tag>
    • The docker client pulls the image variant that matches the platform it is running on (e.g.: amd64, arm64, etc.). The command below pulls an image for a specified platform, regardless of the platform where the docker client is running:
      • docker pull --platform amd64 <container-registry>/<namespace>/<image-name>:<tag>
  • Show image SHA ID:
    • docker images --digests
    • docker inspect --format='{{index .RepoDigests 0}}' <image-name>
  • To enable Experimental features in the Docker CLI (AKA Edge version):
    • sudo nano /etc/docker/daemon.json
      • { "experimental": true }
  • To enable Insecure Registry:
    • sudo nano /etc/docker/daemon.json
      • { "insecure-registries" : ["myregistrydomain.com:5000"] }
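
The two settings above live in the same file, so a combined /etc/docker/daemon.json would look like this (the registry host is a placeholder; restart the Docker daemon after editing):

```json
{
  "experimental": true,
  "insecure-registries": ["myregistrydomain.com:5000"]
}
```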


Docker Registry API V2

  • Listing repositories. Retrieve a sorted JSON list of repositories available in the registry:
    • https://<docker-registry.url>/v2/_catalog
  • Listing image tags:
    • https://<docker-registry.url>/v2/<image-name>/tags/list
  • Pulling an image manifest:
    • https://<docker-registry-url>/v2/<image-name>/manifests/<tag-or-digest>
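
The endpoints above can be queried with curl. The registry host, image name, and tag are placeholders, and a private registry will additionally need `-u user:pass` or a Bearer token:

```shell
# List repositories in the registry.
curl -s https://docker-registry.example.com/v2/_catalog

# List tags for one image.
curl -s https://docker-registry.example.com/v2/my-image/tags/list

# Fetch a manifest; the Accept header selects the v2 manifest format.
curl -s -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  https://docker-registry.example.com/v2/my-image/manifests/latest
```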


Docker - Running Containers

  • List running containers:
    • docker ps
    • docker ps -a (view a list of all containers)
  • Expose a port inside the container:
    • docker container run --name <container-name> -p <host-port>:<container-port> <image:tag>
  • Run a container from the Alpine version 3.9 image, name it “test” and expose port 5000 externally, mapped to port 80 inside the container:
    • docker container run --name test -p 5000:80 alpine:3.9
  • Run the latest Mosquitto broker detached, name it "broker" and publish container port 1883 on host port 18983:
    • docker run --name broker -p 18983:1883 -d eclipse-mosquitto
  • Create and Start a Container:
    • docker container create --name=mysql-1 -p 3306:3306 -e MYSQL_RANDOM_ROOT_PASSWORD=yes mysql:5.7.26
    • docker container start mysql-1
  • To run a docker image forcing an entrypoint:
    • docker run --entrypoint "/bin/bash" -it <image-name>
    • AND connect to a specific docker network:
    • docker run --entrypoint bash -it --network <network-name> <image-name>
    • AND map a local folder as a container folder:
    • docker run --entrypoint bash -it -v /local-folder:/container-folder <image-name>
    • AND remove the container after the execution:
    • docker run --rm --entrypoint bash -it <image-name>
    • AND pass a parameter to the container:
    • docker run --name mongodb-4.4 -p 27044:27017 -v /Users/marcus/mongodata-v44:/data/db -d mongo:4.4 --replSet rs1 
  • Start containers automatically:
    • Start a container and configure it with a restart policy:
      • docker run -d --restart <restart-policy> <image-name>
    • Show the container restart policy:
      • docker inspect <container-name>
      • docker inspect -f "{{ .HostConfig.RestartPolicy }}" <container-name>
    • Change the restart policy for an already running container:
      • docker update --restart <restart-policy> <container-name>
        • eg:
          • docker update --restart always jenkins-docker
    • Restart policy options:
      • no
        • Do not automatically restart the container. (the default)
      • on-failure[:max-retries]
        • Restart the container if it exits due to an error, which manifests as a non-zero exit code. Optionally, limit the number of times the Docker daemon attempts to restart the container.
      • always
        • Always restart the container if it stops. If it is manually stopped, it is restarted only when Docker daemon restarts.
      • unless-stopped
        • Similar to always, except that when the container is stopped (manually or otherwise), it is not restarted even after Docker daemon restarts.
  • Methods to Keep the Container Running using  docker run  command:
    • docker run -d ubuntu sleep infinity
    • docker run -d ubuntu tail -f /dev/null
    • docker run -d -t ubuntu
  • Method to Keep the Container Running using CMD in Dockerfile:
    • CMD ["tail", "-f", "/dev/null"]
  • Interacting with the container when it's running:
    • docker exec -it <container-name> bash
    • docker exec -it <container-name> "/bin/bash"
    • docker exec -it <container-name> "/bin/sh"
  • Copy files/folders between a container and the local filesystem
    • docker cp ./some_file <container-name>:/some_folder
    • docker cp <container-name>:/var/logs/ /tmp/app_logs
  • Print the container’s log:
    • docker container logs --tail 100 <container-name>
    • docker logs -n 100 <container-name>
  • Inspect a Container:
    • docker inspect my-container
  • Show ENTRYPOINT and CMD commands in the image:
    • docker inspect -f '{{.Config.Entrypoint}}' <image:tag>
    • docker inspect -f '{{.Config.Cmd}}' <image:tag>
  • View the packaged-based Software Bill Of Materials (SBOM) for an image:
    • Install the docker-sbom plugin:
      • curl -sSfL https://raw.githubusercontent.com/docker/sbom-cli-plugin/main/install.sh | sh -s --
    • docker sbom <image:tag>
    • docker sbom <image:tag> --output <sbom-text-file>


Docker Compose


  • Start docker-compose services:
    • docker-compose up
    • Run containers in the background. Detached mode:
    • docker-compose up -d
    • Rebuilding the images before starting the containers:
    • docker-compose up --build --force-recreate
  • Stop docker-compose services:
    • docker-compose stop [<SERVICE> ...]
  • Start specific services:
    • docker-compose start [<SERVICE> ...]
  • Docker compose up vs start:
    • docker-compose up is used when you want to create and start all the services in your Docker Compose configuration from scratch or when you want to rebuild the images and recreate the containers if there have been any changes.
    • docker-compose start is useful when you have already created the containers using docker-compose up or a similar command, and you want to start them again after they have been stopped 
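
A minimal hypothetical docker-compose.yml that the commands in this section can be run against; the service names, images, ports, and volume are examples only:

```yaml
version: "3.8"
services:
  web:
    build: .            # rebuilt by `docker-compose build` / `up --build`
    ports:
      - "8080:80"
    depends_on:
      - db
    restart: unless-stopped
  db:
    image: mysql:5.7
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: "yes"
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data:
```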
  • Pause the running containers of a service. They can be unpaused with docker-compose unpause. A paused container does not release its allocated resources:
    • docker-compose pause [<SERVICE> ...]
  • Resume the paused containers of a service:
    • docker-compose unpause [<SERVICE> ...]
  • Stop running containers without removing them. Any resources allocated to it such as memory are released. They can be started again with docker-compose start:
    • docker-compose stop [<SERVICE> ...]
  • Force running containers to stop by sending a SIGKILL signal. They can be started again just like containers that were properly stopped:
    • docker-compose kill [<SERVICE> ...]
  • Removes stopped service containers:
    • docker-compose rm <SERVICE>
    • Don't ask to confirm removal:
    • docker-compose rm -f <SERVICE>
    • Stop the containers, if required, before removing:
    • docker-compose rm -s <SERVICE>
  • List containers:
    • docker-compose ps
  • Logs - View output from containers:
    • docker-compose logs <OPTIONS> [<SERVICE>...]
  • Rebuilding the image without starting the container:
    • docker-compose build [--no-cache] [<SERVICE> ...]
  • Stop and remove containers, networks, images, and volumes:
    • docker-compose down
  • Validate and view the Compose file:
    • docker-compose config
  • Set the number of containers for a service:
    • docker-compose up --scale <SERVICE>=<NUM>
  • Run arbitrary commands in your services. docker-compose exec allocates a TTY by default, so the  -it  options used with the  docker run  command are not needed:
    • docker-compose exec <SERVICE> sh
    • docker-compose exec <SERVICE> bash
  • How to update one image and its container (ex. `api_test`):
    • Stop all services:
      • docker-compose stop
    • Remove the container:
      • docker rm -f <launch-folder>_<container-name>_1  (e.g. iotee_api_test_1)
    • List all images:
      • docker images
    • Remove the image you want to update:
      • docker rmi -f <launch-folder>_<image-name>  (e.g. api_test:latest)
    • Restart all services:
      • docker-compose up -d

Docker Configuration files

  • File used to store docker login credentials:
    • ~/.docker/config.json
  • File used to configure docker daemon:
    • /etc/docker/daemon.json
  • File used to store the container configuration:
    • /var/lib/docker/containers/<container-id>/hostconfig.json
    • /var/snap/docker/common/var-lib-docker/containers/<container-id>/hostconfig.json

How to open/edit/bind ports to running Docker Containers

  • Stop the running Container:
    • docker stop <container-id>
  • Open the Docker containers directory:
    • cd /var/lib/docker/containers/<container-id>
    • OR:
    • cd /var/snap/docker/common/var-lib-docker/containers/<container-id>
  • Edit file hostconfig.json:
    • nano hostconfig.json
    • Locate and edit PortBindings with the new ports you want to edit, open or delete.
      • "PortBindings":{"50000/tcp":[{"HostIp":"","HostPort":"50000"}],"8080/tcp":[{"HostIp":"","HostPort":"80"}]}
  • Restart the Docker:
    • sudo systemctl restart docker
    • OR:
    • sudo snap restart docker
  • Start the container:
    • docker start <container-id>

Docker Registry and Repository

  • Free private docker registry and repository:
    • TreeScale
      • Unlimited private and public repositories
      • 500 Pull actions/month 
      • 50 GB Registry space
    • Canister
      • 20 private repositories
      • Unlimited public repositories
    • Docker Hub
      • 1 private repository
      • Unlimited public repositories

Troubleshooting

  • Error: "At least one invalid signature was encountered."
    • Cause:
      • Usually related to insufficient disk space to store the images.
    • Solutions:
      • Increase the Docker image disk space using Docker Preferences -> Resources -> Disk image size
      • OR docker image prune
      • OR docker container prune
      • OR docker builder prune
      • OR sudo apt clean
