Basics to Kick Off Your Docker Journey: The Docker I Understood

Anuj A. Pachawadkar
11 min read · May 21, 2021

A couple of years ago, when I had 3–4 years of experience as a Software Engineer, mainly as a MEAN stack developer, I was searching for better opportunities in the market, as everyone does. In a couple of interviews I was asked questions about Docker. I was surprised and wondered why I was being asked about Docker and Kubernetes repeatedly.

My previous organisation was a startup where we would deploy our web applications on Amazon EC2 instances, mainly of the Ubuntu flavour. I was very used to this process and had gained a certain amount of expertise in it, so I was comfortable deploying our applications, including various servers, database instances, load balancers, etc. Focusing only on programming and the basic engineering part, I fortunately got a chance at a reputed organisation as a Senior Software Engineer.

When the actual work started and progressed, I ran into a problem in which I had to incorporate a third-party open-source project into ours. That project was provided in the form of a Docker image and related files. Docker was not ready to stop chasing me! Somehow I managed to escape it with the help of our DevOps guys. I am not a fan of running away from new technologies, but I feared losing focus on my current technology stack and getting pulled into DevOps.

After a few months, we happened to install Docker on our local Windows machines, as management had decided to use it in our project. So this time, after a year, I had only two choices: leave the organisation or learn Docker.

I went for the latter. I can surely say that after learning Docker, I didn't lose track of my current technology stack, which was my biggest fear. Docker is easy to learn and understand. So the intention behind all this is to share my little knowledge about Docker, the way I learned it, and to help people like me who haven't started learning Docker yet for such small reasons.

So let's have a look at some of the basics of Docker in simple language. We won't go into a deep understanding of every concept; instead, we'll try to keep it as simple as possible.

Docker vs. Virtual Machines:

Docker and Virtual Machines are both virtualisation technologies. They mainly differ in which part of the Operating System they virtualise.

Docker virtualises the Application Layer of an Operating System, whereas a VM virtualises the Application Layer as well as the Kernel Layer.

Operating System Layers

Docker:

Size: The size of a Docker image is smaller than that of a VM image. Docker images typically come in MBs.

Speed: A Docker container starts and executes faster than a VM.

Compatibility: Not all images are compatible with every host OS; for example, Linux-based images need a Linux kernel to run on.

Virtual Machines:

Size: VM images are comparatively larger than Docker images, as they also encapsulate/virtualise the OS kernel. VM images come in GBs.

Speed: VMs boot and execute more slowly than Docker containers.

Compatibility: A VM can run on any host OS, unlike Docker images.

[Note: Skipping the installation process of Docker, as you will find it on Docker's official website: https://docs.docker.com/get-docker]

What is an Image?

An image is a packaging technology that wraps your application with all its requirements, metadata, and installation code in a single artefact.

It contains all the necessary instructions to create a container that can run on the Docker platform. An image is portable and movable across teams.

It also makes the development and deployment of applications easier and more efficient.

What is a Container?

A container is a running instance of an image. We can have multiple containers of the same image running on the same host machine.

Each running container can be seen as a micro-service.

Each running container has its own process, memory, file system, network, and so on.

Difference between Docker Image and Docker Container:

Docker Image:

  • A Docker image is a static entity/asset/artefact.
  • It is movable across networks.
  • It includes configuration and installation/setup information.
  • Images are stored on the system's disk.

Docker Container:

  • A container is a dynamic entity/a running image; in other words, a container is the running environment of an image.
  • It is not movable across networks.
  • It creates its own environment/wrapper around the application, with the required installations and configurations.
  • A container runs in the system's memory.
  • A container has a virtual file system.
  • A container binds the application's port to one of the ports of the host.

Where do containers live?

Containers reside in a container repository in the form of images.

  • Public Repository, e.g. hub.docker.com
  • Self-hosted repository
  • Private/Third Party Repository — e.g. Amazon ECR (sometimes called registry)

What is a container made up of?

A container is made up of multiple layers of stacked images. Most of the time, a Linux-based image such as Alpine sits at the bottom of the stack because of its small size. At the top of the stack is the actual application image. There are also some intermediate images sandwiched between the Linux image and the application image.

Stacked Images For Any Application Image

World Before Containers:

Before containers, we would install all the required configurations and a bunch of software on our system by downloading their executables/binaries. The problem with doing so was dealing with their versions and updates, and maintaining consistency across the team.

For example, if a NodeJS application requires PostgreSQL v9.6, Redis v6.2, etc., then you have to download their executables and install them on your operating system.

Moreover, these installations differ according to your operating system, and so across your team members. You have to announce all the dependencies with their exact versions to your team members.

- Deployment Process:

  • The developer creates a build of the application along with a readme file containing installation and deployment information.
  • Similarly, the developer also creates a readme file for the installation of other software the application needs in order to run.
  • These artefacts are handed over to the DevOps team to set up on the deployment server.
  • Then the DevOps team has to follow the same process the developer followed, but according to the deployment server's system specifications, which sometimes causes conflicts. There can also be misunderstandings between the Developer and DevOps teams related to installations, versions, etc.

World After Containers:

With containers, all the required software and configurations are packaged and installed inside an isolated environment with their specific versions. These packages can run on a variety of operating systems.

Installing your application requires only one command and the Docker environment/runtime (also called the Docker engine/daemon).

With containers, two different versions of the same application can run on the same physical machine.

- Deployment Process:

  • For deployment, developers and operations people work together to package the application.
  • It doesn't require environment configuration on the server, apart from the Docker runtime.
  • You have to install the Docker runtime on the server, which is probably a one-time effort.
  • Then you just have to run the container built on the developer's machine by pulling its image from a Docker registry.
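The deployment steps above boil down to very little work on the server. A minimal sketch, assuming a hypothetical registry host and image name:

```shell
# pull the image that was built on the developer's machine
docker pull registry.example.com/myteam/myapp:1.0

# run it; apart from the Docker runtime, nothing else needs to be installed
docker run -d -p 80:3000 registry.example.com/myteam/myapp:1.0
```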

Docker Basic Commands:

- docker pull <image_name>:<version>
- It looks for the image on your local system; if it is not found, it downloads/pulls it from Docker Hub.
- If you don't specify a version, the latest version is pulled.

- docker start <container_name>
- It starts an existing (stopped) container.
- docker restart <container_name>
- It restarts a running container.
[Note: A restart preserves the container and its configuration; it is running docker run again that creates a brand-new container.]
- docker run <image_name>:<version>
- It looks for the image on your local system; if it is not found, it downloads/pulls it from Docker Hub and runs it as a new container.
- docker run -d <image_name>
- It runs the container in detached mode (in the background).
- docker run -d --name <custom_name> <image_name>
- It runs a container with a custom name.
- docker ps
- It lists all running containers on your system.
- docker ps -a
- Lists all containers including running and stopped both.
- docker ps -a | grep <container_name>
- Lists all containers whose names contain "container_name".
- docker stop <container_name>
- It stops a running container.
- docker images
- It lists all images present on your system, most recently created first.
- It provides list with Repository, Tag, Image Id, Created, Size attributes, where:
- REPOSITORY - Image name
- TAG - Version of the image
- IMAGE ID - Hexadecimal number
- CREATED - Creation time
- SIZE - Size of the image
- docker rm <container_id>
- It removes a container if it is not running.
- You have to stop a running container before removing it.
- docker rmi <image_id>
- It removes an image
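Putting a few of the above commands together, a typical first session might look like this (the container name is my own choice):

```shell
docker pull redis:6.2                     # download the image
docker run -d --name my_redis redis:6.2   # start a detached container from it
docker ps                                 # verify it is running
docker stop my_redis                      # stop the container
docker rm my_redis                        # remove the stopped container
docker rmi redis:6.2                      # remove the image itself
```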

Command For Debugging Containers:

docker logs <container_name/container_id>
- This command can be used to see container logs.
docker logs <container_name/container_id> | tail
- To see last part of the container logs.
docker logs <container_name/container_id> -f
- To stream the container logs.
docker exec -it <container_name/container_id> bash
- To get a terminal inside the running container. (If bash is not available in the image, e.g. in Alpine-based images, use sh instead.)

[Note: The above are very general commands with general flags. There are many other commands, in addition and as alternatives to the above; you can find them on Docker's official website: https://docs.docker.com/engine/reference/commandline/docker/]

Container Port VS Host Port:

  • Using containers, we can run two instances of the same application on one host machine.
  • Inside a container, an application is exposed on a specific port.
  • A container's port needs to be bound to a port of the host machine, so that the application running within the container is accessible outside the Docker environment.
  • The following command can be used for port binding:
- docker run -p <host_port>:<container_port> <image_name>
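For instance, to expose a containerised Redis (which listens on 6379 inside the container) on host port 7000, a sketch could be (the container name is my own):

```shell
docker run -d -p 7000:6379 --name redis_on_7000 redis
# the host can now reach Redis at localhost:7000,
# while inside the container it still listens on 6379
```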

Docker Networks:

  • A Docker network is a virtual network created by Docker so that multiple containers on the same Docker host can communicate with each other using just their names instead of addresses like http://localhost:3*** etc.
  • On the default bridge network, containers can reach each other only by IP address, not by container name; name-based communication works on user-defined networks.
  • A Docker network can be created using the following command:
docker network create <your_network_name>
OR
docker network create --driver=bridge <custom_network_name>
  • To run a container within the created network, you can use the following command:
docker run -p <host_port>:<container_port> --name <custom_container_name> -e <environment_variable>=<variable_value> --net <network_name> <image_name>
  • When containers run without a network specified, they get connected to the default 'bridge' network.
  • To list the containers running on a specific network, the following command is used:
docker network inspect <network_name/network_id>
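A quick sketch of these network commands together; the network and container names are my own, and my_app_image is a hypothetical application image:

```shell
docker network create my_network
docker run -d --name db --net my_network mongo
docker run -d --name app --net my_network -p 3000:3000 my_app_image
# inside 'app', the database is reachable simply as db:27017
docker network inspect my_network   # shows both containers attached to it
```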

[Note: We will not get into much of the docker networks in this article]

Docker Volumes for Data Persistence:

  • If you have stateful applications like databases, you use Docker volumes for data persistence.
  • For example, if a database application is running in a container and that container is removed or recreated, the data stored in the container's virtual file system is lost.
  • With a Docker volume, when you write data to the virtual file system, that data is also persisted in a mounted directory of the host's (physical) file system.
  • So a Docker volume works as a plug between the virtual file system of Docker and the physical file system of the host.
  • So when the container is recreated, it gets the data back from the physical file system of the host.
  • Below are 3 types of/3 ways to create Docker volumes, with three commands:

- Host Volumes

  • In this type, you explicitly state which directory path of the host the container directory is to be mapped to.
docker run -v <host_directory_path>:<container_directory_path>

- Anonymous Volumes

  • In this type, you do not state the host directory explicitly; instead docker itself takes care of choosing the directory for container data.
docker run -v <container_directory_path>

- Named Volumes

  • In this type, you give a name to the volume under which the container's data is stored on the host.
  • The actual path of that directory is again selected/chosen by Docker itself.
docker run -v <name_of_volume>:<container_directory_path>
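A quick sketch of the three forms side by side, using MySQL's data directory (/var/lib/mysql) as the container path; the host path and volume name are my own choices:

```shell
# host volume: explicit host path
docker run -v /home/user/dbdata:/var/lib/mysql mysql

# anonymous volume: docker picks the host location itself
docker run -v /var/lib/mysql mysql

# named volume: referenced by name, location managed by docker
docker run -v db_data:/var/lib/mysql mysql
```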

- Docker Volumes in docker-compose:
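In docker-compose, named volumes are declared at the top level and mounted per service. A minimal sketch (the service and volume names are my own; /data/db is MongoDB's data directory):

```yaml
version: '3'
services:
  mongodb:
    image: mongo
    volumes:
      - mongo_data:/data/db
volumes:
  mongo_data:
```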

- Docker Volume locations on host machine according to OS:

  • Windows — C:\ProgramData\docker\volumes
  • Linux — /var/lib/docker/volumes
  • Mac — /var/lib/docker/volumes (inside the Linux VM that Docker for Mac runs)
  • For Mac, Docker creates a Linux virtual machine and stores the Docker data there, so if you want to inspect the data at that location, you will have to execute the following command:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
  • After executing the above command, you actually get a terminal of the Linux VM, where you can find the Docker data and the list of volumes at the "/var/lib/docker/volumes" path.
  • Each volume has a hash value as its directory name, with a _data folder inside it to hold the data.

Using Docker Compose:

  • Let's take a look at some commands as an example to see how the docker-compose utility is used:
1. docker run -p 6379:6379 --name my_redis --net my_network redis
2. docker run -p 27017:27017 --name mongoDB -e MONGO_INITDB_ROOT_USERNAME=admin -e MONGO_INITDB_ROOT_PASSWORD=password --net my_network mongo
  • Now let's take a look at the equivalent yaml file/its contents for the docker-compose utility:
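A sketch of what that equivalent compose file could contain; note that docker-compose creates a network for the listed services automatically, so my_network does not need to be declared by hand:

```yaml
version: '3'
services:
  my_redis:
    image: redis
    ports:
      - "6379:6379"
  mongoDB:
    image: mongo
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
```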
  • You have to save these contents in a yaml file, e.g. example.yaml/example.yml.
  • To run the above file, you have to go to the location of the file in a terminal (PowerShell for Windows users) and type the following command:
docker-compose -f example.yaml up
  • To stop the containers you can use command as follows:
docker-compose -f example.yaml down
  • This command stops and removes the above containers.
  • In addition, it also removes the network that was created for them.

Creating Dockerfile:

What is a Dockerfile:

  • It is a blueprint for creating/building an image.
  • A Docker image is always based on another (base) image.
  • It becomes part of the application code.
  • You can see a sample Dockerfile below.
  • You have to save this file with the name "Dockerfile", without any extension.
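A minimal sketch of such a Dockerfile for a NodeJS app like the ones discussed earlier; the file names and port are assumptions:

```dockerfile
FROM node:14-alpine          # small Linux base image at the bottom of the stack
WORKDIR /app
COPY package.json .
RUN npm install              # install dependencies inside the image
COPY . .
EXPOSE 3000                  # the port the app listens on
CMD ["node", "server.js"]    # command that starts the application
```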
  • In order to create an image from the above file, we will have to execute the following command:
docker build -t <image_name>:<tag_version> <path_to_directory_containing_Dockerfile>
  • Example:
docker build -t myapp:1.0.0 .

[Note: Here '.' represents that the Dockerfile is in the current location/working directory. Also note that image names have to be lowercase.]

  • Any change in the application requires you to rebuild the image.
  • You can run the image created above using the following command:
docker run myapp:1.0.0

Docker Registry:

A registry is a storage system for Docker images with different versions/tags. It also provides access to them.

Creating Own Docker Registry:

To create our own Docker registry, we have to have Docker and docker-compose installed on the machine on which we intend to create the registry.

Then we will have to pull the image of a registry server from Docker Hub (https://hub.docker.com/_/registry), and for viewing those images on a frontend, you will have to pull the frontend image from Docker Hub (https://hub.docker.com/r/konradkleine/docker-registry-frontend/).

Let's have a look at a sample docker-compose file for the registry server.
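A sketch of such a compose file, wiring the two images above together; the host ports are my own choices:

```yaml
version: '3'
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
  frontend:
    image: konradkleine/docker-registry-frontend:v2
    ports:
      - "8080:80"
    environment:
      - ENV_DOCKER_REGISTRY_HOST=registry
      - ENV_DOCKER_REGISTRY_PORT=5000
```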

How do we push a Docker image to the created registry server?

  • We can use the following command to push our image to our registry server:
docker push <host_name>:<port>/<username>/<image_name>:<tag/version>
  • Example
docker push abc.local.com:5000/rahul/mongodb-myimage:1.0
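Note that before pushing, the local image has to be tagged with the registry's address. Sticking with the example names above (the local image name is assumed):

```shell
# tag the local image so docker knows which registry to push it to
docker tag mongodb-myimage:1.0 abc.local.com:5000/rahul/mongodb-myimage:1.0
docker push abc.local.com:5000/rahul/mongodb-myimage:1.0
```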

Sample Project Deployment Overview with Docker:

In a typical project CI/CD scenario with Docker, if we are creating an application that depends on Redis, the following steps will take place:

  • We will download a public image of Redis from the public Docker Hub.
  • Then we will create and configure our application to work with Redis.
  • We push our application's code to a Version Control System (e.g. Git).
  • Then this code will be pulled by a Jenkins server, a Docker image will be built from it, and the image will be deployed to our private Docker repository.
  • Then the server (Development/Testing/Staging) will pull this image from our private repository, pull the Redis image from the public repository, and start the application.

Thanks for reading! I hope this article helps you start learning Docker and increases your interest in it. Your feedback is appreciated. All the best for your Docker journey, keep exploring and happy learning!
