The IT industry is moving more and more toward running programs in containers rather than virtual machines. Container technology is regarded as one of the fastest-growing technologies in the recent history of the software industry. At the core of this shift is Docker, a platform that enables users to quickly package, deploy, and manage applications inside containers. In other words, it is an open-source project that automates the deployment of applications inside software containers.
This article discusses the Dockerfile, a tool used frequently in DevOps engineering. DevOps, which combines software development and IT operations, is essentially a set of practices that supports the systems development life cycle and enables continuous delivery of high-quality software. Continuous delivery is typically handled by DevOps engineers in businesses, and Docker is one of the tools they rely on for it. To work with Docker, we use a Dockerfile. If you’re not familiar with Dockerfiles, don’t worry; this post explains them simply, and the examples will help you get the most out of it. Now let’s get into the article.
By using containers, Docker makes it simpler to build, distribute, and run applications. Containers let developers package an application with all of its necessary components, such as libraries and other dependencies, and ship it as a single unit. This way, the developer can be confident that the program will run on any other Linux machine, regardless of any custom settings there that differ from those on the machine used to write and test the program.
Docker and Dockerfiles
What is Docker?
Docker is a free, open-source platform for building, deploying, and managing containerized applications. It provides the ecosystem for building and running containers, and the Docker software itself serves as the container runtime. The components of the Docker ecosystem include the Client, Server, Machine, Image, Hub, and Compose.
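A few quick commands give you a feel for some of these pieces; this is only an illustrative check and assumes Docker is already installed:

docker version          # reports both the Client and the Server (the Docker daemon)
docker images           # lists the images available locally
docker compose version  # shows the Compose plugin, if it is installed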
Why Dockerfile and how does it work?
A Dockerfile is a plain text file that contains the instructions for building an image. Without a Dockerfile, we would have to use the command line interface to build the image and run the container step by step each time we needed it. A Dockerfile lets us state up front what must be retrieved, what commands must be run while the image is being built and when a container starts from it, and what configuration must be provided.
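For example, a minimal Dockerfile might look like the sketch below; the base image and package are chosen only for illustration:

# Base image to start from
FROM alpine:3.14
# Command executed while the image is being built
RUN apk add --no-cache curl
# Command executed when a container starts from the image
CMD ["sh"]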
Why use Dockerfile?
Use a Dockerfile to keep superfluous content out of an image and produce clean images. It also lets you repeat the same steps any number of times to create and recreate images.
[Please note: rebuilding the image from the exact same Dockerfile can still produce different results. For example, if the Dockerfile asks for the most recent version of Python, one user might get Python 3.1 while another user, building later, gets Python 3.2, and the changed dependencies may break the system. The safest course of action is to pin exact versions in the Dockerfile and update them deliberately from time to time.]
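To make this concrete, compare these two alternative FROM lines; the tags are only illustrative:

FROM python:latest    # may resolve to a different Python version on the next build
FROM python:3.8.13    # pinned, so every build gets the same interpreter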
How it works
The Dockerfile follows a simple skeleton: a sequence of instructions read from top to bottom. By default, Docker looks for a file named exactly Dockerfile, so naming it that way lets Docker recognize it automatically. If we need to create a file with a different name (or any other file), we must point Docker at it explicitly when building.
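On the command line this looks roughly like the following; the image and file names are only examples:

docker build -t my-app .                  # uses the file named Dockerfile in the current directory
docker build -f MyDockerfile -t my-app .  # points Docker at a file with a different name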
Dockerfile Make-Up
Here I’m going to demonstrate the fundamental instructions in a Dockerfile. The following are frequently used, and an annotated example follows the list.
- FROM – sets the base image; options include Redis, MySQL, Ubuntu, etc.
- LABEL – adds metadata to the image, such as the author or an email address.
- RUN – executes a command while the image is being built, for example rm -rf or apk add --update redis. If you need to copy a file to another location inside the image, use the cp command within RUN, for example: RUN apk add --no-cache tzdata && cp /usr/share/zoneinfo/Asia/Colombo /etc/localtime && echo "Asia/Colombo" > /etc/timezone && apk del tzdata
- COPY – copies files from the host system into the image, as COPY <src> <dest>, where src is the source path on the host and dest is the destination path inside the container.
- ADD – similar to COPY, but it can also fetch tar, zip, or web files, extract archives, and then copy the result into our image.
- WORKDIR – sets the directory in which subsequent instructions run. When files are added from the host machine into the container, the working directory is the default destination path.
- ENTRYPOINT – the command that runs when the container starts, such as the bash shell in an Ubuntu container or the server process in the `httpd` container.
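Here is a short, illustrative Dockerfile that ties these instructions together; the base image, label values, and paths are made up for the example:

# Base image
FROM ubuntu:20.04
# Metadata about the image
LABEL author="Jane Doe" email="jane@example.com"
# Command executed while the image is built
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
# Directory in which later instructions (and the container) run
WORKDIR /var/www/app
# Copy everything from the build context into the working directory
COPY . .
# Command that runs when the container starts
ENTRYPOINT ["bash"]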
The best way to get a better grasp of Docker and Dockerfiles is plenty of hands-on practice. Let us work through three stacks (Java, Python, and Go) to understand them:
Docker for Java
You need a directory to organize your files. The command to create a directory is as follows.
`$ mkdir <directory>`.
Change into this directory and create your simple Java app. After creating your Java file, you need to write a Dockerfile, which gives Docker its instructions. The Dockerfile has no file extension, so save it simply under the name Dockerfile.
FROM java:8
COPY . /var/www/java
WORKDIR /var/www/java
RUN javac Hello.java
CMD ["java", "Hello"]
As a matter of convention, write all instructions in uppercase. Place the file in the newly created directory. Your Java app and Dockerfile now sit side by side inside that directory.
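With Hello.java and the Dockerfile in the same directory, a typical build-and-run sequence looks like the following; the image name is only an example:

docker build -t java-hello .
docker run java-hello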
Docker for Python
First, set up the Dockerfile with a series of instructions for creating the Docker image. You’ll make use of the Python environment variable `PYTHONUNBUFFERED`. Setting it to a non-empty string (or running Python with the `-u` command-line option) makes Python send its output directly to the terminal instead of buffering it. This is helpful for real-time log messages, and it avoids problems such as the application crashing without any pertinent information because the message is “trapped” in a buffer.
Make a directory for the project and enter it by typing `cd <directory>`. To build a virtual environment, execute the commands listed below. This isolates the Python project’s environment so there won’t be interference with, or from, other Python projects running in the local environment, and installed dependencies won’t conflict with other Python projects.
python -m venv <directory>
source <directory>/bin/activate
Create a new file named Dockerfile with the following content and place it in the project directory:
FROM python:3.8-slim-buster
ENV PYTHONUNBUFFERED=1
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
# Start the Django development server (assumes a standard Django project with manage.py)
CMD ["python", "manage.py", "runserver", "0.0.0.0:5000"]
This Dockerfile pulls the base image from `python:3.8-slim-buster` and guarantees that Python output goes directly to the terminal. It sets the working directory, copies the requirements.txt package list into the image and installs it, copies the rest of the Python application into the working directory, and exposes port 5000 for the server.
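Assuming a requirements.txt and the application code are in place, you can build the image and map the exposed port to the host; the image name here is illustrative:

docker build -t python-docker-app .
docker run -p 5000:5000 python-docker-app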
Docker for Go
Let’s first create the `sample.go` file along with a Dockerfile, and initialize the module with the command `go mod init`. This is how we organize our project:
simple-docker-app
|- sample.go
|- Dockerfile
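From inside the project directory, `go mod init` creates the go.mod file; the module name below is just an example:

cd simple-docker-app
go mod init simple-docker-app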
In the simplest terms, an image is your app’s definition plus everything required to run the program. You must include the build steps in the configuration file. Dockerfile is the common and preferred file name, but you can use whatever name you like; in my opinion, though, adhering to the convention is always preferable.
Write the following code in the Dockerfile you created.
# Build stage
# First pull the Golang image
FROM golang:1.17-alpine as build-env

# Set environment variables
ENV APP_NAME sample-dockerize-app
ENV CMD_PATH sample.go

# Copy application data into the image
COPY . $GOPATH/src/$APP_NAME
WORKDIR $GOPATH/src/$APP_NAME

# Build the application
RUN CGO_ENABLED=0 go build -v -o /$APP_NAME $GOPATH/src/$APP_NAME/$CMD_PATH

# Run stage
FROM alpine:3.14

# Set environment variable
ENV APP_NAME sample-dockerize-app

# Copy only the required data into this image
COPY --from=build-env /$APP_NAME .

# Expose application port
EXPOSE 8081

# Start app
CMD ./$APP_NAME
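You can then build and run the two-stage image; the image name is only an example, and the app is assumed to listen on port 8081, the port the Dockerfile exposes:

docker build -t sample-dockerize-app .
docker run -p 8081:8081 sample-dockerize-app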
Conclusion
After working through these examples, you should feel confident writing a Dockerfile and customizing its instructions to build and run an image. You have also seen that we can get the job done with a lightweight image rather than shipping a full application environment, and that a Dockerfile lets us build an application any number of times. If you delve deeply into the Dockerfile, you can configure it to build large applications as well.