When Solomon Hykes founded dotCloud in 2008, his ambitions were quite different. The company, a Y Combinator Summer 2010 graduate, eventually pivoted in 2013, relaunching as Docker. 

Hykes took to the stage at PyCon in 2013 to give the first public demo of Docker. During that talk, he explained that Docker was simply the underlying technology that powered dotCloud and that the company was pivoting towards an open-source model. 

It didn’t take long for Docker to attract attention from industry heavyweights like IBM, Red Hat, and Microsoft. With a Docker container, you could develop software in a portable environment. 

Docker allowed developers to deploy, replicate, and easily port images to simplify workflows and introduce a level of flexibility that was simply not possible at the time.

Several key components work together in Docker, and the image is just one of them. Alongside it sit the Docker client and the Docker daemon, both of which are required to build and run images. 

But, what is a Docker image? How do you use it? Here’s everything you need to know.

What is a Docker Image?

A Docker image is a read-only template used to run code inside a Docker container. Think of it as a package that bundles the application's code, its dependencies, and the instructions needed to run it into a single artifact.

The Docker image contains all of the tools, packages, libraries, and source code required to run the software. Images are used to create Docker containers, and they typically consist of multiple layers, each one built on top of the previous layer. 

Docker images can be deployed on any host, and they are reusable, allowing developers to take images from one project and use them in another, saving them considerable time and effort. 

Related: What is Docker And How Does it Work

Anatomy of a Docker Image

Each layer corresponds to an instruction in a Dockerfile and is stored in your local image cache, which Docker reuses as the base for subsequent images you build. The main components of a Docker image are listed below. 

  1. Base image

This lets you build a Docker image entirely from scratch and gives you full control over everything that goes into it. You create a base image by starting the Dockerfile with the FROM scratch directive. Examples of base images include Debian, Ubuntu, Red Hat, and Alpine. 
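As a rough sketch, a base image built from scratch can be as small as a single statically compiled binary; the hello binary below is a hypothetical placeholder for whatever executable you provide:

# Filename: Dockerfile (illustrative sketch)
FROM scratch
# Copy a statically compiled binary into the otherwise empty image
COPY hello /
# Run it when a container starts
CMD ["/hello"]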

  2. Parent image 

They’re the building blocks of most Docker images. The FROM directive in the Dockerfile names the parent image, and every instruction that follows builds on top of it. In most cases, Dockerfiles are built from a parent image rather than from scratch. 
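For example, a typical application Dockerfile starts from a parent image rather than from scratch; the tag below is only illustrative:

# Use an official Node.js image as the parent image
FROM node:14-alpine
# Every instruction that follows adds a layer on top of that parent
RUN apk add --no-cache curl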

  3. Layers

Also called “image layers”, these are the intermediate images that make up a Docker image. Each layer caches one step of the build, which makes images faster to rebuild and easier to reuse. 

Layers are stacked hierarchically, and each layer depends on the one beneath it. That’s why it’s best to keep the layers most likely to change as high in the stack as possible, i.e. as late in the Dockerfile as possible: when a layer changes, Docker rebuilds that layer and every layer built on top of it. 
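The sketch below, assuming a typical Node.js project, shows how this ordering plays out: if only your source code changes, the earlier layers are reused from the cache and only the final step onwards is rebuilt.

# Filename: Dockerfile
# The parent image rarely changes, so this layer stays cached
FROM node:14-alpine
WORKDIR /usr/src/app
# Dependency manifests change only when dependencies change, so these
# two layers are usually reused from the cache
COPY package*.json ./
RUN npm install
# Source code changes often, so this layer and everything after it
# gets rebuilt on most builds
COPY . .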

  4. Container layer

This is the writable layer that Docker adds on top of the image when a container starts. It stores any changes you make while the container is running, while the underlying image layers stay read-only. 

  5. Docker manifest

This describes a Docker image in JSON format and comes in two forms: a single-image manifest and a manifest list. The single-image manifest records the size, layers, operating system (OS), and architecture of one image. 

A manifest list – also called a “multi-arch image” or “fat manifest” – groups the manifests of several images under a single name. Once the list is created, you can use that group name instead of an individual image name, and docker pull or docker run will fetch the image that matches your platform.  
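For instance, you can inspect an image’s manifest straight from the command line; the image name here is just an example, and on older Docker versions the manifest command may need to be enabled as an experimental CLI feature:

$ docker manifest inspect node:14-alpine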

Docker Images vs. Containers

Before we proceed further, it’s important to understand the differences between a Docker image and a container. A container is a self-contained space that lets you run an application. 

Docker containers are isolated, so they don’t affect the host system, and the host environment can’t interfere with the software running inside them. A Docker image, on the other hand, supplies the code and filesystem that run inside a container. 

A Docker image can exist outside of a container, but a container will need to execute an image for it to have something to “contain”. Therefore, a container is fully dependent on a Docker image to execute an application.

Think of an image as a template: it can exist on its own, but it doesn’t do anything until it’s run. A container is essentially a running instance of an image. When you start a container, Docker adds a writable layer on top of the image, and that container layer is where your changes go (the image itself stays read-only). 

This also means that from a single base image you can create many different images and containers. Over time, you end up with images that share most of their layers, with each iteration differing only slightly from the previous one. 
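To see this in practice, you can start several containers from the same image, and each one gets its own writable container layer; the names and image below are just examples:

$ docker run -d --name web1 nginx:alpine
$ docker run -d --name web2 nginx:alpine
# Both containers show up here, sharing the same read-only image
$ docker ps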

How to Build Docker Images

Primarily, Dockerfiles are used to build Docker images. Dockerfiles contain the commands you need to build and customize Docker images. Every instruction you execute with Dockerfiles creates an intermediate layer. 

To create a Docker image, your first step would be to create a Dockerfile. Docker uses the Dockerfile to build images, so all instructions must be stored there. It’s a simple text file where you can add all the commands needed to create an image. 

To have something to containerize, a basic Node.js app is a good place to start. The Express application generator is a CLI (command-line interface) tool that scaffolds a basic app skeleton for you.

If you’re on Linux, open a terminal and run the following commands to install the generator, scaffold the app, and start it:

$ npm install express-generator -g
$ express docker-app
$ cd docker-app
$ npm install
$ npm start

Once the app is running, stop it and make sure you’re in the project’s root directory (docker-app in this case). There, create a plain text file named “Dockerfile”; Docker looks for that name by default, although you can use a different one and point to it with the -f flag. 

Now, add the following instructions to the file:

# Filename: Dockerfile
FROM node:14-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]

Then, from the same directory, build and tag the image:

$ docker build -t docker-app .

Docker will now build the image. To check whether the image was created, you can run the docker images command.

Once you have a Dockerfile, you don’t need to rebuild the image by hand: rerunning docker build reuses cached layers for any steps that haven’t changed. The table below contains the basic Dockerfile instructions you need to build images.

Command | Function
FROM | Specifies the base (parent) image.
RUN | Executes a shell command while the image is being built.
COPY | Copies files from a specified location into the image.
ENV | Defines environment variables.
EXPOSE | Defines the port used to access the containerized application.
LABEL | Adds descriptive metadata to the image.
CMD | Sets the default command a container runs when it starts.

To build Docker images from Dockerfiles, follow these steps:  

  • Create your Dockerfile: create a new directory for the project and add the Dockerfile (plus your application files) to it. 
  • Run docker build to build your Docker image. The build command reads the instructions in the Dockerfile in that directory and produces an image. 
  • After creating the Docker image, use the docker run command to start a container from it, as shown in the sketch after this list. 
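Put together, and using a hypothetical image name of my-image, the whole workflow looks roughly like this:

$ mkdir my-project && cd my-project
# ...write your Dockerfile and application files here...
$ docker build -t my-image .
$ docker run my-image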

Alternatively, you can use the interactive method to manually build Docker images from preexisting images. To do this, follow these steps: 

  • Open a terminal session after installing Docker. 
  • Run docker run -it image_name:tag_name to start an interactive shell session in a container based on that image. One caveat: if you omit the tag name, Docker automatically pulls the most recent version of the image, and if the image isn’t available locally, Docker pulls it from Docker Hub. Once you’ve made your changes inside the container, you can save them as a new image with docker commit, as in the sketch below.
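Here is a minimal sketch of that flow, assuming an Ubuntu parent image and a hypothetical image name:

$ docker run -it ubuntu:20.04 /bin/bash
# ...inside the container: install packages, make changes, then exit...
$ docker ps -a    # find the stopped container's ID
$ docker commit <container_id> my-custom-image:latest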

The interactive method is fast and easy to use, especially if you’re new to Docker, but it tends to leave you with unnecessary layers and unoptimized images. 

In contrast, the Dockerfile approach is more flexible and integrates easily into a continuous integration/continuous delivery (CI/CD) pipeline. It’s the go-to method if you’re looking to build enterprise-grade containers. 

You also have the option of exporting an image to a tar archive with the docker save command. To import that archive on another machine, you can then use the docker load command.
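For example, assuming the docker-app image built earlier:

$ docker save -o docker-app.tar docker-app
# ...copy docker-app.tar to the other machine, then:
$ docker load -i docker-app.tar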

And, if you want to run your Docker image, you can use the following command, mapping the app’s port so you can reach it from the host:

$ docker run -it -p 3000:3000 docker-app

Replace docker-app with your own image’s name if you tagged it differently.

How to Use Docker Images

Docker containers are the main use of Docker images. Images contain everything you need to create, deploy, and run your applications in a container. By extension, containers let you run applications, especially microservice-based apps, efficiently across different environments.

Improved CI/CD efficiency is another major use of Docker images. CI/CD is an automation practice for integrating software changes: building, testing, and deployment are all automated from a single repository. 

Docker improves this process through caching, which stores every Docker image layer and reuses unchanged layers instead of rebuilding them. Lightweight images with short build times make it fast and easy to deploy applications in different environments. 
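In a CI/CD pipeline, the build and publish steps often boil down to a couple of commands; the registry name and tag below are placeholders:

$ docker build -t registry.example.com/docker-app:1.0.0 .
$ docker push registry.example.com/docker-app:1.0.0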

While you can build a Docker image from scratch, as shown above, most developers prefer to pull existing images from a registry. Docker Hub hosts an extensive range of ready-made images. 
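Pulling a ready-made image takes a single command; the image below is just an example:

$ docker pull nginx:alpine
# The pulled image now shows up in your local image list
$ docker images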

You can then use a pulled image as the starting point for your own images. But it’s important to understand that there’s also the parent image, which is different from the base image. 

A base image is an empty image, built with FROM scratch, that lets you construct an image from the ground up if you want. Parent images are pre-built images that already offer some core functionality. 

For instance, a basic Linux system image or a WordPress image would be considered a parent image. The images you find on Docker Hub can all serve as parent images. 

How to Maximize Docker Image Security

The images you use to build a container obviously play an important role in the overall safety of the container itself. If an image is compromised, every container built from it will be too. 

It’s important to take certain security precautions when using Docker images. Here are some key points to keep in mind.

Only Use Verified and Signed Images

There are numerous third-party image repositories available, but ideally, you should steer clear of them. Instead, always use verified images from the Docker Hub to maintain project integrity.

Furthermore, it’s important to use only signed images to mitigate your risk: if someone has tampered with an image, you’ll know right away. 
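One way to enforce this is Docker Content Trust, which makes the Docker client refuse unsigned images; the example image below is only illustrative:

$ export DOCKER_CONTENT_TRUST=1
# With content trust enabled, the pull fails unless the image is signed
$ docker pull node:14-alpine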

Use Minimal Images with No Unnecessary Libraries

Avoid images that ship with layers, packages, and system libraries you don’t actually need. Sticking to minimal images reduces your overall attack surface.
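In practice, this often means choosing a slim variant of a parent image when it covers your needs, for example:

# A slim, Alpine-based variant keeps the image small
FROM node:14-alpine
# The full node:14 image pulls in far more packages than most apps need:
# FROM node:14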

Define a Non-Root User

It’s important to specify a USER in each Dockerfile. If you don’t, the container’s processes run with root privileges, which exposes the container to severe security issues and could eventually let an attacker break out to the host machine.
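A minimal sketch of defining a non-root user in an Alpine-based Dockerfile; the user and group names are arbitrary:

FROM node:14-alpine
# Create an unprivileged user and group
RUN addgroup -S app && adduser -S app -G app
# Subsequent instructions and the running container use this user
USER app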

Regularly Check Images for Vulnerabilities

It’s important to note that vulnerabilities can be introduced as you add new layers. Even if you checked and verified the image originally, make it a habit to scan it regularly so you can identify and fix issues early.
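Depending on your Docker version and tooling, a scan can be run from the command line; the commands below are examples that may require Docker’s scan plugin or a third-party scanner such as Trivy to be installed:

$ docker scan docker-app
$ trivy image docker-app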

After Action Report – Working with Docker Images

Docker images are great because they are so lightweight and flexible. Interested in learning more? Join the conversation by commenting below, or send us a Tweet about how Docker has simplified software development for you!