Overview of Docker and Containers

In this video, I’m going to give you a brief overview of Docker. We’ll look at what Docker is and why we use containers. We’ll look at hosts and clients, discuss images and containers, networking, Dockerfiles, and registries. So what is Docker? You’re probably aware of virtualisation technology, which lets us take a physical machine, which may be very underutilised, and create a number of virtual machines to make much better use of our hardware. Each virtual machine has its own operating system, hard drive, memory, CPU, and networking. From the perspective of the virtual machine, it’s a full server. It doesn’t know that it’s just virtual.
Similarly, we could take a server, be that a physical server or a virtual machine, and split it into a number of containers. Again, each container has its own operating system, hard drive, memory, CPU, networking, registry, and so on. And, again, from the perspective of the container, it’s a complete server. Usually, containers are stripped-down versions of an operating system with just the bare minimum needed to run applications. So why would we want containers? In order to answer that question, we’ll first have to take a brief look at what Docker images are. Docker images are almost like virtual hard drives for virtual machines. All Docker images are layered. The first layer is typically the operating system layer.
We can then build on top of that layer. For example, we can install the Nginx application on top of the operating system layer. We can then install a website on top of Nginx. If we have another image, we can use the same base operating system layer and install MongoDB. However, the way that Docker works, we don’t actually have two different OS layers. Docker is able to use the same operating system layer for both images, so we’re saving on space. From the perspective of each image, it has an operating system layer. It doesn’t know or care that it’s shared with other images. Now that we know a little bit about images, we can answer the question of why containers.
The primary reason is density. Firstly, each image and container takes up a much smaller footprint than a virtual machine. And as we’ve just seen, containers also share images. This provides us with huge efficiencies in terms of storage. Also, deployment becomes very consistent. Because Docker images are shared, if we build on a particular image layer, we can ship the diff over to another developer, and as long as they put that diff onto the same base image, they’ll have exactly the same application. Let’s take a look at Docker hosts and clients. A Docker host is a machine that is running the Docker engine, or Docker daemon. Typically, these have been Linux machines, since the Linux kernel supports containerisation.
Recently, however, Windows Server 2016 Technical Preview 3 was released, which also supports containerisation for Windows containers. The host then also contains a number of images. Those images can be spun up to run as containers. The client interacts with the host using a REST API under the hood. Typically, clients use a command line interface, which wraps the REST calls. The most common one is the Docker command line interface, or the Docker client. If you’re using a Windows Server Docker host, you can also use PowerShell to interact with the Docker engine. The Docker commands let the client tell the Docker engine what to do. For example, pull an image from a repository, list images, start containers, or list running containers.
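The commands just mentioned look like this in the Docker client. This is a minimal sketch of a session against a running Docker host; the image and container names are illustrative:

```shell
# Pull the official nginx image from the public registry onto this host
docker pull nginx

# List the images stored on this Docker host
docker images

# Spin up a container from the nginx image, detached, named "web"
docker run -d --name web nginx

# List the containers that are currently running
docker ps
```

Each command here is the client wrapping a REST call to the Docker engine, which is why the same commands work whether the engine is local or on a remote host.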
So what’s the difference between an image and a container? Well, images are like virtual hard drives in the virtual machine world. Images are layered. We can start with a base image, for example an operating system layer, and then install an application, like Nginx. We can then save that as a new image. We can then make some configuration changes and, again, save it as a new image. If we install website A, we can then save that as an image with four layers. Now, if we want to run website A, we can create a container based on that image running the website. If we want another instance, we just spin up another container from the same image.
If we’re happy with the base three layers, we may decide to install website B. Instead of starting from scratch again, we can just deploy the bottom three-layered image and then configure website B on top of that base image. Let’s take a brief look at networking in Docker. Imagine a Docker host with two containers, Nginx and MongoDB. Docker networking allows the host to create a pipe between a host port and a port exposed on the container. For example, we may expose port 5001 on the Nginx container and port 80 on the host. Docker then creates a pipe that takes any traffic on port 80 on the host and forwards it to port 5001 on the Nginx container.
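That pipe is set up when the container is started. A sketch of the example above, using the port numbers from the transcript (the image name is illustrative):

```shell
# Publish host port 80 to container port 5001: any traffic arriving
# on port 80 on the host is forwarded to port 5001 in the container.
# The -p flag takes hostPort:containerPort.
docker run -d -p 80:5001 --name web nginx
```

With this running, a request to `http://<host>:80` lands on port 5001 inside the Nginx container.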
Containers can also talk to each other through links without having to expose ports. There are two ways to build images. We can run a container, modify it, and then save it back as an image. Or we can create what’s called a Dockerfile. This is just a text file with instructions that tell the host how to build an image. It’s used with the docker build command. And since it’s a text file, it’s usually stored in source control with your application. Here’s an example of a Dockerfile. Dockerfiles always start with a FROM instruction, telling the host what the base image for this particular image is going to be. There are two parts to the FROM.
There’s the identifier, in this case, microsoft/, and then the version, for example, 1.0.0-beta6. This is actually the version of the official image on Ubuntu 14.04 in the Docker registry. Any line prefixed with a hash is a comment. We can provide metadata, such as the MAINTAINER instruction. We can set environment values using the ENV instruction. We can copy files from the host machine onto the image using the ADD instruction. Here, we’re copying the entire directory that the Dockerfile is in, and all its subdirectories, into a directory called app in the image. The EXPOSE instruction tells Docker to expose this particular port, in this case 5001, on the image when it is running as a container.
WORKDIR changes directories in the image. The CMD instruction tells Docker which command to run when this image is instantiated as a container. In this case, we’re running the kestrel command in the app directory. Finally, let’s have a look at registries. Registries are repositories of images. We can push and pull images to and from these repositories. The public Docker registry is called Docker Hub, and it contains many official images. For example, here is the Nginx official image, which is created and maintained by the authors of the Nginx application. It’s also possible to create private registries. In this video, we looked at what Docker is and what containers are.
We looked at hosts and clients, spoke about images and containers, and then briefly looked at networking, Dockerfiles, and registries.
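Putting the instructions from the transcript together, a Dockerfile along those lines might look like the sketch below. The base image tag, maintainer details, environment variable, and start command are illustrative stand-ins, not the exact file shown in the video:

```dockerfile
# Comments are lines prefixed with a hash.
# FROM names the base image: an identifier plus a version tag.
FROM ubuntu:14.04

# MAINTAINER provides metadata about who maintains this image.
MAINTAINER Example Author <author@example.com>

# ENV sets an environment value inside the image.
ENV APP_ENV production

# ADD copies the directory the Dockerfile is in, and all its
# subdirectories, into a directory called app in the image.
ADD . /app

# EXPOSE this port on the image when it runs as a container.
EXPOSE 5001

# WORKDIR changes directories in the image.
WORKDIR /app

# CMD is the command run when the image is instantiated as a
# container (in the video, this step ran the kestrel command).
CMD ["./start.sh"]
```

Building it with `docker build -t mysite .` produces a layered image, one layer per instruction, which can then be run or pushed to a registry.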

Colin from Northwest Cadence returns to briefly introduce us to Docker. In the video, we will look at the following:

  • What is Docker?
  • Why containers?
  • Hosts and clients
  • Images and containers
  • Networking
  • Dockerfile
  • Registries

In the next step, we will learn how to build, push and run Docker images using VSTS.

This article is from the free online course Microsoft Future Ready: Fundamentals of DevOps and Azure Pipeline, created by FutureLearn.
