Related articles:

DevOps Explained to the Layman
Containers: plug-and-play code in DevOps world - part 1

Quick Intro

In part 1, I explained containers at a very high level and mentioned that Docker was the most popular container platform.

I also added that containers are tiny isolated environments within the same Linux host, sharing the same Linux kernel, and that they are so lightweight they pack only the libraries and dependencies needed to get your application running.

This is good because even very different applications, each requiring a specific Linux distro or set of libraries, can run side by side on the same host without any conflict.

If you're a Linux guy like me, you'd probably want to know that the most popular container platform (Docker) uses dockerd as its front-line daemon:

root@albuquerque-docker:~# ps aux | grep dockerd
root 753 0.1 0.4 1865588 74692 ? Ssl Mar19 4:39 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

This is what we're going to do here:

  • Running my hello world app in the traditional way (just a simple hello world!)
  • Containerising my hello world app! (how to pack your application into a Docker image!)
  • Quick note about image layers (a brief note about Docker image layers!)
  • Running our containerised app (we run the same hello world app, but now within a Docker container)
  • Uploading my app to an online registry and retrieving it (we upload our app to Docker Hub so we can pull it from anywhere)
  • How do we manage multiple container images? (what do we do if our application is so big that we've got lots of containers?)

Running my hello world app in the traditional way

There is no mystery to running an application (or a component of an application) in the traditional way.

We've got a physical or virtual machine with an OS installed and we just run it:

root@albuquerque-docker:~# cat hello.py
#!/usr/bin/python3
print('hello, Rodrigo!')
root@albuquerque-docker:~# ./hello.py
hello, Rodrigo!

Containerising my hello world app!

Here I'm going to show you how you can containerise your application and it's best if you follow along with me.

First, install Docker.

Once it's installed, the command you'll use always starts with docker <something>, OK?

In the DevOps world, things are usually done in a declarative manner, i.e. you tell Docker what you want done and you don't worry much about the how.

With that in mind, we describe the application we'd like Docker to pack (pack = create an image) in its default configuration file, named Dockerfile:

root@albuquerque-docker:~# cat Dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get upgrade -y && apt-get install python3 -y
ADD hello.py /
CMD [ "./hello.py" ]
root@albuquerque-docker:~#

FROM: tells Docker what your base image is (don't worry, it automatically downloads the image from Docker Hub if it's not available locally)

RUN: any command typed here is executed at build time and becomes part of the base image

ADD: copies the source file (hello.py) to a directory you pick inside your container (/ in this case)

CMD: any command typed here is executed when the container starts

So, in the above configuration we're telling Docker to build an image that does the following:

  • Use Ubuntu Linux as our base image (this is not the whole OS, just the bare minimum)
  • Update and upgrade all installed packages and install python3
  • Add our hello.py script from the current directory to the / directory inside the container
  • Run it
  • Exit, because the only task it had (running our script) has been completed
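For reference, here is the same Dockerfile again as a commented sketch, with two optional tweaks that are my suggestions rather than part of the original: pinning the base image tag, and using COPY instead of ADD for a plain local file.

```dockerfile
# Pin a specific tag instead of "latest" so builds are reproducible
# (assumption: ubuntu:18.04 suits your app)
FROM ubuntu:18.04
# Executed at build time; the result becomes part of the image
RUN apt-get update && apt-get upgrade -y && apt-get install -y python3
# COPY is generally preferred over ADD for plain local files
# (ADD additionally handles URLs and tar extraction)
COPY hello.py /
# Runs when the container starts; hello.py must be executable
# and carry its shebang line for this exec form to work
CMD [ "./hello.py" ]
```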

Now we execute this command to build the image based on our Dockerfile:

Note: notice I didn't specify Dockerfile in my command below. That's because it's the default filename, so I just omitted it.

root@albuquerque-docker:~# docker build -t hello-world-rodrigo .
Sending build context to Docker daemon    607MB
Step 1/4 : FROM ubuntu:latest
 ---> 94e814e2efa8
Step 2/4 : RUN apt-get update && apt-get upgrade -y && apt-get install python3 -y
 ---> Running in a63919569292
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
.
. <omitted for brevity>
.
Reading state information...
Calculating upgrade...
The following packages will be upgraded:
  apt libapt-pkg5.0 libseccomp2 libsystemd0 libudev1
5 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 2268 kB of archives.
After this operation, 15.4 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libudev1 amd64 237-3ubuntu10.15 [54.2 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libapt-pkg5.0 amd64 1.6.10 [805 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libseccomp2 amd64 2.3.1-2.1ubuntu4.1 [39.1 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 apt amd64 1.6.10 [1165 kB]
Get:5 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libsystemd0 amd64 237-3ubuntu10.15 [205 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 2268 kB in 3s (862 kB/s)
(Reading database ... 4039 files and directories currently installed.)
Preparing to unpack .../libudev1_237-3ubuntu10.15_amd64.deb ...
.
. <omitted for brevity>
.
Suggested packages:
  python3-doc python3-tk python3-venv python3.6-venv python3.6-doc binutils
  binfmt-support readline-doc
The following NEW packages will be installed:
  file libexpat1 libmagic-mgc libmagic1 libmpdec2 libpython3-stdlib
  libpython3.6-minimal libpython3.6-stdlib libreadline7 libsqlite3-0 libssl1.1
  mime-support python3 python3-minimal python3.6 python3.6-minimal
  readline-common xz-utils
0 upgraded, 18 newly installed, 0 to remove and 0 not upgraded.
Need to get 6477 kB of archives.
After this operation, 33.5 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libssl1.1 amd64 1.1.0g-2ubuntu4.3 [1130 kB]
.
. <omitted for brevity>
.
Setting up libpython3-stdlib:amd64 (3.6.7-1~18.04) ...
Setting up python3 (3.6.7-1~18.04) ...
running python rtupdate hooks for python3.6...
running python post-rtupdate hooks for python3.6...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Removing intermediate container a63919569292
 ---> 6d564b46521d
Step 3/4 : ADD hello.py /
 ---> a936bffc4f17
Step 4/4 : CMD [ "./hello.py" ]
 ---> Running in bea77d51f830
Removing intermediate container bea77d51f830
 ---> e6e4f99ed9f3
Successfully built e6e4f99ed9f3
Successfully tagged hello-world-rodrigo:latest

That's it. You've now packed your application into a Docker image!

We can now list our images to confirm our image is there:

root@albuquerque-docker:~# docker images
REPOSITORY                     TAG                 IMAGE ID            CREATED             SIZE
hello-world-rodrigo            latest              e6e4f99ed9f3        2 minutes ago      155MB
ubuntu                         latest              94e814e2efa8        2 minutes ago      88.9MB
root@albuquerque-docker:~#

Note that the Ubuntu image was also downloaded, as it is the base image on which our app runs.

Quick note about image layers

Notice that Docker uses layers to be more efficient, and they're reused among containers on the same host:

root@albuquerque-docker:~# docker inspect hello-world-rodrigo | grep Layers -A 8
"Layers": [
"sha256:762d8e1a60542b83df67c13ec0d75517e5104dee84d8aa7fe5401113f89854d9",
"sha256:e45cfbc98a505924878945fdb23138b8be5d2fbe8836c6a5ab1ac31afd28aa69",
"sha256:d60e01b37e74f12aa90456c74e161f3a3e7c690b056c2974407c9e1f4c51d25b",
"sha256:b57c79f4a9f3f7e87b38c17ab61a55428d3391e417acaa5f2f761c0e7e3af409",
"sha256:51bedea20e25171f7a6fb32fdba24cce322be0d1a68eab7e149f5a7ee320290d",
"sha256:b4cfcee2534584d181cbedbf25a5e9daa742a6306c207aec31fc3a8197606565"
]
},

You can think of layers roughly like this: the first layer is the bare-bones base OS, the second one is a subsequent modification (e.g. installing python3), and so on.

The idea is to share layers (read-only) among different containers so we don't need to create a copy of the same layer.

Just make sure you understand that what we're sharing here is a read-only image. Anything you write on top of that goes into a new layer Docker creates for that container!
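A rough way to see where those layer hashes come from: each filesystem-changing instruction in the Dockerfile produces one layer. This is a sketch based on our Dockerfile from earlier, not output from the article's build:

```dockerfile
FROM ubuntu:latest
# ^ pulls the read-only base layers shared with the ubuntu image
RUN apt-get update && apt-get upgrade -y && apt-get install python3 -y
# ^ one new layer holding everything apt changed; this is why the commands
#   are chained with && -- three separate RUN lines would create three layers
ADD hello.py /
# ^ one tiny layer containing just our script
CMD [ "./hello.py" ]
# ^ metadata only -- no filesystem layer is added
```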

That's the magic!

Running our Containerised App 

Lastly, we run our image with our hello world app:

root@albuquerque-docker:~# docker run hello-world-rodrigo
hello, Rodrigo!
root@albuquerque-docker:~#

As I said before, our container exited: the only task assigned to it was to run our script, so once that's done it exits by default.

We can confirm there is no container running:

root@albuquerque-docker:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
root@albuquerque-docker:~#

If you want to run a container in the background there is a -d (detached) option, but it only makes sense for containers that actually keep running, like a daemon.

Let me use NGINX because our hello-world image is not suitable for daemon mode:

root@albuquerque-docker:~# docker run -d nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
f7e2b70d04ae: Pull complete
08dd01e3f3ac: Pull complete
d9ef3a1eb792: Pull complete
Digest: sha256:98efe605f61725fd817ea69521b0eeb32bef007af0e3d0aeb6258c6e6fe7fc1a
Status: Downloaded newer image for nginx:latest
c97d363a1cc2bf578d62e57ec677bca69f27746974b9d5a49dccffd17dd75a1c

Yes, you can just run the docker run command and it will download the image and start the container for you.

Let's just confirm our container didn't exit and it is still there:

root@albuquerque-docker:~# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
c97d363a1cc2        nginx               "nginx -g 'daemon of…"   5 seconds ago       Up 4 seconds        80/tcp              amazing_lalande

Let's confirm we can reach NGINX inside the container.

First, we check the container's locally assigned IP address:

root@albuquerque-docker:~# docker inspect c97d363a1cc2 | grep IPAdd
"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.2",
"IPAddress": "172.17.0.2",

Now we confirm we have NGINX running inside a docker container:

root@albuquerque-docker:~# curl http://172.17.0.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

At the moment, my NGINX server is not reachable from outside my host machine (172.16.199.57 is the external IP address of our container's host machine):

rodrigo@ubuntu:~$ curl http://172.16.199.57
curl: (7) Failed to connect to 172.16.199.57 port 80: Connection refused

To solve this, just add the -p flag like this: -p <port the host will listen on for external connections>:<port our container is listening on>

Let's delete our NGINX container first:

root@albuquerque-docker:~# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
c97d363a1cc2        nginx               "nginx -g 'daemon of…"   8 minutes ago       Up 8 minutes        80/tcp              amazing_lalande
root@albuquerque-docker:~# docker rm c97d363a1cc2
Error response from daemon: You cannot remove a running container c97d363a1cc2bf578d62e57ec677bca69f27746974b9d5a49dccffd17dd75a1c. Stop the container before attempting removal or force remove
root@albuquerque-docker:~# docker stop c97d363a1cc2
c97d363a1cc2
root@albuquerque-docker:~# docker rm c97d363a1cc2
c97d363a1cc2
root@albuquerque-docker:~# docker run -d -p 80:80 nginx
a8b0454bae36e52f3bdafe4d21eea2f257895c9ea7ca93542b760d7ef89bdd7f
root@albuquerque-docker:~# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
a8b0454bae36        nginx               "nginx -g 'daemon of…"   7 seconds ago       Up 5 seconds        0.0.0.0:80->80/tcp   thirsty_poitras

Now, let me reach it from an external host:

rodrigo@ubuntu:~$ curl http://172.16.199.57
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

 

Uploading my App to an Online Registry and Retrieving it

You can also upload your containerised application to an online registry such as Docker Hub with the docker push command.


We first need to create an account on Docker Hub and then a repository.


Because my username is digofarias, my hello-world-rodrigo image will actually have to be named locally as digofarias/hello-world-rodrigo.

Let's list our images:

root@albuquerque-docker:~# docker images
REPOSITORY                     TAG                 IMAGE ID            CREATED             SIZE
ubuntu                         latest              94e814e2efa8        8 minutes ago       88.9MB
hello-world-rodrigo            latest              e6e4f99ed9f3        8 minutes ago       155MB

If I try to upload the image under its current name it won't work, so I need to tag it as digofarias/hello-world-rodrigo like this:

root@albuquerque-docker:~# docker tag hello-world-rodrigo:latest digofarias/hello-world-rodrigo:latest
root@albuquerque-docker:~# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
hello-world-rodrigo                latest              e6e4f99ed9f3        9 minutes ago       155MB
digofarias/hello-world-rodrigo     latest              e6e4f99ed9f3        9 minutes ago       155MB

We can now log in to our newly created account:

root@albuquerque-docker:~# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: digofarias
Password: **********
WARNING! Your password will be stored unencrypted in /home/rodrigo/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

Lastly, we push our image to Docker Hub:

root@albuquerque-docker:~# docker push digofarias/hello-world-rodrigo
The push refers to repository [docker.io/digofarias/hello-world-rodrigo]
b4cfcee25345: Pushed
51bedea20e25: Pushed
b57c79f4a9f3: Pushed
d60e01b37e74: Pushed
e45cfbc98a50: Pushed
762d8e1a6054: Pushed
latest: digest: sha256:b69a5fd119c8e9171665231a0c1b40ebe98fd79457ede93f45d63ec1b17e60b8 size: 1569

If you go to any other machine connected to the Internet with Docker installed, you can run my hello-world app:

root@albuquerque-docker:~# docker run digofarias/hello-world-rodrigo
hello, Rodrigo!

You don't need to worry about dependencies or anything else. If it worked properly on your machine, it should also work anywhere else, as the environment inside the container is the same.

In the real world, you'd probably be uploading just one component of your code, and your real application could be comprised of lots of containers that communicate with each other via an API.

How do we manage multiple container images?

Remember that in the real world we might create multiple components (each inside its own container) and end up with an ecosystem of containers that together make up our application or service.

As I said in part 1, in order to manage this ecosystem we typically use a container orchestrator.

There are a few of them, like Docker Swarm, but Kubernetes is currently the most popular one.

Kubernetes is a topic for a whole new article (or many), but in short you declare your container images in a Kubernetes deployment file and it downloads, installs, runs and monitors the whole ecosystem (i.e. your application) for you.

Just remember that a container is typically just one component of your application that communicates with other components/containers via an API.
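To make that last point concrete, here is a minimal sketch of two such components talking over an HTTP API. This is entirely my own illustrative example (the endpoint, message, and use of localhost are all assumptions): in a real deployment each half would run in its own container and call the other over the Docker network rather than in-process.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Component A: a tiny service exposing an HTTP API (one container in real life)
class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({'message': 'hello, Rodrigo!'}).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(('127.0.0.1', 0), HelloHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Component B: a client consuming the API, exactly as it would across containers
url = 'http://127.0.0.1:%d' % server.server_port
reply = json.loads(urlopen(url).read())
print(reply['message'])
server.shutdown()
```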

Last update: 16-Apr-2019 01:46