DevOps Explained to the Layman
Containers: plug-and-play code in the DevOps world - part 2
In my previous article I mentioned that breaking down your code into small plug-and-play components is a best practice in a DevOps solution, known as Microservices Architecture.
It makes your application less error-prone, scalable and resilient, and it's very easy to replace one component with another or with a different version.
In the real world, Microservices Architecture became very popular because of Containers.
As a result, we're going to talk about containers.
For the uninitiated, please know that currently the most popular container platform is called Docker and you'll learn more about it in part 2 (another article).
In part 1 (this article), I'm going to very briefly explain the problems with monolithic applications and how containers address them.
Before containers, most software was deployed in a monolithic way, i.e. as one large chunk of code.
If you had to deploy changes you'd have to repackage all the components:
Changes can be due to a bug found later on, a new feature, etc.
The bottom line is that, whatever change is needed, deploying it might not be straightforward.
If you need more memory or CPU, or need to scale a specific component of your application, you can probably keep adding more memory, CPU, etc. to your box/VM.
The other option would be to add a BIG-IP and scale your monolithic application:
However, this option may require changes in your code, which is not always possible from a developer's point of view.
Also, what if you don't need to scale the whole application but just a single component?
I'd say this is more of an advantage of containers than a disadvantage of a monolithic environment.
Without containers, chances are that your development environment is slightly different from your staging or production environment:
Even if the OS is the same, the libraries or the OS version might differ, and unexpected things can happen during the testing or deployment phase.
For the uninitiated, think of containers as a Linux trick that isolates a specific component of your application using the same Linux kernel, without the need for Virtual Machines.
A container packs the component of your application you've just created, along with all the dependencies and libraries it needs to run.
An application (in the containers world) is typically composed of one or (more frequently) many containers, and sometimes a LOT of containers.
The most popular container platform at the moment is called Docker.
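To make this less abstract, packaging a component with Docker usually starts with a Dockerfile. Here's a minimal sketch for a hypothetical Python component (all names are made up for illustration):

```dockerfile
# Base image: the OS environment and runtime your component needs
FROM python:3.12-slim

WORKDIR /app

# Install the component's libraries and dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the component's own code
COPY component1/ .

# The command the container runs when it starts
CMD ["python", "component1.py"]
```

Building this file produces an image containing the component plus everything it needs to run, which is exactly the "component and all its dependencies" idea described above.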
Let's revisit how containers solve the problems pointed out earlier but keep this in mind:
Say you're a developer and you've fixed a bug in Component 1 (C1)'s code and you'd like to replace C1 with a newer version:
All you need to do is to repackage and restart C1 only and leave the other components untouched:
You can replace all C1 containers or just a few of them if you'd like to check how the new code behaves.
You can then configure the API Gateway or Load Balancer to round-robin only a handful of requests to the newer version of the code, for example, until you're confident.
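As one possible way to do this (the article mentions BIG-IP and API Gateways; NGINX is just a familiar stand-in, and all hostnames here are hypothetical), a weighted load-balancer config could send roughly 1 in 10 requests to the new version:

```nginx
# Hypothetical NGINX config: most traffic goes to the stable C1,
# a small slice goes to the newer version under test.
upstream component1 {
    server c1-stable:8080 weight=9;  # current version
    server c1-new:8080    weight=1;  # newer version
}

server {
    listen 80;
    location / {
        proxy_pass http://component1;
    }
}
```

Once you're confident in the new version, you'd flip the weights (or remove the old server) and retire the stable containers.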
What if we need to scale Component 4 (C4) in the below picture?
Just add one or more containers when you need them. In the example below, we've added another C4 instance to Server 3's Guest OS:
If you no longer need 3 containers, you can remove the additional container. Scalability is literally plug-and-play!
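As a small sketch of how plug-and-play this is in practice, Docker Compose can scale a single service up and down with one command (the service and image names below are made up):

```yaml
# Hypothetical docker-compose.yml: c4 is the component we want to scale.
services:
  c4:
    image: myapp/c4:latest   # made-up image name

# To run three C4 instances:
#   docker compose up --scale c4=3
# And to scale back down to one:
#   docker compose up --scale c4=1
```

Only C4 is scaled; every other component of the application is left untouched.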
The magic of the containerisation process is that you pack not only your application but its environment as well:
This means that the environment in which you've been creating and testing your component should be the same one you deploy to production!
This is a tremendous advantage over monolithic applications as it's less prone to errors due to differences in OS version, libraries, etc.
In a DevOps solution, where an agile methodology is used, developers are typically encouraged to create small chunks of code anyway, like this:
They then merge them into the main application at least once a day, and typically many times a day.
If the code is isolated into a single container, it is easier to troubleshoot or to find a bug.
It also encourages the developers themselves to think (and to see) the application broken down into organised chunks of functional components.
Maintenance is also cleaner and you can focus on a particular service/container for example.
This is so much cleaner and so plug-and-play!
That's right. You can create hundreds or thousands of components and pack each of them into a container.
Are you going to manually deploy all these components?
How do you deploy all these components that make up your application while still making efficient use of the available resources?
What if one container fails out of the blue?
You probably need some form of health check to maintain a minimum number of containers running too, right?
What if you suddenly need more resources and your containers are not enough?
Are you going to manually add containers?
That's where Kubernetes, the most popular container orchestration solution, comes in!
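To give a taste of how Kubernetes answers the questions above, here's a minimal sketch of a Deployment (names, port and health-check path are hypothetical) that keeps three copies of C4 running and restarts any container that fails its health check:

```yaml
# Hypothetical Kubernetes Deployment for component C4.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: c4
spec:
  replicas: 3                # Kubernetes maintains this count for you
  selector:
    matchLabels:
      app: c4
  template:
    metadata:
      labels:
        app: c4
    spec:
      containers:
      - name: c4
        image: myapp/c4:latest     # made-up image name
        livenessProbe:             # the health check: a failing
          httpGet:                 # container is restarted automatically
            path: /healthz
            port: 8080
```

If a container dies out of the blue, Kubernetes notices the count dropped below 3 and starts a replacement; no manual intervention needed.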
In the next article (part 2), I'll introduce Docker from a more technical perspective.