Software development is a demanding job that requires input from multiple people at multiple locations. Software environments are not always identical, and even different updates and versions of the same language can cause problems, especially for legacy applications. This is where containers come into the picture.
They help software run smoothly when it is moved from one environment to another. This could be from your local machine to the cloud, Python code from a Mac dev station to an Ubuntu production server, or even across operating systems (say, from openSUSE to Red Hat). In many ways, a container lets you decouple the application from its host machine, which saves considerable time in the development workflow compared to monolithic applications.
When to use containers over virtual machines?
Container adoption has been high among startups and huge enterprises, but enterprises in the middle still have some concerns about containerisation. This is changing, though, and container adoption is expected to continue on its upward trajectory. Docker, in particular, made the potential of containers obvious and easy to tap into.
- For starters, each virtual machine requires its own instance of an operating system, which can consume a huge amount of space. A container, on the other hand, may be only a few megabytes in size. A single server can therefore host far more containers than virtual machines.
- Since an operating system needs to be booted, virtual machines take some time to start the application they host. Containers can start applications almost instantly.
- By eliminating the requirement of a virtual machine for each application, containers reduce the number of machines required. This lower footprint consequently results in lower expenses for the organisation on its overall cloud infrastructure.
Containers do not require your entire application to reside in a single system. It can be split into modules or microservices, which simplifies development and maintenance. Any change can be made to an individual module without modifying the entire application, and the microservices can then be called through APIs wherever they are required.
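As a rough sketch, the modular approach described above might look like the following Docker Compose file. The service names, directories, and port numbers here are illustrative assumptions, not something prescribed by the text:

```yaml
# docker-compose.yml — hypothetical two-microservice layout.
# Each module lives in its own container and is reached over its API.
services:
  users:                      # user-management module
    build: ./users            # assumed directory containing its Dockerfile
    ports:
      - "8001:8000"
  orders:                     # order-processing module
    build: ./orders
    ports:
      - "8002:8000"
    environment:
      USERS_API: http://users:8000   # other modules are called via HTTP API
```

Because each module is its own service, the `orders` image can be rebuilt and redeployed without touching `users`.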
The flexibility associated with containers has heavily benefited smaller firms:
- They now have a secure way of scaling up their services without the hassle of having everything in the same system.
- The decoupled approach removes the effort and cost of rewriting code to stay consistent across platforms.
- Businesses also need not worry as much about the security of their application: any breaches or spillages are ‘contained’ within the affected container.
Building a container
Containers are a fairly recent concept, but a few established practices make them easier to build and operate, resulting in faster builds and more robust images.
- Avoid running multiple applications in a single container. Applications with different lifecycles or states create confusion: a container may report as running even though one application in it has crashed, simply because another application is still alive. Limiting one application to one container keeps your containers healthy and easy to monitor.
- Your container should be stateless. By storing your data outside the container, the container can be shut down or destroyed without loss of data, and a new container can easily be connected to the same datastore. This way, the same container image can also be used across different environments. At the same time, the stateless ideal means that running a database inside a container is best avoided. Kubernetes, however, with features such as persistent volumes, makes it possible to run databases inside containers without losing data.
(Figure: Stateless vs stateful apps)
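One common way to keep state outside the container, as discussed above, is to mount the data directory from a named volume. This hypothetical Compose snippet (image, password, and volume name are assumptions for illustration) shows the idea:

```yaml
# The database container can be destroyed and recreated freely;
# its data lives in the named volume "dbdata", not in the container.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example    # illustrative only — use a secret in practice
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

If the `db` container is replaced, the new one attaches to the same `dbdata` volume and picks up where the old one left off.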
- Simplify the image as much as possible. A small image comes with a number of benefits: nodes and registries can push and pull the container image much more quickly. A stateless, single-purpose image will naturally be small.
- Choose your base image well. When migrating to containers for the first time, it is important to find a reliable base image. This is not very difficult today, thanks to the Docker registry, which lists a large number of candidate images. Check the tags provided with the image; pinning to the latest tag means the application underneath can change between builds. The base image also determines the libraries and prerequisites you may need to install later, as well as the final size of your finished image. For example, using Alpine Linux as a base may be advantageous since those images are small, but it also means components you would take for granted in, say, Ubuntu container images may be missing.
- Make maximum use of the build cache. Docker caches the layers of an image so they can be reused to accelerate later builds. To optimise the cache, add your source code towards the latter part of your Dockerfile; that way the dependency and base-image layers are not rebuilt every time the source changes, which saves space and time. You may also want to look at multi-stage builds, a Docker feature that lets you build your code in one stage and copy only the resulting artefacts into the final image. This is helpful when coding in languages such as Go or Rust, where the compilation environment is large while the compiled binaries need few runtime dependencies.
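The caching and multi-stage advice above can be sketched in a single Dockerfile. This is an illustrative example for a hypothetical Go service (the binary name and module layout are assumptions), not a universal template:

```dockerfile
# Stage 1: build environment — large, contains the whole Go toolchain.
FROM golang:1.22 AS build
WORKDIR /src
# Copy dependency manifests first, so this layer stays cached
# until go.mod / go.sum actually change.
COPY go.mod go.sum ./
RUN go mod download
# Source code comes last: editing it only invalidates layers from here on.
COPY . .
RUN CGO_ENABLED=0 go build -o /out/server .

# Stage 2: runtime — only the compiled binary is carried over,
# so the final image is a fraction of the builder's size.
FROM alpine:3.20
COPY --from=build /out/server /usr/local/bin/server
ENTRYPOINT ["/usr/local/bin/server"]
```

The first stage can be hundreds of megabytes, but nothing from it except the binary reaches the image you actually ship.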
Why convert VM to Container?
With containers, deployment has become a whole lot easier. Changes propagate to the environment easily, and deployment can be executed more frequently across multiple levels. New features are tested regularly and sometimes even made available to the public as beta tests on a public cloud. Enterprises have now accepted and incorporated containers into their systems with relatively open arms, and a lot of the credit goes to Docker and Kubernetes. While companies of all scales are finding it easy to shift to containers, this does not mean containers are completely replacing virtual machines. In fact, the two can co-exist in harmony: VMs often serve as the hosts on which Docker containers run. The concept of a hybrid container architecture, or virtualisation through containers, is currently being explored in a number of research institutions. AWS Firecracker, Hypercontainer, and Kata Containers are some popular emerging options, along with other open source projects.