Containers — A Paradigm Shift

Lakmal Warusawithana
Feb 25, 2017
Image source: https://s-media-cache-ak0.pinimg.com/736x/45/c2/84/45c284c76172d649deb1e03c1e78fc1a.jpg

The computer industry is very dynamic, and things change drastically. Looking back over the last 30 years, it is hard to believe how fast it has changed.

Within the last four years, containers have become very popular in the computer industry. Everyone wants to use containers for their work.

Are containers just a fad? I think not. Containers are here to stay: they strike an excellent balance between security/isolation, performance, and resource utilization. Containers will surely end the dominance of hypervisor-based virtualization within the enterprise.

This is of course not new: reportedly, Google has been using containers for a decade or more. The underlying OS technology is not new either, with roots going back to chroot, FreeBSD jails, and Solaris Zones.

What does container-native mean? Where are we heading? Is it really a paradigm shift?

To answer these questions, let's have a look at different characteristics of containers.

No more config-over-code

The last 20 years have advocated config-over-code. With containers, that is mostly going away: redeploying a bit of code is as easy as redeploying config.

Why does this matter? Configuration increases the complexity of the code. It’s one more thing for someone working on the code to understand. It requires them to understand your configuration format, and how the code translates that configuration into data and acts on that data. A complex configuration file is essentially another piece of code.
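As a sketch of what this looks like in practice (the image name, path, and DB_URL value below are all made up for illustration), configuration can be baked into the container image at build time rather than edited at deploy time:

```dockerfile
# Illustrative only — image names, paths, and the DB_URL value are made up.
# The "configuration" is baked into the image at build time; changing it
# means building and redeploying a new image, exactly like a code change.
FROM openjdk:8-jre
COPY target/orders.jar /app/orders.jar
# config travels inside the image, not in a file edited at deploy time
ENV DB_URL=jdbc:postgresql://db:5432/orders
CMD ["java", "-jar", "/app/orders.jar"]
```

A setting change now flows through the same build-test-deploy pipeline as a code change, instead of being a hand edit on a running server.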

Microservices and the birth of the “single function server”

Microservices advocates that a service should perform exactly one function. The objective is to make it easier to write and consume shared functionality by providing strong isolation and clear interfaces for that function.

Single functions need no deployment concept: there is no "context" to manage, and therefore no need for complex application servers carrying all that baggage.

Container overhead is negligible compared to a VM. Running a single function in a single container is now practical rather than theoretical: a microservice can start in under 1–2 seconds.
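To make the "single function server" idea concrete, here is a minimal sketch in Python (names are illustrative, and a real service would use a proper framework): the process does exactly one thing, with no deployment descriptors, no context paths, and no application server around it.

```python
# A "single function server": one process, one function, nothing else.
# The handler name and the function it performs are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class UppercaseHandler(BaseHTTPRequestHandler):
    """The container's entire job: uppercase the request path."""
    def do_GET(self):
        body = self.path.strip("/").upper().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), UppercaseHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

print(urllib.request.urlopen(f"http://127.0.0.1:{port}/hello").read().decode())  # prints HELLO
server.shutdown()
```

Packaged in its own container, a process like this is the whole deployment unit: it starts in well under a second, so "one function per container" stops being a theoretical exercise.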

No request dispatching

Containers and their service discovery can dispatch requests without pushing complexity into application servers; they simply use lower-level network infrastructure such as IPs, ports, and DNS. Layer 4 load balancing is built in and natively supported by almost all container orchestration systems.
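In Kubernetes terms, for example, that built-in layer-4 dispatching looks like the fragment below (the service name, labels, and ports are made up): clients just resolve the DNS name "orders", and the Service balances TCP connections across matching pods with no application-level router involved.

```yaml
# Illustrative Kubernetes Service — names, labels, and ports are examples.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders          # dispatch to any pod carrying this label
  ports:
  - port: 80             # clients connect to orders:80 via DNS...
    targetPort: 8080     # ...and connections are balanced to port 8080 on each pod
```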

No garbage collection

In the early days, when writing a C program we had to use malloc and free very carefully, or we would end up with runtime memory leaks. Java introduced garbage collection, which helped developers reduce memory leaks. Are things different with containers?

When running an application inside a container, we can set the maximum memory and CPU it is allowed to use. When it hits that limit, the container is destroyed and a fresh container is started by the auto-healing system. What does this mean? Can we push memory management down to the container level? I will leave that for you to think about. :)
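As a sketch of this limit-and-replace behavior in Kubernetes terms (image name and numbers are illustrative): if the process exceeds the memory limit it is OOM-killed, and the restart policy tells the platform to start a fresh container in its place.

```yaml
# Illustrative pod spec fragment — image and limits are examples.
spec:
  containers:
  - name: app
    image: registry.example.com/app:2.1
    resources:
      limits:
        memory: "256Mi"   # exceed this and the process is OOM-killed
        cpu: "500m"
  restartPolicy: Always    # auto-healing: replace the killed container
```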

Immutability

The immutable server/container is a DevOps practice that helps with operations. What does immutable mean? Never upgrade your container. Never update your code in place. Instead, create a new container and throw away the old one! Rollback always means bringing back the old container. No incremental updates.
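One way to picture immutability: the image tag is the only thing that ever changes. In a hypothetical deployment fragment (registry, name, and version are made up), "upgrading" means building and applying the next tag, and rolling back means reapplying the previous one; the running container itself is never patched.

```yaml
# Hypothetical fragment — web:1.0.3 is an immutable, versioned artifact.
# Upgrade: build web:1.0.4 and apply it. Rollback: reapply this file.
spec:
  containers:
  - name: web
    image: registry.example.com/web:1.0.3
```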

DevOps and CICD

In normal app development, only the application artifacts are bundled into the delivery pack (a downloadable zip, tgz, etc.). Apps at later stages of the delivery pipeline then vary widely, depending on the runtime and the specific operating environment of each deployment. With containers, a developer can bundle the actual infrastructure (i.e. the base OS, middleware, runtime, and the application) into the same container image. Having the same environment in development and production helps end the famous war between developers, QA, and Ops in the software development life cycle. The result is significantly increased velocity.
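A sketch of that self-contained delivery unit (the base image, package, and paths below are examples): base OS, runtime, and application are layered into one immutable image, so the artifact QA tests is byte-for-byte the one that runs in production.

```dockerfile
# Sketch only — base image, package, and paths are examples.
# base OS
FROM ubuntu:16.04
# runtime / middleware
RUN apt-get update && apt-get install -y --no-install-recommends openjdk-8-jre
# application
COPY build/app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```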

One container per user?

What if we ran a container per user? There would be no need for session data to sandbox the user: when a request is dispatched, the user's data is loaded into the container itself, with defined compute resources such as CPU and memory. If the allocated resources are not enough, we can either destroy the container and recreate it with more resources (vertical scaling) or add another container to the cluster (horizontal scaling). Both options are viable because container startup time is minimal; in a web application, we can create a container with new resources without the browser even noticing.

Non-long running

For the last 20 years or so, we wanted to run our servers forever, without downtime; I can remember some of our Linux servers not being rebooted for a couple of years. With containers, that has changed. A container's lifetime can match the work carried out for a given task. This is exactly the serverless architecture concept: no need for always-running servers. They dynamically appear, do the work, and exit.
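In Kubernetes terms, this appear-work-exit lifecycle is what a Job expresses (the names and arguments below are illustrative): the container exists only for the duration of one task, then exits, and nothing keeps running afterwards.

```yaml
# Illustrative Kubernetes Job — name, image, and args are examples.
apiVersion: batch/v1
kind: Job
metadata:
  name: resize-images
spec:
  template:
    spec:
      containers:
      - name: worker
        image: registry.example.com/image-resizer:1.0
        args: ["--input", "/data/in", "--output", "/data/out"]
      restartPolicy: Never   # do the work once, then exit
```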

Container technology has impacted the whole software industry. It is a disruptive technology.

At WSO2, we have a good understanding of containers and the surrounding technologies. The WSO2 Carbon 5 architecture and Ballerina natively support containers and will be the foundation for our entire future product stack. Let's have a detailed discussion of them in another post.

Note: This was a long-awaited article, and its background goes back to discussions I had with Sanjiva Weerawarana, Paul Fremantle, Srinath Perera, and Asanka Abeysinghe some time back in 2016. :)
