
Virtualization vs Containerization


Understanding the differences between virtualization and containerization can help organizations improve scalability and reduce operational costs. In this article, we’ll look at the specific ways these two approaches to packaging and isolating workloads differ, as well as the broader trade-offs between them.

Virtually Speaking

Discussions about server virtualization inevitably revolve around VMware, Hyper-V and, to a lesser extent, Xen and KVM. But another name that increasingly deserves a place in the conversation is Docker.

And if we’re talking Docker, we’re talking containerization — something a little different from hypervisor-based server virtualization. But encapsulating an application in a container along with its operating environment achieves many of the benefits of loading an application onto a virtual machine: both can be dropped onto any suitable physical machine and run without any worries about dependencies.

Docker vs. VMware

Of course, a key practical difference between Docker and VMware is that Docker is a Linux-based system that makes use of LXC, a userspace interface to the Linux kernel’s containment features.

The goal of LXC is to create an environment as close as possible to a standard Linux installation but without the need for a separate kernel. As the linuxcontainers.org website puts it: “LXC containers are often considered as something in the middle between a chroot and a full-fledged virtual machine.”
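
To make that a little more concrete, here is a minimal Go sketch (hypothetical, not from the article) that uses the same kernel primitives LXC relies on: it launches a shell in new UTS, PID and mount namespaces, giving it an isolated view of the system while still sharing the host's kernel. It assumes a Linux host and root privileges.

```go
// namespaces.go: minimal sketch of the kernel containment primitives
// (namespaces) that LXC and Docker build on. Linux-only; run as root.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Launch /bin/sh in its own hostname (UTS), process-ID and mount
	// namespaces. Inside, `hostname foo` or `ps` see an isolated view,
	// while the kernel itself is still shared with the host.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```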

Because Docker is built on top of LXC, it only works in Linux environments and only runs Linux applications. So you can forget about applications that run on Windows or any other operating system that can happily run on a conventional hypervisor.

Another key difference is that rather than being a self-contained system in its own right, a Docker container shares the Linux kernel of the host machine’s operating system, and it shares that kernel with any other containers running on the same host. Shared parts of the operating system are read-only, while each container gets its own mount for writing.
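
An easy, informal way to see that kernel sharing in practice is to compare the kernel version reported inside a container with the one reported by the host. The sketch below is only an illustration: it assumes the docker CLI is installed and an alpine image is available locally or can be pulled.

```go
// sharedkernel.go: a container reports the host's kernel version,
// because it shares that kernel rather than booting its own.
// Assumes the docker CLI and an "alpine" image are available.
package main

import (
	"fmt"
	"os/exec"
)

func output(name string, args ...string) string {
	out, err := exec.Command(name, args...).Output()
	if err != nil {
		return "error: " + err.Error() + "\n"
	}
	return string(out)
}

func main() {
	fmt.Print("host kernel:      ", output("uname", "-r"))
	fmt.Print("container kernel: ", output("docker", "run", "--rm", "alpine", "uname", "-r"))
}
```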

Benefits of Containerization

So what are the benefits of containerization versus full-blown server virtualization? When would you use one rather than the other?

One of the key benefits of containerization is that you can often pack many more containers onto a host machine than you can virtual machines. That stands to reason, because each VM is a self-contained system in its own right, with its own operating system, its own virtualized hardware and its own uniquely allocated resources. If each VM is 10GB in size, then 10 VMs will take up 10 × 10 = 100GB of resources.

But take a 10GB container image and run ten containers from it, or even a hundred, and you won’t use anything close to 100GB of resources. That’s because of all the sharing that’s going on.

In effect, there’s only one operating system (strictly speaking, one kernel) shared by all the containers. And there’s no virtualized hardware — just a small application and operating environment in each container. That means you can run far more containers on a host than you could possibly run full-blown virtual machines.
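
Here is a back-of-the-envelope sketch of that arithmetic; the figures are illustrative assumptions rather than measurements: ten instances, a 10GB image, and a 200MB writable layer per container.

```go
// footprint.go: back-of-the-envelope comparison of disk footprint for
// ten VMs vs ten containers. The sizes are illustrative assumptions.
package main

import "fmt"

func main() {
	const (
		n           = 10   // instances to run
		vmImageGB   = 10.0 // each VM carries its own full OS image
		baseImageGB = 10.0 // container base image, stored once and shared
		writableGB  = 0.2  // per-container writable layer (assumed)
	)

	vmTotal := float64(n) * vmImageGB                      // 10 x 10 = 100 GB
	containerTotal := baseImageGB + float64(n)*writableGB  // 10 + 10 x 0.2 = 12 GB

	fmt.Printf("10 VMs:        ~%.0f GB\n", vmTotal)
	fmt.Printf("10 containers: ~%.0f GB\n", containerTotal)
}
```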

There’s a knock-on effect of sharing the kernel and other resources as well: containers can start up in less than a second. That’s not the case with VMs, which have to boot a full virtual system and can take substantially longer to get going.
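
If you want a rough, informal feel for that startup difference, timing a throwaway container that does nothing is enough. The sketch below again assumes the docker CLI and an alpine image are available; the first run will be slower if the image still needs to be pulled.

```go
// startup.go: rough timing of container start-up. Assumes the docker
// CLI and an "alpine" image are present. The number is informal, not a
// benchmark, but it is typically well under a second on a warm host.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	if err := exec.Command("docker", "run", "--rm", "alpine", "true").Run(); err != nil {
		fmt.Println("docker run failed:", err)
		return
	}
	fmt.Println("container start-to-exit:", time.Since(start))
}
```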

And Mark Shuttleworth, the head of Canonical (the sponsor of Ubuntu), points to that speed as an added bonus. “Canonical underwrites the key kernel and user space work that makes it possible for you to create containers which behave just like VMs — you have root inside them — even if you’re just an ordinary non-root user on the system. Much faster and lighter than KVM,” he said.

There are other benefits too. You can run Docker containers on AWS and Azure public clouds, for example, and containers are easy to share. That’s particularly useful for test and dev teams — one of the most commonly cited potential beneficiaries of Docker.
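
Sharing a container largely comes down to moving its image around. As a small sketch, and again assuming the docker CLI and a local alpine image, the following exports an image to a tar archive that a colleague can import on their own machine with docker load:

```go
// shareimage.go: export a local image to a tarball that can be copied
// to another machine and imported with `docker load -i alpine.tar`.
// Assumes the docker CLI and a local "alpine" image.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Equivalent to running: docker save -o alpine.tar alpine
	if err := exec.Command("docker", "save", "-o", "alpine.tar", "alpine").Run(); err != nil {
		fmt.Println("docker save failed:", err)
		return
	}
	fmt.Println("wrote alpine.tar; import elsewhere with: docker load -i alpine.tar")
}
```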

Containerization Not Yet a Substitute for Full-Blown Server Virtualization

But the good news for VMware and others is that containerization is not a substitute for full-blown hypervisor-based server virtualization — at least not yet.

That’s because the hypervisor-based virtualization world is surrounded by an extremely sophisticated management infrastructure that lets you store, spin up and run virtual machines, live migrate them between hosts, create high-availability clusters, and more. Products like VMware’s vCenter, Microsoft’s System Center Virtual Machine Manager and other third-party management tools have many years of evolution under their belts.

There’s nothing fully comparable for Docker yet, although Google, Red Hat, CoreOS, IBM and Microsoft are all working on Kubernetes, an open-source container orchestration system that has rapidly evolved into a major tool for enterprise IT.

All of this means Docker virtualization technology is definitely worth keeping an eye on for the future.

Paul Rubens is a technology journalist and contributor to ServerWatch, EnterpriseNetworkingPlanet and EnterpriseMobileToday. He has also covered technology for international newspapers and magazines including The Economist and The Financial Times since 1991.
