Securing Containers without the Need for Virtualization Technology
Virtual machines are often used in environments running containers for security reasons. That's because applications running in containers on virtual machines are better isolated than applications running in containers that aren't in virtual machines. Simple.
But it's obvious that if you are running containers in virtual machines you won't be able to run as many containers on a given host. The virtual machines will consume resources that could otherwise be used to run more containers.
Going down the virtual machine route to application isolation also has performance implications: many applications, especially I/O-intensive ones, will run slower in a container inside a virtual machine than in a container running directly on the host.
And that's where Joyent's Triton Elastic Container public cloud and private cloud platforms come in. The San Francisco-based company's platforms can be used to run containers on bare metal servers — i.e. without using virtualization technology — with a high degree of application isolation, according to Bryan Cantrill, Joyent's CTO.
"We can provide the same level of containment for a container as a process running in an operating system," he says.
Triton Elastic Container Infrastructure is the company's private cloud platform, while Triton Elastic Container Service is its public cloud containers-as-a-service (CaaS) offering.
The secret to Triton's abilities is that it's not Linux-based. In fact, it runs on SmartOS, which is a derivative of Illumos, an open source fork of Sun's OpenSolaris. The isolation is provided by container technology derived from Solaris called Zones.
(Zones were originally called Containers in Solaris, but the Zone name was phased in later to avoid confusion with other container systems.)
What's important is that LXC (Linux Containers, the technology Docker was originally built around) was never designed for multi-tenancy, and so provides only limited isolation. Zones, by contrast, were designed for it from the start.
That's all well and good except for one thing: you can't run Linux binaries in Zones. "So what we had to do is resurrect an old Sun technology so that Linux binaries could execute at full speed on SmartOS," says Cantrill.
The technology, called lx-branded Zones, provides a complete Linux userspace and support for the execution of Linux applications in a Zone.
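On a SmartOS host, an lx-branded zone is provisioned with the platform's `vmadm` tool from a JSON manifest. The sketch below is illustrative only: the `image_uuid` is a placeholder (real image UUIDs come from `imgadm avail`), the alias and sizing are hypothetical, and the commands require an actual SmartOS system to run.

```shell
# Sketch: provision an lx-branded zone (Linux userland on SmartOS) with vmadm.
# "brand": "lx" selects the Linux-emulation zone brand described above;
# "kernel_version" tells the lx emulation which Linux kernel to report.
cat <<'EOF' > ubuntu-lx.json
{
  "brand": "lx",
  "kernel_version": "3.13.0",
  "alias": "ubuntu-lx-test",
  "image_uuid": "00000000-0000-0000-0000-000000000000",
  "max_physical_memory": 512,
  "nics": [{ "nic_tag": "admin", "ip": "dhcp" }]
}
EOF
vmadm create -f ubuntu-lx.json   # creates and boots the zone
```

Once booted, the zone presents a complete Linux environment to the applications inside it, while remaining an ordinary, fully isolated zone from the host's point of view.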
"We didn't know if it was possible or not, but we found we could make Linux binaries totally robust (in Zones) on SmartOS," he says. "We now have containers running directly on the hardware in a single OS."
What Triton Does
In a nutshell, what Triton does is turn data center compute resources — a rack of servers, in other words — into a single virtual Docker host that provides secure isolation between containers running on it.
Triton comes with an orchestration and management system called SmartDataCenter that provides administrators with an admin portal where they can see the physical servers being controlled and how containers are allocated on each one. But from a user perspective, the whole setup just appears as a — or perhaps that should be "the" — Docker host, and almost any number of containers can be run on it.
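From the client side, the setup described above amounts to pointing an ordinary Docker client at one very large virtual host. A minimal sketch, using the Docker client's standard environment variables; the endpoint address and certificate path are hypothetical stand-ins, and the commands require a live Triton account to actually run:

```shell
# Point the stock Docker CLI at a Triton Docker API endpoint (placeholder URL).
export DOCKER_HOST=tcp://us-east-1.docker.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.sdc/docker/myaccount   # TLS certs from account setup

docker run -d nginx   # Triton places the container on some server in the rack
docker ps             # lists containers across the whole data center as one host
```

The point of the design is that nothing in this workflow reveals, or requires the user to care, which physical server any given container landed on.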
"The biggest challenge has been virtualizing the Docker host," says Cantrill. "Where we find incompatibilities is where people rely on the insecurities of the Linux containment system to hack around things that don't work in Docker. They hack into the Docker host, but you can't do that in our model."
What kind of performance gains can be achieved running containers on bare metal in this way rather than in a VM? That depends on what the application in the container is doing, says Cantrill. If it is pure data processing then there will be little or no performance gain. But if there is a lot of network or disk I/O going on, then it's a different story altogether, he says.
"If you are running something like Postgres, for example, then we see a factor of ten multiplier in performance, so it could run an order of magnitude faster," says Cantrill.
In terms of multi-tenancy, or cramming more containers onto available hardware, Cantrill says that running containers on bare metal is much more resource efficient than running them in VMs. "That means the level of tenancy on Triton is much higher [than when containers are run in VMs]," he says. "On our public service, the highest density service we have is 400 128MB containers running at bare metal speed. In fact, running thousands on a single compute node is possible."
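A quick back-of-envelope check shows why the density figure quoted above is plausible for a single machine:

```python
# Memory footprint of the quoted density: 400 containers at 128 MB each.
containers = 400
mem_per_container_mb = 128

total_gb = containers * mem_per_container_mb / 1024
print(total_gb)  # 50.0 GB -- comfortably within one commodity server's RAM
```

With no hypervisor or guest-OS overhead per container, memory is the main constraint, which is why Cantrill can talk about pushing the count into the thousands on a suitably large node.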
All in all, it's impressive sounding stuff, even though the platform is relatively new. But if the isolation it provides is really comparable to VM isolation, and the management system proves powerful enough, then this will likely be of interest to a lot of companies that operate their own private cloud.
Paul Rubens is a technology journalist and contributor to ServerWatch, EnterpriseNetworkingPlanet and EnterpriseMobileToday. He has also covered technology for international newspapers and magazines including The Economist and The Financial Times since 1991.