
Internal vs. External Cloud

Paul Rubens

There’s no getting away from it: Cloud computing has many potential benefits, but it has a number of drawbacks as well. Building an internal cloud mitigates many of those drawbacks, but it is not without limitations of its own.

On the positive side, having applications provided from the cloud offers enterprises of all sizes the possibility of low-cost compute resources that are almost infinitely scalable. There’s the ability to pay “by the hour” for resources only when they are needed, and sudden surges in demand can be accommodated very easily. Cloud computing also frees up capital that would otherwise be tied up in hardware and data center bricks and mortar, and it frees up IT staffers who would otherwise be tending servers so they can work on more productive IT endeavors.

But it’s not all sunshine and roses: The drawbacks revolve around data security and how sensible it is to store data with a third party (assuming regulatory requirements permit it); portability and the possibility of being locked in to one cloud provider; reliability; data logging; speed and the inevitable latency when dealing with servers in a cloud half a continent away; and geo-political worries — where in the world is the cloud data center running your apps, and do you want it there? This last concern is less of an issue for U.S.-based enterprises, but it is very real for businesses in many other countries.

But for some organizations there’s one real show stopper when it comes to getting the benefits of cloud computing: heterogeneity. Many corporate data centers contain several generations of servers from a variety of vendors, running different operating systems on different processors — Windows, AIX, Solaris, Linux, Intel, PowerPC, SPARC and so on. In contrast, most cloud services offer a limited choice of operating systems running on a narrow range of hardware.

This leaves heterogeneous enterprises in a bit of a quandary. It may be possible to offload some applications into the cloud, but the remainder must still be managed and run in-house. Doing so may bring some efficiency benefits, but a smaller data center can also mean that many economies of scale are lost. What remains is a hodge-podge of different systems that demand a great deal of time and many different skill sets to manage, while the easier-to-manage systems are gone. Efficiency actually goes down.

One solution is to operate a cloud environment in-house: a so-called “internal cloud,” said Steve Oberlin, chief scientist at Cassatt, a San Jose, Calif.-based IT infrastructure management software vendor. “Internal clouds help you to pool your computing resources into a cloud and manage it, applying server resources dynamically on the fly in response to demand,” he said. “What you end up with is higher utilization and efficiency.”

Many organizations have already embarked on virtualization programs to boost server utilization rates and reduce power and space requirements in the data center, but Oberlin said an internal cloud goes beyond this. It enables applications that are not suitable for virtualization (such as those that require the resources of an entire server at peak times) to run more efficiently, and it encompasses virtualized servers as well: virtualization, in other words, is part of an internal cloud solution, he said.

Which to Choose?

But here’s an important question: Which is better, implementing an internal cloud or using a public cloud? The answer, according to James Staten, a principal analyst at Forrester, is that both have their advantages and disadvantages. “Internal clouds are good because you can follow all of your workflow and security guidelines, and ensure that you are running the right code. The trade-off is that you can’t reach the economies of scale that public cloud providers achieve,” he said.

“On the other hand, if you use a public cloud provider you end up having to do lots of work like license management and adapting to the processes of the public cloud. You have to work with what is on offer. But you do get the benefits of economies of scale.”

So what does it mean to run an internal cloud? At the most basic level, Cassatt’s Active Response software simply manages power, monitoring usage and applying policies that shut down servers at non-peak times when they are not required, such as on weekends. “You can get pretty dramatic savings on your power bill from this,” said Oberlin. “We see significant energy savings, and an ROI of just nine months.”
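To make the idea concrete, an off-peak power policy of this kind can be sketched in a few lines of Python. This is only an illustration, not Cassatt’s interface: the server objects, their is_idle(), power_on() and shut_down() calls, and the peak-hours definition below are all assumptions made for the sketch.

```python
# Minimal sketch of an off-peak power policy. The server inventory and its
# power-control methods are hypothetical placeholders, not a real product API.
from datetime import datetime

PEAK_HOURS = range(8, 20)   # assume 08:00-20:00 on weekdays is the peak window
WEEKEND = (5, 6)            # Saturday, Sunday

def in_peak_window(now: datetime) -> bool:
    """Return True if the data center is inside its defined peak window."""
    return now.weekday() not in WEEKEND and now.hour in PEAK_HOURS

def apply_power_policy(servers, now=None):
    """Shut down idle servers outside the peak window, wake them inside it."""
    now = now or datetime.now()
    for server in servers:
        if in_peak_window(now):
            if server.is_powered_off():
                server.power_on()      # e.g. a wake-on-LAN or IPMI-style call
        elif server.is_idle():
            server.shut_down()         # drain any remaining work, then power off
```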

But this in itself is not cloud computing in the normal sense of the word. To go further, Cassatt uses a database of server images, a set of rules that define the service levels applications must achieve, and management software that controls the whole setup. Put simply, the management software monitors each application, and when necessary it network-boots appropriate servers with the correct image to add resources to that application. At less busy periods, unneeded resources are shut down or reallocated to other applications that need them. There is still some inefficiency in heterogeneous environments, because some hardware can run only some, but not all, server images; however, resources can still be pooled among compatible applications. “In terms of cost savings, a typical data center with low double-digit efficiency can see 40 percent to 50 percent reduction in the number of servers required to provide the same sort of service level, and still have better headroom and agility,” said Oberlin.
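Put into code, the arrangement Oberlin describes amounts to a control loop over an image repository, a set of service-level rules and a pool of idle servers. The sketch below is an assumption-laden illustration of that loop, not Cassatt’s implementation; the application, server and image-store objects and all of their methods are hypothetical.

```python
# Illustrative control loop for an internal cloud: check each application
# against its service-level rule, then grow or shrink its server allocation.
# Every class and method name here is an assumption for the sketch.
import time

def control_loop(applications, image_store, idle_pool, interval_s=60):
    while True:
        for app in applications:
            if not app.meets_service_level() and idle_pool:
                # Below its service-level rule: find a compatible idle server,
                # network-boot it with the application's image, and add it.
                server = next((s for s in idle_pool if s.can_run(app.image)), None)
                if server is not None:
                    idle_pool.remove(server)
                    server.network_boot(image_store.get(app.image))
                    app.add_node(server)
            elif app.has_excess_capacity() and app.node_count() > app.min_nodes():
                # Comfortably above target: release a server back into the
                # shared pool, where it can sit idle or serve another application.
                idle_pool.append(app.remove_node())
        time.sleep(interval_s)
```

The can_run() check is where the heterogeneity caveat shows up: only hardware compatible with a given server image can be pulled from the shared pool for that application.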

In fact, enterprises aren’t necessarily restricted to a single data center when it comes to running an internal cloud. Cassatt is soon to release an “Enterprise edition” of its Active Response software, which pools resources across different data centers. And it doesn’t end there: The software will also allow an enterprise to extend its pool with resources from an external cloud provider. “You could have a fixed amount of resources available to you locally or at another data center, and deal with unplanned peaks in demand by using resources from an external cloud provider,” Oberlin said.
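A rough sketch of that “burst to an external provider” logic, with a hypothetical provider client standing in for whatever public-cloud API an enterprise actually uses, might look like this:

```python
# Sketch of cloud bursting: satisfy demand from internal pools (possibly in
# several data centers) first, and rent external capacity only for whatever
# remains of an unplanned spike. The provider client is hypothetical.

def allocate_capacity(demand_units, local_pools, external_provider):
    """Return the servers/instances assigned to cover `demand_units` of demand."""
    assigned = []

    # 1. Draw from internal pools across the enterprise's own data centers.
    for pool in local_pools:
        while demand_units > 0 and pool.has_idle_server():
            assigned.append(pool.claim_idle_server())
            demand_units -= 1

    # 2. Burst: cover any remainder with short-lived, pay-by-the-hour instances.
    while demand_units > 0:
        assigned.append(external_provider.rent_instance())
        demand_units -= 1

    return assigned
```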

This practice of using an external public cloud provider for extra capacity will become increasingly common, Forrester’s Staten said. “At the moment, we are seeing companies using isolated internal clouds, but we certainly think they will end up adopting a ‘hybrid cloud’ or ‘cloud bursting’ approach.” IBM demonstrated a system earlier this month that enables companies to move application processing from an internal cloud to a public cloud facility, while keeping all data stored within the private cloud. This type of hybrid cloud approach could prove popular with companies that can’t otherwise use a public cloud for security or regulatory reasons.
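The article doesn’t detail IBM’s mechanism, but the general hybrid pattern (processing runs on public-cloud instances while the data itself stays in private-cloud storage) can be sketched roughly as follows; every interface shown here is purely illustrative.

```python
# Rough sketch of the hybrid pattern: a job runs on public-cloud capacity but
# reads from and writes to storage that never leaves the private cloud,
# reached over a secure link. All objects and methods are illustrative.

def run_hybrid_job(job, public_cloud, private_store):
    # Hand the public cloud only references into private storage;
    # the dataset itself is never bulk-copied out of the private cloud.
    input_ref = private_store.reference(job.dataset)
    output_ref = private_store.reference(job.output)

    instance = public_cloud.start_instance(image=job.image)
    try:
        # The instance reads input through the reference, processes it, and
        # writes results straight back into private storage.
        instance.run(job.code, input_ref=input_ref, output_ref=output_ref)
    finally:
        instance.terminate()   # nothing persists in the public cloud afterwards
```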

It’s still early days for cloud computing, but if efficiency gains on the order of 40 percent to 50 percent are possible, then enterprises unwilling or unable to use a public cloud provider will have to give internal clouds a very long, hard look.
