It goes without saying that virtualization brings many benefits to enterprises. Fewer, more efficiently running boxes are needed, which results in lower cooling requirements and less power consumption.
In other words, a greener data center. Such was the message IBM delivered at the launch of its “Project Big Green” in New York City yesterday. Big Blue has reallocated $1 billion per year across its business with the goal of increasing the level of energy efficiency in IT.
There was little that was actually new in the initiative, save several services offerings. Its mere creation, however, speaks volumes about where IBM sees the data center headed and the role virtualization will play to get it there.
Under Project Big Green, resources will be positioned into a five-step approach: Diagnose, Build, Virtualize, Manage and Cool.
Each step in the holistic plan is designed to lead to the next, so by the time an enterprise arrives at the virtualization stage, it has already evaluated the facilities it has in place and formulated a plan to either build or update the facility.
It’s no mystery that IBM has a 35-year legacy of virtualization, a legacy that began with mainframes and found its way through its Unix-based System p servers, and into its x86-based servers and blades. IBM was also an early champion of VMware.
Performance metrics from IBM reveal the typical mainframe is 80 percent productive, the typical Unix box is 20 percent productive and the typical x86 server is 8 percent productive.
Bill Zeitler, senior vice president and group executive of IBM Systems and Technology Group, attributes the bulk of energy issues to “the build out of the Internet [which] resulted in an explosion of servers.”
In other words, scale out, which was a mantra of sorts at the turn of this century.
In addition to servers, storage devices have also seen explosive growth in the past decade, increasing by a factor of 69, to be exact.
“Improving asset utilization through virtualization holds the most immediate near-term return on investment,” Zeitler said.
IBM offers a host of solutions to address specific energy issues: PowerExecutive (which helps meter, control and cap power usage), Heat eXchanger (which reduces heat by as much as 50 percent to 60 percent), stored cooling solutions, application-specific engines (servers designed from chip to box for particular applications) and BladeCenter.
All work well with virtualization, particularly application-specific engines and BladeCenter, Zeitler said. IBM currently offers application-specific engines tailored for Java, XML-intensive applications, data management and Linux, Richard Lechner, vice president of IT Optimization, told ServerWatch.
Enterprises that run all of their data management apps on a box tailored for such usage and then virtualize it will reap greater benefits than if they virtualize a general-purpose server and run similar apps, whether concentrated or not, he said.
IBM hardly has a lock on server room energy efficiency. It is a hot topic of conversation among the OEMs these days, and Sun Microsystems has been focusing on it for several years now. Big Blue is, however, unique in the end-to-end proposition it is offering.
RapidMind Sprints to 2.0
Obviously, IBM is also not the only vendor looking at how to get the most out of your hardware and virtual machines. RapidMind is tackling this from a completely different angle — the developer community.
On Monday, RapidMind released version 2.0 of RapidMind Developer Platform. The solution is a platform for software developers looking to exploit the capabilities multicore chips offer.
Current applications aren’t written to take advantage of the performance increases multicore offers. Even virtualization, which is as compatible with multicore as peanut butter is with jelly, isn’t able to reap the benefits. “Virtualization allows enterprises to take advantage of multiple cores, but each application gets only a single core, so, in effect, you have a host of single-core processors,” Ray DePaul, chief executive officer of RapidMind, told ServerWatch.
RapidMind alleviates this pain point by enabling developers to program in C++ using their compilers and tools and then “parallelizing,” or distributing, data across many cores to boost the hardware performance. Applications created on RapidMind are hardware-independent and can scale to future multicore chips.
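The data-parallel model RapidMind sells can be sketched in plain C++: split an array across all available cores and run the same kernel on each slice. The sketch below uses `std::thread` from the standard library rather than RapidMind's actual API (which is not shown here), and `parallel_square` is a hypothetical kernel:

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Apply the same operation to every element of `data`, splitting the
// work across all available hardware cores. A generic illustration of
// data-parallel distribution, not RapidMind's API.
void parallel_square(std::vector<double>& data) {
    const std::size_t n_threads =
        std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (data.size() + n_threads - 1) / n_threads;

    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < n_threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = std::min(begin + chunk, data.size());
        if (begin >= end) break;
        workers.emplace_back([&data, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                data[i] *= data[i];  // the "kernel" each core runs on its slice
        });
    }
    for (auto& w : workers) w.join();
}
```

Because the split is computed from `hardware_concurrency()` at runtime, the same binary spreads across however many cores the chip provides, which is the scaling property RapidMind emphasizes.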
The new version adds support for developers writing software for IBM’s Cell Broadband Engine and graphic processor units (GPUs).
By simply porting a small subset of an application to a GPU, application performance increases dramatically, DePaul said. The bulk of the application continues to run on the Intel or AMD processor it was originally written for, so aside from the better performance, the end user would be unaware.
RapidMind supports Windows XP Pro, Windows Vista and certain Red Hat, Fedora, Ubuntu and Yellow Dog flavors of Linux for the ATI x1X00, Nvidia Quadro and GeForce GPUs, and Cell/B.E. hardware.
Version 2.0 is available now, with a free developer edition available for download.
VMware Rolls Back Time
Meanwhile, VMware went back to its roots with the release of Workstation 6. Two years in the making, Workstation 6 debuts technologies for the entire VMware family, as is typical of Workstation releases, James Phillips, senior director of Software Lifecycle Solutions at VMware, told ServerWatch.
Although designed to be run on the desktop, VMware Workstation is not a virtual desktop in the now-common, thin-client sense. Rather, it’s a desktop provisioned into multiple virtual machines, similar to its server siblings. Workstation is frequently used in preproduction testing, rehosting of legacy applications, rapid provisioning and resetting of multitier environments, and accelerated software development and testing.
An August 2006 survey conducted by VMware revealed developers make up 20 percent to 30 percent of Workstation’s user base, while IT admins account for about 75 percent, Phillips said.
The two new features in Workstation 6 that VMware is shouting loudest about are continuous virtual machine record and replay, and cross-platform paravirtualization.
Record and replay is designed to enable users to record the execution of a virtual machine, including inputs, outputs and decisions made along the way.
Later, the user can return to the start of the recording and replay the execution, guaranteeing the virtual machine will perform exactly the same operations every time. Thus, bugs can be reproduced and resolved.
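The principle behind this is deterministic replay: capture every nondeterministic input the first time through, then feed the same log back so a second run makes identical decisions. A minimal, hypothetical sketch of that idea (VMware records at the virtual machine level, not through an API like this):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Record nondeterministic inputs (clock reads, device data, etc.) on the
// first run; on replay, return the logged values instead, so the program
// takes exactly the same path. Illustrative only.
class Replayer {
public:
    // Wrap any nondeterministic source behind this call.
    int next(const std::function<int()>& source) {
        if (replaying_) return log_[cursor_++];  // replay recorded value
        int v = source();                        // record live value
        log_.push_back(v);
        return v;
    }
    void rewind() { replaying_ = true; cursor_ = 0; }

private:
    std::vector<int> log_;
    std::size_t cursor_ = 0;
    bool replaying_ = false;
};
```

Once every nondeterministic input flows through such a log, rerunning from the start reproduces the exact same execution, which is what makes a once-in-a-thousand-runs bug reproducible on demand.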
The other key enhancement is cross-platform paravirtualization with the open-interface standard paravirt-ops; VMware is laying claim to being the first to offer support for it.
Paravirtualized Linux operating systems are modified operating systems specifically optimized to run in a virtual environment. They enable transparent paravirtualization, allowing users to run the same Linux kernel in paravirtualized mode on a hypervisor as well as on native hardware.
The Linux kernel has supported paravirt-ops since version 2.6.20, Phillips said. IBM, Red Hat, VMware and XenSource contributed to the Linux community’s creation and incorporation of the interface.
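The mechanism behind an interface like paravirt-ops can be sketched as a table of function pointers bound at boot to either native or hypervisor-backed implementations, which is how one kernel binary runs in both modes. The names below are illustrative, not the Linux kernel's actual structures:

```cpp
#include <string>

// Conceptual "ops table": privileged operations go through function
// pointers, filled in at boot with either native or hypervisor-backed
// implementations. (Illustrative; real paravirt-ops structures differ.)
struct CpuOps {
    std::string (*read_timer)();
};

static std::string native_read_timer()    { return "native rdtsc"; }
static std::string hypercall_read_timer() { return "hypercall to host"; }

static CpuOps cpu_ops;  // one kernel binary, two possible bindings

void boot(bool running_on_hypervisor) {
    cpu_ops.read_timer =
        running_on_hypervisor ? hypercall_read_timer : native_read_timer;
}

std::string read_timer() { return cpu_ops.read_timer(); }
```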
Other new features in VMware Workstation are Windows Vista support, USB 2.0 support (which enables users to take advantage of high-performance peripherals, such as Apple iPods and fast storage devices), ACE authoring capabilities, integrated Physical-to-Virtual (aka P2V) functionality and virtual debugger, background virtual machine execution, and automated API capabilities.
VMware Workstation 6 is available for immediate purchase via download. It carries a list price of $189.
Symantec Virtual Support, More Than Words
Finally, what’s a week in the virtual realm without news on the management tools front?
This week, the player was Symantec, which Tuesday announced planned enhancements for its Veritas Server Foundation, a product family designed to improve application availability through automation, control of configuration changes and server utilization optimization.
Server Foundation consists of Veritas Cluster Server, Veritas Configuration Manager, Veritas Provisioning Manager, and Veritas Application Director.
Although not specifically designed for virtual environments, Veritas Server Foundation takes virtualized environments into account, particularly through Veritas Application Director, which provides runtime control of applications and virtual machines.
Server Foundation competes with products from Realops, Opalis and Opsware, Rob Greer, director of product marketing for Server Foundation, told ServerWatch.
New features will include workflow and configuration control capabilities to help minimize downtime, automate and standardize execution of critical processes, and provide increased visibility across the data center.
What Symantec claims sets Server Foundation apart is that it enables both end-user orchestration and infrastructure automation, with automation capabilities at both the server and storage layers through integration with Veritas CommandCentral Storage.
This round of enhancements to Server Foundation is not expected to ship until July 9, Greer said. Although pricing is not public at this time, Config 6.0 and Provisioning 5.0 will be available for purchase as a bundle with a “strong price advantage to buy them together,” Greer said.
Amy Newman is the managing editor of ServerWatch. She has been covering the virtualization space since 2001.