New Trends in the World of Servers and Virtualization
CIOs have the tricky task of delivering the IT services their organizations require within the allocated budget. Over time that inevitably involves having to deliver more with less.
That's one of the reasons server virtualization has been so successful — it allows multiple virtual servers to run on a single physical machine, and it ensures the physical resources that are available are utilized more effectively and efficiently.
But what about the physical servers themselves? Can organizations get more bang for their buck there as well? The good news is that the answer is almost certainly yes, and the solution is already in use in very large organizations like cloud providers.
The Open Compute Project to the (Budgetary) Rescue
The solution that's increasingly being adopted by these large-scale operators is to use cheap standardized servers based on the Open Compute Project (OCP) specifications, made by lesser-known manufacturers such as Quanta QCT and Wiwynn, instead of established vendors like HP or Dell. (QCT is owned by Quanta, a Taiwanese company that makes servers for Dell.)
These servers aim to avoid the "gratuitous differentiation" — as those involved with the OCP are often fond of putting it — between different server vendors' products.
"Buying off-the-shelf servers is not economical as they include features that most organizations don't need," says Frank Frankovsky, Facebook's former vice president of hardware design and supply chain optimization.
"That leads to extra costs and wasted electricity, and there are issues like it not being possible to manage HP boxes in the same way as Dell boxes." Proprietary management can wreak havoc in environments such as Facebook's, he added.
As is the way with many of these things, the technology that's first adopted by high-end users eventually filters down to large enterprises and, in time, to the SME market as well. Since these OCP-compliant servers are perfectly suited for virtualization as well as conventional computing duties, expect to see many more of them in the near future.
Virtual SAN Technology Takes a Step Forward
Something else that's likely to have a big effect on many organizations in the near future is the availability of effective and low-cost virtual storage area network (SAN) technology.
SANs enable organizations to get more from their virtualized infrastructure (for example by making it easier to move virtual machines from one physical server to another, or to create high availability clusters), but they have tended to be too expensive and complicated for smaller organizations.
The virtual SAN sector got a real boost with the arrival of VMware's Virtual SAN product back in March. The key difference between most other virtualized storage systems and VMware's is that while others require a storage hypervisor or virtual storage appliance running on top of the server hypervisor, Virtual SAN is built right into VMware's hypervisor.
Being embedded in the kernel of the hypervisor means Virtual SAN is uniquely positioned in the software stack for visibility into applications, Ben Fathi, VMware's CTO, said at the time of the launch. It also has a unique view of the infrastructure beneath it that allows it to optimize the I/O data path to deliver better performance than a virtual appliance or external device, according to Fathi.
But the key benefits of Virtual SAN are that the technology is low cost, enabling smaller companies to benefit from it, and — perhaps more crucially — it is fairly simple to implement and use. "It means non-storage specialists can run storage, and don't need to attend in-depth training to do so," says Mark Peters, senior analyst at Enterprise Strategy Group.
"An IT administrator can easily see the shared Virtual SAN storage resources, compute resources, and, most importantly, the defined storage policies, all from the familiar vSphere management interface," Peters explains.
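The defined storage policies Peters mentions are essentially rules — such as how many host failures a VM's data must survive — that the storage layer satisfies automatically. As a rough illustration of that idea (a conceptual sketch only, not VMware's API; all names here are hypothetical), placement against a failures-to-tolerate policy might look like this:

```python
# Conceptual sketch of policy-driven storage placement, NOT VMware's API.
# All class and function names are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    failures_to_tolerate: int  # host failures the data must survive

@dataclass
class Host:
    name: str
    free_gb: int               # free capacity on this host's local disks

def place_replicas(policy: StoragePolicy, size_gb: int,
                   hosts: list[Host]) -> list[str]:
    """Pick enough hosts' local disks to satisfy the policy.

    Tolerating N failures requires N + 1 copies of the data,
    each on a different host with sufficient free capacity.
    """
    copies_needed = policy.failures_to_tolerate + 1
    candidates = [h for h in hosts if h.free_gb >= size_gb]
    if len(candidates) < copies_needed:
        raise ValueError("not enough hosts to satisfy the policy")
    # Prefer the emptiest hosts so load spreads evenly across the cluster.
    candidates.sort(key=lambda h: h.free_gb, reverse=True)
    chosen = candidates[:copies_needed]
    for h in chosen:
        h.free_gb -= size_gb
    return [h.name for h in chosen]
```

The point of the model is that the administrator states an outcome (survive one host failure) rather than manually carving out LUNs; the storage layer does the placement arithmetic.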
Virtual Desktop Infrastructure Finally Coming of Age?
Virtual desktop infrastructure (VDI) is a technology that always seems to be up and coming without ever actually arriving, but the desktop as a service (DaaS) market got a twin boost recently thanks to the involvement of industry heavyweights Amazon and VMware.
Amazon's Workspaces virtual computing environment was launched at the end of March (offering Mac and PC desktops as well as mobile versions for iPads and other tablets), while VMware's Horizon DaaS (offering Windows desktops and servers using the newly acquired Desktone technology) was launched a few weeks earlier.
What does this mean? Well, while it's certainly been difficult to predict when — or even if — virtual desktops will begin to take off in any significant way, surely if this doesn't get the technology adopted more widely then nothing will.
Microvisor Virtualization Makes Its Move
The final piece of technology that's likely to become more prevalent in the future is also connected with desktops: task-based "microvisor" virtualization. The thinking behind it is this: corporate servers tend to be well defended from intruders, so hackers target end users with spear phishing attacks and malware designed to compromise their machines. Those compromised machines can then be used as platforms for attacks against more valuable servers inside the corporate network.
Microvisor-based security products such as Bromium's effectively create a micro virtual machine (microVM) to run each user task (such as opening an individual web page or document), destroying it again when the page or document is closed.
Since each microVM is isolated from the others and from the operating system as a whole, this means that it doesn't matter if a user opens an infected web page or document and the microVM is compromised. Even if the malware installs a rootkit on the microVM, this will also disappear when the microVM is torn down.
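The lifecycle described above — spin up an isolated environment per task, discard it on close — can be sketched as follows. This is a conceptual illustration only, not Bromium's implementation; the `MicroVM` and `isolated_task` names are hypothetical stand-ins for the hardware-isolated containers a microvisor actually provides:

```python
# Conceptual sketch of the per-task microVM model, NOT Bromium's product.
# Names are hypothetical; real microvisors use hardware virtualization.
import contextlib

class MicroVM:
    """Minimal stand-in for a hardware-isolated micro virtual machine."""
    def __init__(self, task: str):
        self.task = task
        self.state: dict = {}      # anything malware writes lands only here

    def run(self, payload) -> None:
        payload(self.state)        # the task executes inside this VM only

@contextlib.contextmanager
def isolated_task(task: str):
    """Create a throwaway microVM for one task, destroy it afterwards."""
    vm = MicroVM(task)
    try:
        yield vm
    finally:
        vm.state.clear()           # tearing down the VM discards any
                                   # rootkit or other compromise with it

# Opening a malicious document compromises only its own microVM:
with isolated_task("open evil.pdf") as vm:
    vm.run(lambda state: state.update(rootkit=True))
# Once the task closes, the compromised state is gone with the VM.
```

The design point is that compromise is contained and ephemeral: nothing the malware did persists past the microVM's teardown, and no other task's microVM (or the host OS) was ever reachable from it.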
Will all of these technologies make it big in the next twelve months or so? Perhaps not, but each is promising enough to be well worth keeping an eye on.
Paul Rubens is a technology journalist and contributor to ServerWatch, EnterpriseNetworkingPlanet and EnterpriseMobileToday. He has also covered technology for international newspapers and magazines including The Economist and The Financial Times since 1991.