Inauguration Coverage Hints at Potential for Cloud, Virtualization




Whatever your political views, yesterday was historic. While Virtually Speaking is obviously not a political column, we would be remiss to underestimate the technological implications of the day, which extend far beyond the event itself and politics in general.

Virtually Speaking: Although the technological aspects of yesterday’s history making did not take center stage, they reveal a shift from passive viewing of major events to participation. These same elements are positioning virtualization and cloud computing to reap the benefits in more mundane times.

Millions, perhaps billions, of people viewed yesterday’s inauguration festivities, both live in Washington, D.C. and online throughout the world. Oh, yeah, and you could also view the inauguration on TV (which I humbly admit to having ended up doing with colleagues when the feed I was watching in the office choked on its own bandwidth).

While an early roundup of traffic statistics reveals it was not the busiest day ever for the web, it was up there. Virtually every media outlet, from the New York Times to the Presidential Inaugural Committee’s website, delivered live streaming of the swearing-in and speech, as well as much of the day’s festivities. In most cases, outlets not only broadcast the event itself but also attempted to add their own unique value, with the general aim of turning viewers into participants.

A CNN/Facebook partnership, in which Cisco played a major (albeit underreported) role, was perhaps the most ambitious and also offers the most food for thought about the future. The news network struck a partnership with the social-networking company to stream inauguration coverage on its site. Cisco provided the all-important, but far less glamorous, pipe. The video feed appeared on the left side of a split-screen window, and Facebook status updates could be posted on the right. The result was discussions among friends and, theoretically, the feeling of being part of the action while thousands of miles away from the freezing cold Mall.

Based on the technology coverage of yesterday’s events, it appears there were no major bandwidth hiccups, nor any sense that the web was a secondary medium as the day unfolded.

And with this fresh in everyone’s mind, the timing on Citrix’s latest announcement couldn’t be better.

On Wednesday, the company highlighted the next phase of its vision and roadmap. According to a publicly released statement, it is poised to “radically change the economics of desktop computing.” The company has partnered with Intel to develop virtualization solutions that optimize the delivery of applications and desktops for devices based on Intel Core 2 and Centrino 2 processors.

When your data resides elsewhere, performance, and thus the ability to get to your data, is even more critical than when it’s stored locally. It is not surprising, therefore, that the Citrix/Intel solution will center on a bare-metal, Xen-based desktop hypervisor optimized for Intel Virtualization Technology and other features of Intel vPro technology.

The technology will be incorporated into an upcoming Citrix offering currently code-named “Project Independence.” The actual rollout is planned for the second half of 2009.

What sets this solution apart from more traditional server-based desktop virtualization technologies is that it caches and executes desktop and application software directly on the client, eliminating potential latency issues and allowing for offline use. The advantages of desktop virtualization, such as easier management and better security, remain.

In the grand scheme of Citrix’s vision, enterprises and their end users stand to benefit the most from the partnership. However, the target market for the virtualization-optimized processor is OEMs, Calvin Hsu, director of product marketing for Citrix’s desktop delivery group, told ServerWatch. The software will ship preinstalled. Hsu noted, however, that the retrofit market will come into play as well.

Dell is one OEM already on board. It will certify the product to run on its computing platforms and has provided engineering support to aid in the design and testing of the new technology.

Persuading companies to spend money to save money is always a tough sell, and it is an even tougher sell in difficult economic times. Add unproven or semi-proven technology to the mix, and it’s near impossible. But if the costs are cut and proof-points (like yesterday) are present, the risk seems mitigated and the reward more within reach.

Amy Newman is the managing editor of ServerWatch. She has been following the virtualization space since 2001.
