Open Server Summit Shines on Software-Defined Architectures

By Paul Ferrill
Posted December 29, 2014

A completely software-defined data center (SDDC) is not that far off; in fact, you can see most of the major components on display today from many vendors.

The Open Server Summit 2014 was held in Santa Clara, CA last month and had most, if not all, of the key SDDC technologies on display. You can see a list of the topics on the OSS2014 website and even download a copy of the proceedings if you want to see for yourself.

Jim Pinkerton, Partner Architect Lead at Microsoft, was deeply involved in the development of SMB 3.0, RDMA and other core server technologies. He gave one of the keynote addresses, focusing on Microsoft's recently announced Cloud Platform System (CPS), which runs on Dell hardware.

This system is massive, bringing some of the scale Microsoft uses in its Azure data centers to a deployable, on-premises solution. The sheer scale of this product boggles the mind, with quoted numbers of more than 1,300 Gbps for in-rack network bandwidth and 560 Gbps between racks.

Software-Defined Storage

One of the session tracks at OSS2014 focused on Software-Defined Storage (SDS) and related technologies. The session chair was SW Worth of Microsoft, who is on the board of directors for the Storage Networking Industry Association (SNIA). He presented a paper on the use of erasure coding as found in Microsoft's Storage Spaces feature in Windows Server 2012 R2.
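
Erasure coding spreads data across k data fragments plus m parity fragments so that any k of the k+m fragments are enough to reconstruct the original, trading a little compute for far less raw capacity than full replication. The simplest possible erasure code is single XOR parity, shown in the Python sketch below; it is purely illustrative and is not how Storage Spaces implements its parity spaces.

    # Toy erasure code: one XOR parity shard over equal-size data shards.
    # Production systems use Reed-Solomon-style codes that survive multiple
    # simultaneous failures; this sketch survives the loss of exactly one shard.
    from functools import reduce

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def encode_parity(shards: list[bytes]) -> bytes:
        """XOR all data shards together to produce the parity shard."""
        return reduce(xor_bytes, shards)

    def rebuild_missing(survivors: list[bytes], parity: bytes) -> bytes:
        """XOR the surviving shards with parity to recover the lost shard."""
        return reduce(xor_bytes, survivors, parity)

    stripe = [b"AAAA", b"BBBB", b"CCCC"]
    parity = encode_parity(stripe)
    # Lose shard 1, then rebuild it from the survivors plus parity:
    assert rebuild_missing([stripe[0], stripe[2]], parity) == stripe[1]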

Ceph, the open-source distributed storage system widely deployed with OpenStack, got its fair share of coverage as well, including a presentation by Fujitsu on how it incorporated Ceph into its CD10000 product. This beast of a system can scale up to a whopping 56 PB of storage, if that's something you might need.

One of the key technology decisions made for this product was the move from replication to erasure coding. Erasure coding is a relatively new feature in Ceph, but Fujitsu chose to adopt it in order to maximize the effective storage delivered in the box.
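
The capacity arithmetic explains that decision. Three-way replication leaves only a third of the raw disk usable, while an erasure-coded pool with, say, 8 data chunks and 3 coding chunks delivers roughly 8/11 of raw capacity and still tolerates three simultaneous failures. The quick comparison below is a back-of-the-envelope sketch; the 8+3 profile is an assumed example, not Fujitsu's published configuration.

    # Usable capacity: N-way replication vs. k+m erasure coding.
    def usable_replicated(raw_tb: float, copies: int = 3) -> float:
        return raw_tb / copies

    def usable_erasure(raw_tb: float, k: int = 8, m: int = 3) -> float:
        return raw_tb * k / (k + m)

    raw = 1000.0  # 1 PB of raw disk
    print(f"3x replication: {usable_replicated(raw):6.1f} TB usable")  # ~333.3
    print(f"EC 8+3:         {usable_erasure(raw):6.1f} TB usable")     # ~727.3

At petabyte scale, that difference of roughly 40 percentage points of raw capacity is exactly the effective-storage gain Fujitsu was after.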

New Server Architecture Solutions

One panel discussion included representatives from a number of companies bringing alternative CPU architectures to the server market. ARM was well represented, with two chip companies (AMD and Applied Micro) plus a speaker from ARM itself.

IBM discussed its latest Power architecture, and a Microsoft speaker covered the company's contributions to the Open Compute Project plus its work on using FPGA devices to speed up specific computational workloads.

With the introduction of 64-bit ARM processors plus support from Canonical and its Ubuntu operating system, there is finally a viable 64-bit alternative to x86 in the data center.

Hewlett-Packard is one of the first big server vendors to bring this to market with its recently announced M400 cartridge for the Moonshot server. The cartridge is based on the X-Gene Server on a Chip (SoC) from Applied Micro Circuits Corporation and ships with a version of Ubuntu 14.04 LTS.

Software-Defined Networking

The concept of a virtualized network is not new, but it has found its way into the product offerings of Microsoft, VMware and the OpenStack project. OpenFlow is a switch specification maintained by the Open Networking Foundation; it covers both the components and basic functionality of the hardware plus the OpenFlow management protocol. OpenStack uses both OpenFlow and its own Neutron project (formerly called Quantum) as the key components of its software-defined networking.
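
To make the controller/switch split concrete, here is a minimal OpenFlow application written with Ryu, an open-source Python controller framework (used here purely as an illustration, not as something presented at the summit). It installs the lowest-priority "table-miss" rule that forwards unmatched packets to the controller, which is the first step of almost every OpenFlow program.

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class TableMissApp(app_manager.RyuApp):
        """Install a table-miss rule that punts unmatched packets upward."""
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_features(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            # Match everything at priority 0 so any real flow entry wins.
            match = parser.OFPMatch()
            actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                              ofp.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                          match=match, instructions=inst))

Run under ryu-manager, the application receives a features event from each switch that connects and pushes the flow rule down over the OpenFlow protocol, exactly the kind of programmatic control the specification exists to enable.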

Microsoft and VMware have chosen two different technologies for their software-defined networks. In Hyper-V, Microsoft chose NVGRE (Network Virtualization using Generic Routing Encapsulation) as the encapsulation underpinning its virtualized switch.

VMware took a different approach through its acquisition of Nicira and the VXLAN (Virtual eXtensible Local Area Network) technology documented in RFC 7348. While the two technologies have similarities, they also have some significant differences, not the least of which is that NVGRE's GRE encapsulation does not ride on a standard transport protocol (TCP/UDP), while VXLAN tunnels over UDP.
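
The difference is visible in the encapsulation itself: VXLAN wraps the tenant's Ethernet frame in UDP (destination port 4789 per RFC 7348), while NVGRE rides directly on IP as protocol 47 and carries the 24-bit virtual subnet ID in the upper bits of the GRE key field. The sketch below uses the Scapy packet library to make the two stacks explicit; all addresses and IDs are placeholders.

    from scapy.layers.inet import IP, UDP
    from scapy.layers.l2 import Ether, GRE
    from scapy.layers.vxlan import VXLAN

    # The tenant frame both schemes must carry across the physical network.
    inner = Ether(src="00:11:22:33:44:55",
                  dst="66:77:88:99:aa:bb") / IP(dst="10.0.0.2")

    # VXLAN: outer IP -> UDP/4789 -> VXLAN header (24-bit VNI) -> inner frame.
    vxlan = (Ether() / IP(src="192.0.2.1", dst="192.0.2.2")
             / UDP(sport=49152, dport=4789) / VXLAN(vni=5001) / inner)

    # NVGRE: outer IP -> GRE (IP protocol 47, no TCP/UDP layer); the virtual
    # subnet ID occupies the upper 24 bits of the GRE key field.
    nvgre = (Ether() / IP(src="192.0.2.1", dst="192.0.2.2")
             / GRE(key_present=1, key=5001 << 8) / inner)

    vxlan.show()  # note the UDP layer that the NVGRE packet lacks

Because VXLAN sits on UDP, existing hardware and software can hash, load-balance and filter it like any other UDP flow, which is the practical consequence of that "standard transport" distinction.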

The real key to taking advantage of software-defined networking, as with all "software-defined" technologies, is the management layer. Configuring and reconfiguring the fundamental characteristics of a network on the fly is not something you would want to do manually. The other key is the concept of putting pieces together to build out a system able to scale and meet the demands of high volumes of data and bandwidth.
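
In practice that management layer is an API. As one hedged example using OpenStack's Python client library, openstacksdk (the cloud name and addressing below are placeholders), an orchestration system can create and wire up a tenant network in a few calls, the kind of on-the-fly reconfiguration no one would want to do by hand:

    import openstack

    # Connect using credentials from clouds.yaml; "mycloud" is a placeholder.
    conn = openstack.connect(cloud="mycloud")

    # Create an isolated tenant network and attach a subnet to it.
    net = conn.network.create_network(name="demo-net")
    subnet = conn.network.create_subnet(
        network_id=net.id,
        name="demo-subnet",
        ip_version=4,
        cidr="192.168.10.0/24",
    )
    print(f"created {net.name} ({net.id}) with subnet {subnet.cidr}")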

Final Thoughts

Scaling today's systems to meet the challenges arising from the mountains of data created by the Internet of Things will require leaning heavily on software and new technology.

While the "Software-Defined" movement might seem like a marketing-heavy phrase, it really does present a viable approach to this problem. Open source definitely has a seat at the table along with traditional virtualization players like Microsoft and VMware.

Server vendors continue to search for ways to make their products both more efficient and more powerful. New 64-bit CPUs from ARM offer an opportunity to meet that challenge, enabling the production of servers requiring significantly less power to operate than what runs in the data centers of today.

Paul Ferrill, based in Chelsea, Alabama, has been writing about computers and software for almost 20 years. He has programmed in more languages than he cares to count, but now leans toward Visual Basic and C#.
