Hardware Today — Cutting Through the InfiniBand Buzz

Interconnect technology is at a crossroads. Ethernet is standards-based and omnipresent but lags in performance, while Fibre Channel offers better performance but remains a pricier, storage-centric niche. Further muddying the waters is a newcomer that has been inching slowly toward data center acceptance: InfiniBand, a standards-based collaborative effort that delivers 10 Gbps performance.

InfiniBand is both an I/O architecture and a specification for transmitting data between processors and I/O devices, and it has been gradually replacing the PCI bus in high-end servers and PCs. Instead of sending data in parallel, as PCI does, InfiniBand sends data serially and can carry multiple channels of data at the same time in a multiplexed signal.
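
The headline speed comes from lane aggregation. Here is a minimal sketch of the arithmetic, assuming the original (SDR) rate of 2.5 Gbps per lane, the common 4X link width, and the spec's 8b/10b line encoding:

```c
#include <stdio.h>

int main(void) {
    /* InfiniBand SDR: each lane signals at 2.5 Gbps; a 4X link bundles
       four lanes. 8b/10b encoding spends 10 line bits per 8 data bits. */
    double lane_gbps = 2.5;
    int    lanes     = 4;

    double signal = lane_gbps * lanes;      /* 10 Gbps raw signaling rate */
    double data   = signal * 8.0 / 10.0;    /* 8 Gbps of usable data      */

    printf("4X link: %.0f Gbps signaling, %.0f Gbps data after 8b/10b\n",
           signal, data);
    return 0;
}
```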

When the InfiniBand Trade Association formed five years ago, the consortium set out to rock the networking world and win its combined spec and architecture top billing over Fibre Channel and Ethernet. We look at where it stands today.

InfiniBand has been gaining traction with early adopters, as evidenced by the latest Top 500 supercomputer list, unveiled last week. According to an InfiniBand Trade Association spokesperson, 11 of the machines on the list were built on InfiniBand fabrics, up from six last year.

If the universities and research labs that make up InfiniBand's early adopter base, using it for high-performance computing and clusters, are any indication, the technology will keep making a dent in the increasingly grid-centric, Intel-dominated Top 500 as time passes. But right now, the technology has its sights set on a wider market: the data center.

InfiniBand Selling Points

Kevin Deierling, vice president of product marketing for Mellanox, a company that manufactures InfiniBand silicon and related hardware, elaborated on InfiniBand's four major strengths: a standards-based protocol, 10 Gbps performance, Remote Direct Memory Access (RDMA), and transport offload.

Standards: The InfiniBand Trade Association, a consortium of 225 companies, choreographs the open standard. Founded in 1999, the organization is driven by an unlikely gaggle of steering members: Agilent, Dell, HP, IBM, InfiniSwitch, Intel, Mellanox, Network Appliance, and Sun Microsystems. More than 100 other member companies join them in what Deierling dubs “co-opetition” to develop and promote the InfiniBand specification.

Speed: InfiniBand's 10 gigabits per second soundly beats Fibre Channel's current top speed of 4 gigabits per second and Ethernet's 1 gigabit per second.

To avoid confusion or misleading sales pitches, remember that GBps (with a capital “B”) stands for gigabytes per second, while Gbps (with a lowercase “b”) stands for gigabits per second, one-eighth as much data.
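
A quick, illustrative conversion of the speeds quoted above (8 bits to the byte):

```c
#include <stdio.h>

int main(void) {
    /* Convert the link speeds cited in this article from gigabits per
       second (Gbps) to gigabytes per second (GBps). */
    const char  *links[] = { "Gigabit Ethernet", "Fibre Channel", "InfiniBand 4X" };
    const double gbps[]  = { 1.0, 4.0, 10.0 };

    for (int i = 0; i < 3; i++)
        printf("%-17s %5.1f Gbps = %5.3f GBps\n",
               links[i], gbps[i], gbps[i] / 8.0);
    return 0;
}
```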

Memory: InfiniBand-enabled servers use a Host Channel Adapter (HCA) to translate the protocol across the server's internal PCI-X or PCI Express bus. HCAs feature RDMA, which is sometimes dubbed kernel bypass. RDMA is considered ideal for clusters, as it enables servers to read and manipulate regions of each other's memory through a virtual addressing scheme, without involving the operating system's kernel.
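
To make the kernel-bypass idea concrete, here is a minimal sketch using the libibverbs API, the C library commonly used to program InfiniBand HCAs. It only opens a device and registers a buffer for remote access; connection setup and the actual RDMA reads and writes are omitted, and a verbs-capable HCA is assumed to be present:

```c
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void) {
    /* Enumerate HCAs and open the first one found. */
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) { fprintf(stderr, "no HCA found\n"); return 1; }
    struct ibv_context *ctx = ibv_open_device(devs[0]);

    /* A protection domain scopes which queue pairs may touch which memory. */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register (pin) a buffer so the HCA can DMA to and from it directly,
       granting remote peers read/write access. This is the step that lets
       another server manipulate this memory without involving the kernel. */
    size_t len = 4096;
    void  *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    printf("registered %zu bytes; remote key 0x%x\n", len, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

The remote key (rkey) printed at the end is what a peer presents with its RDMA requests; the virtual address plus rkey together form the addressing scheme described above.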

Transport Offload: RDMA is a close companion of transport offload, which moves data packet processing from the OS to the chip level, freeing processing power for other functions. Processing a 10 Gbps pipe entirely in the OS would require an 80 GHz processor, Deierling estimates.
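
For a back-of-envelope sense of why offload matters, the sketch below assumes a hypothetical per-packet cycle cost; the figure is illustrative, not Deierling's, and smaller packets or heavier stacks drive the requirement far higher:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical cost of an OS network stack per packet (copies,
       interrupts, checksums); real figures vary widely by stack and era. */
    double link_bps          = 10e9;        /* 10 Gbps pipe          */
    double packet_bits       = 1500 * 8;    /* full-size frames      */
    double cycles_per_packet = 10000;       /* assumed, illustrative */

    double pps = link_bps / packet_bits;          /* ~833K packets/s  */
    double ghz = pps * cycles_per_packet / 1e9;   /* CPU clock needed */

    printf("%.0f packets/s -> %.1f GHz of CPU just for the stack\n", pps, ghz);
    return 0;
}
```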

Bait and Switches

Voltaire, a Massachusetts-based provider of InfiniBand switches and other hardware, is one vendor dangling incentives to encourage enterprises to deploy InfiniBand.

“[In] a cluster, you have many different fabrics to do different functions,” Voltaire Vice President of Marketing Arun Jain said. Such an architecture requires gateways or routers to translate between disparate network protocols. “We’re the only vendor that has these gateways integrated into our switch chassis,” Jain said. This, he claims, saves customers both money and space, and results in a system that is “much more reliable and simpler to manage.”

To that end, the vendor's flagship ISR9288 offering provides 288 non-blocking ports in its 14U chassis. Non-blocking ports allow each node to communicate with the others “at full bandwidth,” Jain said. This cuts down on cabling requirements, which in turn cuts costs.
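
To make “non-blocking” concrete: a switch qualifies when its aggregate capacity covers every port running at full line rate at once. A minimal sketch of that threshold, using the port count and link speed cited above:

```c
#include <stdio.h>

int main(void) {
    int    ports     = 288;    /* ISR9288 port count        */
    double line_gbps = 10.0;   /* InfiniBand 4X line rate   */

    /* Non-blocking requires aggregate switching capacity of at least
       ports * line rate: every port talking at full bandwidth at once. */
    double required_tbps = ports * line_gbps / 1000.0;
    printf("non-blocking threshold: %.2f Tbps aggregate\n", required_tbps);
    return 0;
}
```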

At press time, Voltaire had just announced virtualization-enabling companion IP and Fibre Channel routers, as well as an InfiniBand-optimized NAS solution.
