Hardware Today: Cutting Through the Infiniband Buzz, Page 2
The Downside of Infiniband
"The real problem for Infiniband is: How good is good enough?" Gartner Research Vice President James Opfer told ServerWatch, "There's a certain market available for clustering, and it's arguably true that Infiniband does that better, but Ethernet may do it good enough." Although Infiniband has generated increased interest in the past six to nine months, he adds, it "hasn't penetrated [the market] in a major way as of yet."
Gartner does not break out Infiniband-specific market data at this time but believes the technology has its work cut out for it if it is to push Fibre Channel out of the server closet. Opfer also anticipates Fibre Channel will be alive and kicking through 2010.
Further, Infiniband's performance edge isn't a sure bet down the road. "Infiniband doesn't seem to be a huge risk right at the moment, but it's not like Ethernet for long-term durability," Opfer said, and when 10 Gb Ethernet arrives, "if you're going over backplanes, 10 Gb Ethernet will be about the same as 10 Gb Infiniband."
In fact, slower components, in particular the PCI and PCI-X buses, could keep a data center from ever seeing the full 10 Gbps speed boost: their relatively low peak bandwidth prevents the realization of such a boost. PCI Express, which was designed specifically to keep pace with emerging interconnects like Infiniband and provides 2.5 Gbps per lane, addresses this bottleneck.
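To put those numbers in perspective, here is a rough back-of-the-envelope comparison of theoretical peak bus bandwidth against a 10 Gbps Infiniband link. The figures are the commonly cited specification maxima, not measured throughput, and real shared-bus throughput (PCI, PCI-X) falls well short of these peaks:

```python
# Rough peak-bandwidth check: can the host bus keep up with a
# 10 Gbps Infiniband link?  All figures are theoretical maxima.
# 8b/10b line coding turns 10 Gbps of signaling into 8 Gbps of data.
IB_DATA_MBS = 10_000 * 8 / 10 / 8        # -> 1000 MB/s of payload

buses_mbs = {
    "PCI 32-bit/33 MHz": 32 / 8 * 33,          # ~132 MB/s, shared bus
    "PCI-X 64-bit/133 MHz": 64 / 8 * 133,      # ~1064 MB/s peak, but shared
    "PCIe x4 (2.5 Gbps/lane)": 4 * 2500 / 10,  # ~1000 MB/s after 8b/10b
}

for bus, mbs in buses_mbs.items():
    print(f"{bus:24s} {mbs:6.0f} MB/s  ({mbs / IB_DATA_MBS:.0%} of the link)")
```

Plain PCI tops out at roughly an eighth of the link's data rate, which is the bottleneck Opfer describes; a four-lane PCI Express slot is the first host interface whose dedicated, full-duplex lanes match a 10 Gbps Infiniband port on paper.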
Further marring the landscape have been the changes in Infiniband's development. Early on, Infiniband was touted as a radical channel architecture replacement for the PCI load and store architecture internal to servers. Had it succeeded, it would have prevented these bottlenecks.
"They were going to go all the way to the processor with Infiniband," Opfer notes, "and that hasn't happened."
Once Infiniband resigned itself to external, inter-server communications conveyed over internal PCI architectures via a host channel adapter (HCA), "the game was over. The revolution was over from that point on, it was just a different implementation of load and store architecture," Opfer eulogized.
Originally, a different funeral had been planned. "They were saying that PCI's dead, that this was going to be the follow-on to PCI, and it's not, it runs on [PCI or PCI Express], it doesn't sound like it's displaced PCI to me," he said.
Infiniband's advantage stems from features like RDMA, transport offload, and sheer speed: compelling attributes, but ones that seem more a matter of reform than revolution. In the data center, however, reform makes for sensible choices.