[Info-vax] State of the Port - July 2017
Stephen Hoffman
seaohveh at hoffmanlabs.invalid
Thu Jul 20 17:37:44 EDT 2017
On 2017-07-18 21:27:18 +0000, Scott Dorsey said:
> Infiniband is designed for low latency. If what you need is the lowest
> possible latency, Infiniband is likely a big win over ethernet. If you
> need fastest throughput for bulk transfers, ethernet is likely a big
> win for you instead.
Ethernet is reaching well up into the same market InfiniBand is aimed
at, and VSI is going to want and need to go after better Ethernet
support first, as it's far more broadly applicable. Once the x86-64
port is out and VSI has 40 GbE and 100 GbE and other related support
available, then maybe adding InfiniBand support gets interesting.
That is, if then-current InfiniBand offers enough of an advantage
over then-current Ethernet.
Some related reading, both for and against:
https://www.nextplatform.com/2015/04/01/infiniband-too-quick-for-ethernet-to-kill-it/
http://www.chelsio.com/wp-content/uploads/2013/11/40Gb-Ethernet-A-Competitive-Alternative-to-InfiniBand.pdf
https://www.nas.nasa.gov/assets/pdf/papers/40_Gig_Whitepaper_11-2013.pdf
For some of the discussions of why supporting faster Ethernet can
involve kernel performance and tuning issues, here's a
previously-posted discussion from the Linux kernel:
https://lwn.net/Articles/629155/
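To give a flavor of what that host-side work looks like, here is a sketch of the sort of Linux tuning commonly applied at 40/100 GbE rates (ring buffer sizing, interrupt coalescing, socket buffer ceilings, receive-queue spreading). The interface name and every value here are illustrative assumptions, not recommendations for any particular NIC or workload:

```shell
# Sketch of host-side tuning often needed at 40/100 GbE rates.
# "eth0" and all values below are illustrative assumptions; check
# your NIC, driver, and workload before applying any of these.

# Enlarge the NIC's RX/TX descriptor rings to absorb traffic bursts.
ethtool -G eth0 rx 4096 tx 4096

# Coalesce interrupts so the host isn't swamped at high packet rates
# (trades a little latency for much lower per-packet overhead).
ethtool -C eth0 rx-usecs 50

# Raise the kernel's socket buffer ceilings for bulk transfers.
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=67108864

# Spread receive processing across CPU cores (RSS queue count).
ethtool -L eth0 combined 8
```

Each of these knobs is a trade-off between latency, throughput, and CPU load, which is exactly the balancing act the LWN discussion above goes into.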
If VSI does decide to go after HPC with OpenVMS, then maybe we see
InfiniBand support added. But Ethernet is ubiquitous.
And yes, InfiniBand is interesting, and clustering over Ethernet RDMA
(iWARP) might well be patterned after the Memory Channel work, but
there's a bunch of stuff in the queue ahead of iWARP and InfiniBand.
--
Pure Personal Opinion | HoffmanLabs LLC