[Info-vax] State of the Port - July 2017
Kerry Main
kemain.nospam at gmail.com
Thu Jul 20 19:44:23 EDT 2017
> -----Original Message-----
> From: Info-vax [mailto:info-vax-bounces at rbnsn.com] On Behalf Of
> Stephen Hoffman via Info-vax
> Sent: July 20, 2017 5:38 PM
> To: info-vax at rbnsn.com
> Cc: Stephen Hoffman <seaohveh at hoffmanlabs.invalid>
> Subject: Re: [Info-vax] State of the Port - July 2017
>
> On 2017-07-18 21:27:18 +0000, Scott Dorsey said:
>
> > Infiniband is designed for low latency. If what you need is the
> > lowest possible latency, Infiniband is likely a big win over
> > ethernet. If you need fastest throughput for bulk transfers,
> > ethernet is likely a big win for you instead.
>
> Ethernet is reaching well up into the same market Infiniband is aimed
> at, and VSI is going to want to and need to go after better Ethernet
> support to start with as it's far more broadly applicable. Once the
> x86-64 port is out and VSI has 40 GbE and 100 GbE and other related
> support available, then maybe adding Infiniband support gets
> interesting.  This if there's enough of an advantage over then-current
> Ethernet and then-current Infiniband.
>
> Some related reading, both for and against...
>
> https://www.nextplatform.com/2015/04/01/infiniband-too-quick-for-ethernet-to-kill-it/
>
> http://www.chelsio.com/wp-content/uploads/2013/11/40Gb-Ethernet-A-Competitive-Alternative-to-InfiniBand.pdf
>
> https://www.nas.nasa.gov/assets/pdf/papers/40_Gig_Whitepaper_11-2013.pdf
>
>
> For some of the discussions of why supporting faster Ethernet can
> involve kernel performance and tuning issues, here's a
> previously-posted discussion from the Linux kernel:
>
> https://lwn.net/Articles/629155/
>
>
>
> If VSI does decide to go after HPC with OpenVMS, then maybe we see
> Infiniband support added. But Ethernet is ubiquitous.
>
> And yes, Infiniband is interesting, and clustering over Ethernet RDMA
> (iWARP) might well be patterned after the Memory Channel work, but
> there's a bunch of stuff in the queue ahead of iWARP and Infiniband.
>
The big advantage with Infiniband and RoCEv2 is not only large
bandwidth, but much, much lower latency, which of course is perfect for
cluster communications.
Note that RoCEv2 is orders of magnitude better than the original RoCE
(v1) spec, which was what OpenVMS first looked at.
The v2 spec (2014 timeframe) preserves a great deal of application /
driver transparency, which, in theory, means it might not be that hard
to adopt for OpenVMS.
Reference page 2:
<http://www.mellanox.com/related-docs/whitepapers/roce_in_the_data_center.pdf>
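
To make the latency point above concrete, here is a rough sketch (mine,
not from the whitepaper) of what the verbs-level data path looks like on
Linux with libibverbs. It assumes the queue pair, completion queue and
memory registration are already set up and that the peer's buffer
address and rkey have been exchanged out of band; the function name and
parameters are just illustrative, and an OpenVMS implementation would of
course look different:

/* Minimal sketch of the verbs-level RDMA data path (Linux, libibverbs).
 * Illustrative only -- not OpenVMS code.  Assumes the queue pair (qp),
 * completion queue (cq), registered memory region (mr), and the peer's
 * remote_addr/rkey have already been set up and exchanged out of band. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

int rdma_write_example(struct ibv_qp *qp, struct ibv_cq *cq,
                       struct ibv_mr *mr, void *local_buf, size_t len,
                       uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,   /* local source buffer     */
        .length = (uint32_t)len,
        .lkey   = mr->lkey,               /* key from ibv_reg_mr()   */
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.wr_id               = 1;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.opcode              = IBV_WR_RDMA_WRITE;   /* one-sided write      */
    wr.send_flags          = IBV_SEND_SIGNALED;   /* ask for a completion */
    wr.wr.rdma.remote_addr = remote_addr;         /* peer's buffer        */
    wr.wr.rdma.rkey        = rkey;                /* peer's access key    */

    /* Post directly to the NIC's send queue -- no kernel call, no copy. */
    if (ibv_post_send(qp, &wr, &bad_wr))
        return -1;

    /* Busy-poll the completion queue in user space. */
    struct ibv_wc wc;
    int n;
    while ((n = ibv_poll_cq(cq, 1, &wc)) == 0)
        ;  /* spin */
    return (n < 0 || wc.status != IBV_WC_SUCCESS) ? -1 : 0;
}

The point of the sketch is that the work request goes straight to the
NIC and the completion is polled in user space - no system calls,
interrupts or socket buffers in the fast path - which is where the
latency advantage over conventional Ethernet stacks comes from.
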
RoCEv2 spec release in the 2014 timeframe:
<https://www.youtube.com/watch?v=8kTAXhujn08>
Extract:
- Transparent to applications and underlying network infrastructures
  (km question - how much effort to adapt for cluster comm's? - see the
  sketch below)
- Infiniband Architecture followed the OSI model closely
- RoCEv2 only modified the third layer
- Frames generated and consumed in the NIC (below the API)
- Enables standard network mechanisms for forwarding, management,
  monitoring, metering, accounting, firewalling, snooping and multicast
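
Regarding the "transparent to applications" bullet above, here is a
rough sketch (again mine, not from the Mellanox material) of what that
transparency looks like in practice: the same librdmacm connection code
runs unchanged whether the adapter underneath is an InfiniBand HCA or a
RoCEv2-capable Ethernet NIC, because only the layer-3 encapsulation
differs below the verbs API. The host name and port number are
placeholders, and this says nothing about how much work an OpenVMS
cluster comm's port would actually be:

/* Minimal client-side connection sketch (Linux, librdmacm).
 * The same code works over InfiniBand or RoCEv2: the fabric choice is
 * hidden below the verbs API; only the L3 encapsulation differs.
 * "peer-node" and "7471" are placeholders for illustration. */
#include <rdma/rdma_cma.h>
#include <stdio.h>
#include <string.h>

int connect_peer(struct rdma_cm_id **id_out)
{
    struct rdma_addrinfo hints, *res = NULL;
    memset(&hints, 0, sizeof(hints));
    hints.ai_port_space = RDMA_PS_TCP;          /* reliable connected QP */

    if (rdma_getaddrinfo("peer-node", "7471", &hints, &res)) {
        perror("rdma_getaddrinfo");
        return -1;
    }

    struct ibv_qp_init_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.cap.max_send_wr  = attr.cap.max_recv_wr  = 16;
    attr.cap.max_send_sge = attr.cap.max_recv_sge = 1;
    attr.qp_type = IBV_QPT_RC;

    struct rdma_cm_id *id = NULL;
    /* Creates the endpoint and queue pair on whatever RDMA device
     * (IB HCA or RoCE-capable Ethernet NIC) the address resolves to. */
    if (rdma_create_ep(&id, res, NULL, &attr)) {
        perror("rdma_create_ep");
        rdma_freeaddrinfo(res);
        return -1;
    }
    rdma_freeaddrinfo(res);

    if (rdma_connect(id, NULL)) {               /* default conn params */
        perror("rdma_connect");
        rdma_destroy_ep(id);
        return -1;
    }

    *id_out = id;   /* ready for rdma_post_send()/rdma_post_recv() etc. */
    return 0;
}
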
Regards,
Kerry Main
Kerry dot main at starkgaming dot com