[Info-vax] VMS QuickSpecs

Dirk Munk munk at home.nl
Thu Aug 13 16:09:30 EDT 2015


Stephen Hoffman wrote:
> On 2015-08-13 15:41:17 +0000, Dirk Munk said:
>
>> The QuickSpecs don't mention SCS (used to be called LAVC?)
>> specifically. For clarity, the Ethernet part could be enhanced by
>> explicitly mentioning SCS and IP.
>
> That's nothing new.   The HP OpenVMS SPD doesn't mention SCS, either:
>
> <http://h18000.www1.hp.com/info/SP4229/SP4229PF.PDF>
>
> It's arguably below the level of what's covered in the SPD or QuickSpecs.
>
> SCS is System Communications Services; it's the underpinnings of
> clustering.  Among other supported paths, clustering has operated via
> CI, NI (Network Interconnect; Ethernet) and DSSI as communications
> interconnects, and operates via SCSI as a storage interconnect.
>
> LAVc (Local Area VAXcluster) support introduced the VAXport Port
> Emulator driver (PEDRIVER), which allowed clustering over Ethernet.
> "VAXport drivers: In a VAXcluster environment, device drivers that
> control the communication paths between local and remote ports.
> (Examples are PADRIVER for the CI, PEDRIVER for the LAN, and PIDRIVER
> for the DSSI.)"
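
For anyone who hasn't looked at that layering: the point is that SCS
talks to an abstract port, and each interconnect gets its own port
driver underneath. Very roughly the idea is something like this (a
conceptual C sketch of my own, not the real VMS driver interfaces):

    #include <stdio.h>
    #include <stddef.h>

    /* Conceptual sketch only: SCS sees one abstract port interface,
       and each interconnect (CI, LAN/Ethernet, DSSI) supplies its own
       implementation underneath. Not the actual VMS driver code. */
    struct scs_port {
        const char *name;   /* "PA" (CI), "PE" (LAN), "PI" (DSSI) */
        int (*send)(const void *msg, size_t len);
        int (*recv)(void *buf, size_t maxlen);
    };

    static int lan_send(const void *msg, size_t len)
    {
        /* PEDRIVER-like role: put the SCS message on the LAN. */
        (void)msg;
        printf("LAN port: sending %zu-byte SCS message\n", len);
        return 0;
    }

    static int lan_recv(void *buf, size_t maxlen)
    {
        (void)buf; (void)maxlen;
        return 0;   /* nothing pending in this toy example */
    }

    static struct scs_port lan_port = { "PE", lan_send, lan_recv };

    int main(void)
    {
        /* The cluster layer doesn't care which interconnect is below. */
        lan_port.send("hello-cluster", 13);
        return 0;
    }

The real thing is of course vastly more involved, but that's the shape
of it: one SCS layer on top, interchangeable port drivers underneath.
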
>
>> Fibre Channel clustering would be nice too; after all, Fibre Channel
>> is in a way comparable with Ethernet, and it can also be used for IP.
>
> That likely won't happen.  There used to be (I haven't looked
> recently) too much variation among the different FC HBA interfaces to
> make supporting that approach reasonably efficient.
>
> More recently, FCIP has become one of the typical approaches for folks
> using FC: you can tunnel FC over an IP network underneath, and since
> you already have an IP network, there's no need for IP over FC. (FCIP
> follows the same reasoning as IPCI, too: consolidation onto IP
> infrastructure.)

FCIP is a protocol to tunnel FC over long distances. IP over FC is in
principle the same as IP over Ethernet. I tried it on Solaris, and it
was extremely fast. With IPv6 you can have a packet size of 16 MB, so
there is not much protocol overhead.
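
To put a rough number on the overhead point: with a fixed IPv6 + TCP
header cost per packet (40 + 20 bytes, ignoring link framing and
options), the overhead shrinks as the packet grows. A quick
back-of-the-envelope calculation in C:

    #include <stdio.h>

    int main(void)
    {
        /* Fixed per-packet cost: IPv6 header (40) + TCP header (20).
           Link-layer framing and header options are ignored here. */
        const double hdr = 40.0 + 20.0;
        const double size[]  = { 1500.0, 9000.0, 16.0 * 1024 * 1024 };
        const char  *label[] = { "1500 B (standard Ethernet MTU)",
                                 "9000 B (jumbo frame)",
                                 "16 MB (very large packet)" };

        for (int i = 0; i < 3; i++)
            printf("%-32s header overhead ~ %.4f%%\n",
                   label[i], 100.0 * hdr / size[i]);
        return 0;
    }

Roughly 4% at 1500 bytes, about 0.7% with jumbo frames, and
essentially nothing at 16 MB.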

>
> Toss into this discussion the arrival of server boards with embedded
> 10 GbE NICs, the availability of 10 GbE and 40 Gbps NICs, and the
> recently-approved 100 Gbps 100GBASE stuff, and why bother with FC?  If
> you're truly going high-end here, then that's usually InfiniBand.

I know; we had that discussion some time ago in this group. We all
agreed InfiniBand would be a very welcome addition for VMS.

>
> FWIW, SuperMicro is selling a PCIe board that does both 40 Gbps
> InfiniBand and 10 GbE, and also selling 1U server boxes with dual 40 GbE
> NICs.
> http://www.supermicro.com/manuals/other/datasheet-AOC-UIBQ-M2.pdf
> http://www.supermicro.com/white_paper/white_paper_ultra_servers.pdf
>
> ps: there's also the question of whether going replicated host-to-host
> starts to make sense over server-to-shared-storage-controller, for
> some environments.  One HGST prototype recently showed 2.5 microsecond
> latency for server to remote server to remote NVMe SSD storage access
> across InfiniBand, after all.  For other "more traditional"
> configurations, use fast host-to-host for most stuff and use FC or
> 10 GbE, 40 GbE or 100 GbE (as it arrives) iSCSI as the I/O bus.
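
For some perspective on those latency numbers: assuming a 4 KiB block
(a typical NVMe I/O size), the raw wire time alone at those link rates
is already in the same ballpark, even before protocol overhead and
switch hops. Quick calculation in C:

    #include <stdio.h>

    int main(void)
    {
        /* Time to serialize one 4 KiB block onto the wire at various
           link rates; protocol overhead and switch latency ignored. */
        const double bits = 4096.0 * 8.0;
        const double gbps[] = { 10.0, 40.0, 100.0 };

        for (int i = 0; i < 3; i++)
            printf("%5.0f Gbps: %.2f microseconds per 4 KiB block\n",
                   gbps[i], bits / (gbps[i] * 1e9) * 1e6);
        return 0;
    }

At 10 Gbps the wire time alone is about 3.3 microseconds, already more
than that 2.5 microsecond end-to-end figure; the faster links, or
InfiniBand, are what make numbers like that possible at all.
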
>
>>  In fact it was used by IP before it was used by SCSI (for storage SANs).
>
> Not on VMS it wasn't.  SCSI isn't usable for host-to-host connections
> on OpenVMS, either.

I meant Fibre Channel as such, not the VMS implementation. Fibre
Channel was supposed to be the successor of FDDI.

>
> Now where VSI might go with any of this, we shall learn.   But it
> wouldn't surprise me to see little more than incremental new hardware
> support on OpenVMS I64 — comparatively little beyond getting Poulson and
> Kittson servers to boot — and dedicating most of the available
> engineering and hardware efforts to the x86-64 port and to the hardware
> and buses that are available in the target x86-64 environment.
>
>

More information about the Info-vax mailing list