[Info-vax] VMS QuickSpecs
Dirk Munk
munk at home.nl
Fri Aug 14 05:48:44 EDT 2015
Stephen Hoffman wrote:
> On 2015-08-13 20:09:30 +0000, Dirk Munk said:
>
>> FCIP is a protocol to tunnel FC over long distances. IP over FC is in
>> principle the same as IP over ethernet.
>
> Yes, and those are the same links where you'd want to have IPFC.
> Locally, there's rather less of a requirement for FCIP, as 10 GbE and 40
> GbE switches and networks are usually available.
No, IPFC is not at all for long-distance interconnects; that would
mean very long-distance fibrechannel links. FCIP, on the other hand,
is used to tunnel FC over long-distance IP connections.
If you already have a fibrechannel infrastructure, and your systems
already have fibrechannel connections, then using IPFC becomes a very
different matter.
IP over fibrechannel cannot be routed. You always use zones in
fibrechannel, and with IPFC a zone becomes a very effective, fast and
secure VPN.
Suppose you have two (or more) servers communicating very intensively
with each other. Put them in one zone (or two, since you will have two
fibrechannel networks; you can even use virtual HBAs) and you will have
an extremely fast, secure and efficient IP interconnect at zero cost.
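To make that concrete: once the two hosts share a zone and their IPFC
interfaces have addresses, ordinary socket code runs over the fabric
unchanged. Here is a minimal Python throughput sketch; the address,
port and sizes are made up for illustration, not taken from any real
setup.

import socket, sys, time

PEER = "10.1.1.2"   # the other host's IPFC address (hypothetical)
PORT = 5001
CHUNK = 1 << 20     # 1 MiB per write
TOTAL = 1 << 30     # move 1 GiB in total

def server():
    # Plain TCP listener; nothing here is FC-specific, because the
    # zone simply looks like a private LAN segment to the IP stack.
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            received = 0
            while True:
                buf = conn.recv(CHUNK)
                if not buf:
                    break
                received += len(buf)
    print("received %d bytes" % received)

def client():
    # Push TOTAL bytes at the peer and report the achieved rate.
    data = b"\0" * CHUNK
    start = time.monotonic()
    with socket.create_connection((PEER, PORT)) as s:
        sent = 0
        while sent < TOTAL:
            s.sendall(data)
            sent += CHUNK
    elapsed = time.monotonic() - start
    print("%.0f MB/s" % (TOTAL / elapsed / 1e6))

if __name__ == "__main__":
    server() if "server" in sys.argv else client()

Run it with "server" on one host and without arguments on the other.
Because the traffic never leaves the zone, you get the isolation of a
VPN without any tunnelling or encryption overhead.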
This is the reason that I would like to see network communication over
fibrechannel for VMS. You can use it for IP, clustering, perhaps even
DECnet.
>
>> I tried it on Solaris; it was extremely fast. With IPv6 you can have
>> a packet size of 16 MB and not much protocol overhead.
>
> Apollo had gonzo fast networking, and look where they ended up.
>
>> I meant fibrechannel as such, not the VMS implementation. Fibrechannel
>> was supposed to be the successor of FDDI.
>
> Many products are supposed to be the successor to many products. Those
> plans don't always work out.
>
> Irrespective of FCIP, FC itself does not appear to have a particularly
> robust future, particularly given the encroachment of cheaper and
> usually "good enough" storage hardware below, and given gonzo-grade
> InfiniBand storage above. (I'd expect scale-out over scale-up too, but
> you're a far firmer believer in the efficacy and applicability of
> ginormous servers than I am.)
Fibrechannel is very robust, and big companies like it. 128 Gb/s
fibrechannel is on the roadmap. What you see with ethernet is that
storage over ethernet uses its own interfaces; it will not share the
interfaces used for other connections. A typical x86 server has many
ethernet ports, each with a different function.
I absolutely like InfiniBand, but it is still a bit of a niche product.
In my view, fibrechannel will stay, and will be the dominant storage
interconnect in big datacenters until there is a very different way to
connect to (flash) storage.
>
> <http://www.mellanox.com/related-docs/whitepapers/WP_Scalable_Storage_InfiniBand_Final.pdf>
>
>
> Whether the VMS I/O stacks — whether we're discussing the storage or
> networking stacks, or the file system itself — can even go fast enough
> to reasonably deal with this stuff is an open question, too.
>
> But all this is fodder for VSI.
>
>