[Info-vax] VMS QuickSpecs
Stephen Hoffman
seaohveh at hoffmanlabs.invalid
Sat Aug 15 08:34:00 EDT 2015
On 2015-08-14 09:48:44 +0000, Dirk Munk said:
> No, IPFC is not at all for long distance interconnects. That would mean
> you would have very long distance fibrechannel interconnects. FCIP, on
> the other hand, is used to tunnel FC over long distance IP connections.
If you want to use a more expensive and more limited and
harder-to-manage infrastructure in preference to a much faster and
cheaper and more flexible one, I'm sure there are vendors that will
happily accept your money.
> If you already have a fibrechannel infrastructure available, and if
> your systems already have fibrechannel connections, then using IPFC
> becomes a very different matter.
How many of those folks lack speedy Ethernet networks?
> IP over fibrechannel cannot be routed. You always use zones in
> fibrechannel. With IPFC a zone becomes a very effective, fast and
> secure VPN.
What'd be called a VLAN on the Ethernet infrastructure.
When might FC have architected autoconfiguration and autodiscovery
support, and protocols that can be routed?
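For comparison, the equivalent isolation on the Ethernet side is a
tagged VLAN. Here's a minimal sketch of setting one up, assuming a
Linux host with the pyroute2 package and root access; "eth0" and VLAN
ID 100 are placeholder values, not anything from this thread:

    # Minimal sketch: carve out a tagged VLAN for isolated
    # server-to-server IP traffic, roughly what an FC zone buys IPFC.
    # Assumes Linux, root privileges, and the pyroute2 package.
    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        # Look up the parent NIC to tag ("eth0" is a placeholder).
        parent = ipr.link_lookup(ifname="eth0")[0]
        # Create VLAN 100 on top of it; its traffic stays segregated
        # from other VLANs sharing the same switch fabric.
        ipr.link("add", ifname="eth0.100", kind="vlan",
                 link=parent, vlan_id=100)
        # Bring the new interface up.
        vlan = ipr.link_lookup(ifname="eth0.100")[0]
        ipr.link("set", index=vlan, state="up")

Same isolation story, but with commodity switches and the usual
Ethernet management tooling behind it.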
> Suppose you have two (or more) servers communicating very intensively
> with each other. Put them in one zone (or two, since you will have two
> fibrechannel networks) (you can even use virtual HBAs) and you will
> have an extremely fast, secure and efficient IP interconnect at zero
> cost.
>
> This is the reason that I would like to see network communication over
> fibrechannel for VMS. You can use it for IP, clustering, perhaps even
> DECnet.
DECnet is dead. FC is not dead, but it's not looking particularly
robust — and IPFC certainly isn't very common. Hopefully unencrypted
and unauthenticated clustering will eventually be dead.
> Fibrechannel is very robust; big companies like it. 128Gb/sec
> fibrechannel is on the roadmap.
VMS HBA support tops out at 8Gb FC, last I checked. Hopefully
that gets corrected.
> What you see with Ethernet is that storage over Ethernet uses its own
> interfaces; it will not use the same interfaces as used for other
> connections. A typical x86 server has many Ethernet ports, all with a
> different function.
Re-read that. Let that sink in for a moment. Ethernet uses the same
infrastructure, even if it's dedicated and can be segmented. Same
management tools, services and the rest. Cheaper and ubiquitous hardware.
Or VSI can spend time on something that few will use — IPFC — on a
platform that's not really looking all that great long-term — FC — and
all to potentially send completely untrusted networking activity into
servers that can sometimes barely manage to deal appropriately with the
intended and trusted block storage traffic. Who knows where that'll
end up.
> I absolutely like InfiniBand, but it still is a bit of a niche product.
Fibre Channel networking isn't niche? Sure, FC is a bigger niche than
InfiniBand by all appearances, but it's still a niche. But within FC,
IPFC networking isn't a very big niche at all.
For the hardware vendors, the low-end gear gets the volume — iSCSI or
not, services including CIFS and NFS — and the high end — InfiniBand —
gets higher revenues per unit.
Yes, FC storage will be around for a while, due in no small part to the
installed base. But I'd rather see VSI spend time supporting what the
marketeers are calling converged networking and on related newer bits —
offloading TCP and iSCSI — and faster HBA support for those stuck on
FC, and working on SSD and inboard flash storage and not stuff that was
invented for rotating rust.
I expect folks that have FC almost certainly already also have and need
speedy Ethernet switches for other uses, so moving a few OpenVMS hosts
off of the more speedy Ethernet NICs — NICs that aren't yet supported
by OpenVMS — over to speedy FC HBAs — HBAs that are not yet supported
by OpenVMS hosts — using IPFC support — which is neither widely used
nor widely deployed anywhere — doesn't seem the most auspicious use of
the time and effort involved.
> In my view, fibrechannel will stay, and will be the dominant storage
> interconnect in big datacenters until there is a very different way to
> connect to (flash) storage.
That connection will likely be either whatever mechanism the stuff
called Ethernet is running atop this year, or InfiniBand. Sure, there
are folks that will have legacy Fibre Channel configurations around,
but wasting time adding nice-to-have but not-necessary features for
corner case configurations — folks that completely lack 10 GbE or 40
GbE or faster networks, but have really fast FC — in a
usually-partitioned-for-security-reasons infrastructure is not the
direction I'd head.
IPFC has parallels to networking via CI. That was a curiosity, but
not something particularly or widely useful as Ethernet rolled out
everywhere.
--
Pure Personal Opinion | HoffmanLabs LLC