[Info-vax] VMS QuickSpecs

Dirk Munk munk at home.nl
Thu Aug 20 08:54:57 EDT 2015


Stephen Hoffman wrote:
> On 2015-08-14 09:48:44 +0000, Dirk Munk said:
>
>> No, IPFC is not at all for long distance interconnects. That would
>> mean you would have very long distance fibrechannel interconnects.
>> FCIP on the other hand is used to tunnel FC over long distance IP
>> connections.
>
> If you want to use a more expensive and more limited and
> harder-to-manage infrastructure in preference to a much faster and
> cheaper and more flexible one, I'm sure there are vendors that will
> happily accept your money.

Harder-to-manage? Have you ever managed an FC network? I have, and it is 
very easy. Create a zone, add both endpoints of the zone (server HBA and 
storage port), and bingo, you're ready. The underlying FC network does 
the rest; it is completely self-configuring. Setting up the network 
itself is also very easy.
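
For anyone who has never done it, a rough sketch of what that looks like 
on a Brocade-style switch, assuming the fabric already has an active zone 
configuration called fabric_a_cfg (the zone name and WWPNs below are 
made-up placeholders):

  zonecreate "dbsrv_array1", "10:00:00:00:c9:aa:bb:cc; 50:06:01:60:11:22:33:44"
  cfgadd "fabric_a_cfg", "dbsrv_array1"
  cfgsave
  cfgenable "fabric_a_cfg"

A handful of commands per fabric, and the fabric itself takes care of 
login, discovery and routing.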

>
>> If you already have a fibrechannel infrastructure available, and if
>> your systems already have fibrechannel connections, then using IPFC
>> becomes a very different matter.
>
> How many of those folks lack speedy Ethernet networks?

So what? The one doesn't exclude the other.

>
>> IP over fibrechannel can not be routed. You always use zones in
>> fibrechannel. With IPFC a zone becomes a very effective, fast and
>> secure VPN.
>
> What'd be called a VLAN on the Ethernet infrastructure.

Maybe, but FC has its own equivalent of VLANs as well: virtual fabrics (VSANs).

>
> When might FC have architected autoconfiguration and autodiscovery
> support, and protocols that can be routed?

I don't know, but I don't regard IPFC as a general ethernet 
replacement. A typical use could be a connection between an application 
server and a database server. Because of the huge default MTU (approx. 
64 kB) and a maximum MTU of even 16 MB with IPv6 (if I remember 
correctly), the result of a database request often fits in one IP 
packet. I don't have to explain to you that handling a packet costs a 
certain amount of computing power regardless of its size; it is the 
per-packet overhead, not the payload, that counts. You don't even need 
TCP; FC in itself already takes care of guaranteed delivery.
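
A quick back-of-envelope sketch of that argument (a minimal Python 
sketch; the 256 kB reply size, the 1500-byte Ethernet MTU, the ~64 kB 
IPFC MTU and the minimal IPv4+TCP headers are all assumptions, just to 
keep the comparison simple):

# Rough packet-count comparison for one database reply, assuming a
# 256 kB result set, a standard 1500-byte Ethernet MTU and a ~64 kB
# IPFC MTU. Header overhead is the minimal IPv4 (20 B) + TCP (20 B).
import math

RESULT_BYTES = 256 * 1024          # assumed size of one database reply
IP_TCP_HEADERS = 20 + 20           # minimal IPv4 + TCP headers per packet

def packets_needed(mtu: int) -> int:
    """Number of packets needed to carry RESULT_BYTES at the given MTU."""
    payload = mtu - IP_TCP_HEADERS
    return math.ceil(RESULT_BYTES / payload)

for name, mtu in (("Ethernet, 1500 B MTU", 1500),
                  ("IPFC, ~64 kB MTU", 64 * 1024)):
    print(f"{name}: {packets_needed(mtu)} packets")

# Typical output:
#   Ethernet, 1500 B MTU: 180 packets
#   IPFC, ~64 kB MTU: 5 packets
# Since much of the cost is per packet, fewer packets means less CPU
# spent on interrupts and protocol processing for the same data.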

>
>> Suppose you have two (or more) servers communicating very intensively
>> with each other. Put them in one zone (or two, since you will have two
>> fibrechannel networks) (you can even use virtual HBAs) and you will
>> have an extremely fast, secure and efficient IP interconnect at zero
>> costs.
>>
>> This is the reason that I would like to see network communication over
>> fibrechannel for VMS. You can use it for IP, clustering, perhaps even
>> DECnet.
>
> DECnet is dead.  FC is not dead, but it's not looking particularly
> robust — and IPFC certainly isn't very common.  Hopefully unencrypted
> and unauthenticated clustering will eventually be dead.
>
>> Fibrechannel is very robust, big companies like it. 128Gb/sec
>> fibrechannel is on the roadmap.
>
> VMS HBA support tops out at eight GbFC, last I checked.   Hopefully that
> gets corrected.

That can't be difficult; the 8 Gb/s adapters and the 16 Gb/s adapters are 
both QLogic parts, so most likely there's hardly any problem adapting the 
driver for the 16 Gb/s models.

>
>> What you see with ethernet is that storage over ethernet uses its own
>> interfaces, it will not use the same interfaces as used for other
>> connections. A typical x86 server has many ethernet ports, all with a
>> different function.
>
> Re-read that.  Let that sink in for a moment.  Ethernet uses the same
> infrastructure, even if it's dedicated and can be segmented.  Same
> management tools, services and rest.  Cheaper and ubiquitous hardware.
> Or VSI can spend time on something that few will use — IPFC — on a
> platform that's not really looking all that great long-term — FC — and
> all to potentially send completely untrusted networking activity into
> servers that can sometimes barely manage to deal appropriately with the
> intended and trusted block storage traffic.  Who knows where that'll end
> up.

For the type of servers and applications you would use FC for, the extra 
costs of FC don't really count. It is far easier to manage than IP 
switches. And FC is low latency; ethernet isn't, and certainly not with 
SCSI over a TCP/IP stack. It's rather silly to ask for flash in your 
storage network, and then move to a network type that increases latency!

I once designed an iSCSI ethernet SAN, and I used the same principles as 
with FC: two separate networks for redundancy and load balancing, with 
completely stand-alone switches. The purpose was to service important 
but low-performance applications.

>
>> I absolutely like infiniband, but is still is a bit of a niche product.
>
> Fibre Channel networking isn't niche?   Sure, FC is a bigger niche than
> InfiniBand by all appearances, but it's still a niche.

These days VMS and Itanium are much more of a niche than FC. I suppose 
you will advise VSI to stop working on VMS?

>  But within FC,
> IPFC networking isn't a very big niche, at all.

Last time I looked, Windows supported IP over FC. And it doesn't really 
matter: the way I would use IPFC, you don't need any support on the FC 
network side. In a *bridged* ethernet network you can use any type of 
ethernet protocol you like; who cares?

>
> For the hardware vendors, the low-end gear gets the volume — iSCSI or
> not, services including CIFS and NFS — and the high end — InfiniBand —
> gets higher revenues per unit.
>
> Yes, FC storage will be around for a while, due in no small part to the
> installed base.  But I'd rather see VSI spend time supporting what the
> marketeers are calling converged networking and on related newer bits —
> offloading TCP and iSCSI — and faster HBA support for those stuck on FC,
> and working on SSD and inboard flash storage and not stuff that was
> invented for rotating rust.

I would say that VMS already had 'converged storage' long before this 
buzzword became popular. MSCP etc.?

And yes, TCP and iSCSI offload would be very welcome too! We already 
discussed that some time ago as you will remember.

Flash should also be supported; we discussed TRIM some time ago. How 
about DECRAM on DIMM units with a flash module? Shadowing a DECRAM disk 
to a real flash/HDD disk?

>
> I expect folks that have FC almost certainly already also have and need
> speedy Ethernet switches for other uses, so moving a few OpenVMS hosts
> off of the more speedy Ethernet NICs — NICs that aren't yet supported by
> OpenVMS — over to speedy FC HBAs — HBAs that are not yet supported by
> OpenVMS hosts — using IPFC support — which is neither widely used nor
> widely deployed anywhere — doesn't seem the most auspicious use of the
> time and effort involved.
>
>> In my view, fibrechannel will stay, and will be the dominant storage
>> interconnect in big datacenters until there is a very different way to
>> connect to (flash) storage.
>
> That connection will likely be either whatever mechanisms that which is
> called Ethernet is running atop this year, or InfiniBand.  Sure, there
> are folks that will have legacy Fibre Channel configurations around, but
> wasting time adding nice-to-have but not-necessary features for corner
> case configurations — folks that completely lack 10 GbE or 40 GbE or
> faster networks, but have really fast FC — in a
> usually-partitioned-for-security-reasons infrastructure is not the
> direction I'd head.
>
> IPFC has parallels to networking via CI.   That was a curiosity, but not
> something particularly nor widely useful as Ethernet rolled out everywhere.
>

CI is indeed what I was thinking of too with IPFC. However, in that era 
client-server applications were not very common; these days they are. 
A dedicated, very fast, low-latency FC link in such a setup is not such 
a strange idea.



