[Info-vax] VMS QuickSpecs

Dirk Munk munk at home.nl
Thu Aug 20 11:21:53 EDT 2015


Stephen Hoffman wrote:
> On 2015-08-20 12:54:57 +0000, Dirk Munk said:
>
>> Stephen Hoffman wrote:
>>> On 2015-08-14 09:48:44 +0000, Dirk Munk said:
>>>
>>>> No, IPFC is not at all for long distance interconnects. That would
>>>> mean you would have very long distance fibrechannel interconnects.
>>>> FCIP on the other hand is used to tunnel FC over long distance IP
>>>> connections.
>>>
>>> If you want to use a more expensive and more limited and
>>> harder-to-manage infrastructure in preference to a much faster and
>>> cheaper and more flexible one, I'm sure there are vendors that will
>>> happily accept your money.
>>
>> Harder-to-manage? Have you ever managed an FC network? I have. It is
>> very easy: create a zone, add both end points of the zone (server HBA
>> and storage port), and bingo, you're ready. The underlying FC network
>> does the rest; it is completely self-configuring. Setting up the
>> network itself is also very easy.
>
> Yes, I've managed Fibre Channel.   (Wasn't that obvious?)

Not if you say it's difficult.

>
> Can you manage IP over FC using IP tools?  No.

What do you want to manage, please?

>
> Can you use IP switches and routers?  No.

True, and as I explained, there is no need for that. I don't see it as a 
universal replacement for ethernet.

>
> Can you use IP knowledge?  No.

What kind of knowledge?

>
> Can you use existing Ethernet infrastructure and existing management
> tools?  No.

Again, what do you want to manage if in many cases you have nothing but 
a virtual point-to-point connection? Both HBAs see nothing but each 
other.

>
> Can you use IP over FC everywhere, and avoid replicating
> infrastructure?   No.

Why not, if you have an FC network present? FC doesn't care if it 
transports SCSI or IP, just like ethernet doesn't care if it transports 
IP, DECnet, IPX, LAT, or whatever.

>
> Which means...  It's harder, and it's more expensive.

No, it's not. For the test that I did, I just configured two HBAs with 
IP, set up a zone, and that was it. Didn't cost me a penny. Didn't take 
me more than an hour to find the documentation and do the settings.

>
> Oh, and then there's that the FC user interfaces are, um, lacking.
>

Depends on the OS and the supplied drivers I suppose.

>
>>>> If you already have a fibrechannel infrastructure available, and if
>>>> your systems already have fibrechannel connections, then using IPFC
>>>> becomes a very different matter.
>>>
>>> How many of those folks lack speedy Ethernet networks?
>>
>> So what? the one doesn't exclude the other.
>
> If cost and cabling and training is no factor, sure.

Like I said, I assume the systems are already connected to FC for their 
storage. The FC people don't have to learn anything (I just requested a 
zone), and the server people have to learn how to set up IP on an HBA. 
Not rocket science, believe me.

>
>>>> IP over fibrechannel cannot be routed. You always use zones in
>>>> fibrechannel. With IPFC a zone becomes a very effective, fast and
>>>> secure VPN.
>>>
>>> What'd be called a VLAN on the Ethernet infrastructure.
>>
>> Maybe, but FC has VLANs too.
>
> Now we have more complexity and more cost and more training, and for no
> particularly obvious network benefits beyond the Ethernet that I already
> have.

My experience was that it was extremely fast, even faster than writing 
to a memory disk. I did a copy operation from one memory disk to the 
other, and a copy operation to the null device. The latter was much 
faster than the former, so the memory disk write operations could not 
keep up with the speed of the copy operation. FC HBAs have lots of 
memory and very powerful CPUs.
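
The shape of that test, as a generic sketch (Python rather than the VMS 
commands actually used; the paths are hypothetical stand-ins for the 
memory disks and the null device): copy a large file to a second target 
and to a null sink, and see which one limits throughput.

    # Rough, generic sketch of the comparison above (illustrative only):
    # copy a large file to a second memory-backed target and to the null
    # device, then compare elapsed times. If the null-device copy is much
    # faster, the target's write path is the bottleneck, not the link.
    import os
    import shutil
    import time

    SRC = "/ramdisk1/testfile.dat"   # hypothetical file on a memory disk
    DST = "/ramdisk2/testfile.dat"   # hypothetical second memory disk
    NULL = os.devnull                # null device: discards all writes

    def timed_copy(src, dst):
        start = time.perf_counter()
        with open(src, "rb") as fin, open(dst, "wb") as fout:
            shutil.copyfileobj(fin, fout, length=1024 * 1024)
        return time.perf_counter() - start

    print("to memory disk :", timed_copy(SRC, DST), "seconds")
    print("to null device :", timed_copy(SRC, NULL), "seconds")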

>
>>> When might FC have architected autoconfiguration and autodiscovery
>>> support, and protocols that can be routed?
>>
>> I don't know, but I don't regard IPFC as a general ethernet
>> replacement. A typical use could be a connection between an
>> application server and a database server. Because of the huge default
>> MTU (approx. 64 kB) and a maximum MTU of even 16 MB with IPv6 (if I
>> remember correctly), the result of a database request often fits in
>> one IP packet. I don't have to explain to you that handling a packet
>> takes a lot of computing power; the size of the packet doesn't really
>> matter. You don't even need TCP, FC in itself already takes care of
>> guaranteed delivery.
>
> So now I have to deal with some other protocol that's not TCP?  That
> lacks autodiscovery and autoconfiguration?

UDP perhaps?
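
To put the MTU point quoted above in rough numbers, a back-of-the-envelope 
sketch (Python; the 48 kB result size and the 40-byte header figure are 
illustrative assumptions, the ~64 kB MTU is the figure mentioned above):

    # How many IP packets one database result needs at different MTUs.
    # Illustrative arithmetic only; headers are the usual IPv4 + TCP
    # 20 + 20 bytes with no options, and the result size is an assumption.
    RESULT_BYTES = 48 * 1024          # assumed size of one database result
    HEADERS = 20 + 20                 # IPv4 header + TCP header

    for name, mtu in [("Ethernet 1500", 1500),
                      ("jumbo 9000", 9000),
                      ("IPFC ~64 kB", 64 * 1024)]:
        payload = mtu - HEADERS
        packets = -(-RESULT_BYTES // payload)   # ceiling division
        print(f"{name:>14}: {packets} packet(s)")

With those assumptions the same result takes 34 packets on standard 
Ethernet, 6 with jumbo frames, and a single packet over IPFC.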

>
> There'll still need to be a cluster communications port emulator driver
> either reworked or wholly written, as you're going to want clustering
> here, too.

The guaranteed delivery is a very nice feature; you don't have to worry 
about that. After all, SCSI has no error correction either. So it could 
be a rather simple driver, I assume, and very low latency. Nice for 
cluster communication, wouldn't you say?
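
As a user-space illustration of that idea only, a minimal sketch (Python; 
the addresses are hypothetical, and it simply assumes the transport 
underneath delivers every datagram, which is the FC property being relied 
on here):

    # Minimal datagram exchange with no retransmission or ordering logic,
    # on the assumption that the link below (here: an IPFC point-to-point
    # zone) already guarantees delivery. Addresses are hypothetical.
    import socket

    PEER = ("10.0.0.2", 5000)        # hypothetical address of the other HBA

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("10.0.0.1", 5000))    # hypothetical local IPFC address

    sock.sendto(b"example query", PEER)   # fire and forget: no ACK, no retry
    reply, _ = sock.recvfrom(65536)       # one large datagram fits the big MTU
    print(len(reply), "bytes received")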

>
>> CI is indeed what I was thinking about too with IPFC. However, in that
>> era client-server applications were not very common; these days they
>> are. A dedicated, very fast, low-latency FC link in such a setup is
>> not such a strange idea.
>
> We'll clearly disagree then, as I see the addition of IP networking over
> Fibre Channel as a waste of time, effort and money for VSI, and not
> something that will be a big draw for the existing OpenVMS installed
> base nor for potential new deployments of OpenVMS.   If end-user costs
> and training and infrastructure duplication are no object and if
> development costs were low and wouldn't offset other support I'd rather
> see available and if Ethernet weren't already faster and far more
> ubiquitous — that's three very big ifs — then I might be more interested
> in this.   But given budgets and staff and time are not infinite
> resources, then support for more 10GbE and 40GbE Ethernet NICs and the
> 100GbE as they arrive — I'm already effectively required to have IP via
> Ethernet configuration in almost any environment, after all — and
> existing FC storage support updated for faster HBAs, maybe also
> InfiniBand support for high-end storage, and (much) faster networking
> stacks, now I see that as aiming OpenVMS forward, and rather
> more interesting.   IP over FC, not so much.
>



