[Info-vax] HP SAN switch question

Kerry Main kerry.main at backtothefutureit.com
Thu Feb 18 11:31:15 EST 2016


> -----Original Message-----
> From: Info-vax [mailto:info-vax-bounces at info-vax.com] On Behalf Of
> Stephen Hoffman via Info-vax
> Sent: 18-Feb-16 10:55 AM
> To: info-vax at info-vax.com
> Cc: Stephen Hoffman <seaohveh at hoffmanlabs.invalid>
> Subject: Re: [New Info-vax] HP SAN switch question
> 
> On 2016-02-18 08:55:10 +0000, Hans Vlems said:
> 
> > Ha, may I infer from that last sentence that you are not a fan of FC
> > technology Hoff?
> 
> FC works, for what it does.   It can work well.   It's that most
> OpenVMS FC SAN gear is old and slow and hot and variously crufty.
> 
> Then there is the spectacularly problematic nature of the typical FC
> SAN management interface.   But I'm being polite.
> 
> As for speed, the OpenVMS FC HBA support and the rotating rust
> storage has all badly fallen off the performance curve.
> 

Let's not forget that the vast majority of systems in production today, on
all platforms, do not need 16 Gb adapters.  Most systems on all platforms
tend to run below 30% utilization at peak times.
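
As a back-of-envelope illustration (the nominal per-direction throughputs
below are the commonly quoted figures; the 200 MB/s peak host I/O rate is
purely an illustrative assumption, not a measurement):

# Rough link-utilization sketch.
# Nominal per-direction throughputs are the commonly quoted figures;
# the peak host I/O rate is an illustrative assumption only.

NOMINAL_MB_PER_S = {
    "8 Gb FC":  800,    # 8GFC, 8b/10b encoding
    "16 Gb FC": 1600,   # 16GFC, 64b/66b encoding
    "10 GbE":   1250,   # line rate / 8, before protocol overhead
    "40 GbE":   5000,
}

PEAK_HOST_IO_MB_PER_S = 200  # hypothetical mid-range host at its busiest

for link, capacity in NOMINAL_MB_PER_S.items():
    utilization = 100.0 * PEAK_HOST_IO_MB_PER_S / capacity
    print(f"{link:>8}: {capacity:5d} MB/s nominal -> "
          f"{utilization:5.1f}% busy at {PEAK_HOST_IO_MB_PER_S} MB/s peak")

Even a host that is genuinely busy at the storage layer barely dents an
8 Gb link in that scenario, which is the point about typical utilization.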

Yes, of course there are some systems that do need the higher throughput,
but these tend to be the bigger, high-end enterprise systems.

Having stated this, per the latest VSI roadmap, 16 Gb FC adapters for
OpenVMS are planned for 2016/2017.

Also, the 3PAR 8200 flash arrays are on the roadmap for support in
OpenVMS V8.4-2 (field test ended last week).

> For FC, 16 Gb HBAs are presently the upper limit.   HPE only supports 8
> Gb FC HBAs with OpenVMS too, when last I checked.  No published support
> for the SN1000Q/B9F24A/B9F24A, etc.   (Not that anybody can find
> anything at the HPE web site these days, either.)   That's in
> comparison to 40 GbE NICs.
> 

Watch for HP-UX and NonStop to similarly get less and less attention
from HPE. HPE is going full tilt forward with ProLiant X86-64 and seems
to be returning to its core focus as a hardware vendor with various
management components added on.

> Yeah, FC has a lower bit error rate which does make up for (some) of
> the speed differential.   But 40 GbE NICs are readily available for
> those that need the bandwidth.
> 
> Actually keeping a fast NIC busy won't be easy for an operating system,
> either — https://lwn.net/Articles/629155/  — nor is OpenVMS
> particularly known for its I/O stack performance.
> 
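
To put a rough number on the "keeping a fast NIC busy" point, here is a
quick packets-per-second sketch for a 40 GbE link (standard Ethernet
framing overheads; the payload sizes are just illustrative):

# Rough packets-per-second ceiling for a 40 GbE link.
# Wire overhead per frame: 8-byte preamble + 12-byte inter-frame gap,
# plus 18 bytes of Ethernet header/FCS around the payload.

LINK_BITS_PER_S = 40e9
PREAMBLE_IFG = 8 + 12
HEADER_FCS = 14 + 4

for payload in (64 - HEADER_FCS, 512, 1500):   # min-size, mid, full MTU
    wire_bytes = max(payload + HEADER_FCS, 64) + PREAMBLE_IFG
    pps = LINK_BITS_PER_S / (wire_bytes * 8)
    print(f"{payload:5d}-byte payload -> {pps / 1e6:6.2f} Mpps")

A per-packet budget measured in tens of nanoseconds at the small-frame end
is the sort of challenge that lwn.net article is talking about.
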
> As for recent 3PAR FC storage — which you won't find in many OpenVMS
> configurations, though at least some of the 3PAR gear is supported with
> OpenVMS — there's the low-end 3PAR 8000 series with 24 x 16 Gb FC
> ports.   Which are actually very fast arrays.  But you need to use lots
> of FC ports to get there.   And OpenVMS can only sip on that tsunami of
> data through its 8 Gb FC HBA soda straw, AFAIK.
> 
> So... What FC gear you usually see with OpenVMS tends to be old and
> slow and...
> 
Again, see the note above: even though very few customer environments
need 16 Gb FC adapter throughput, these adapters are on the VSI roadmap.
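
For scale, using the port count from the quoted 3PAR 8000 description and
the commonly quoted nominal per-direction rates (~800 MB/s per 8 Gb HBA,
~1600 MB/s per 16 Gb port):

# Rough host-vs-array bandwidth mismatch, using nominal per-direction rates.

ARRAY_PORTS = 24            # 3PAR 8000-class front end, per the quoted post
ARRAY_PORT_MB_S = 1600      # ~16 Gb FC per port
HOST_HBA_MB_S = 800         # ~8 Gb FC per HBA (current OpenVMS support)

array_aggregate = ARRAY_PORTS * ARRAY_PORT_MB_S
hbas_to_match = array_aggregate / HOST_HBA_MB_S

print(f"Array front end: {array_aggregate / 1000:.1f} GB/s aggregate")
print(f"8 Gb HBAs needed to match it: {hbas_to_match:.0f}")

No single host needs to match that aggregate, of course; the numbers just
illustrate the per-HBA gap being described above.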

Lots of ancient history here, but under HP, none of the enterprise OSs got
the same attention from the storage groups that Windows or Linux did. Heck,
when HP bought 3PAR, the arrays did not even support HP-UX. There was a mad
dash at the time to get HP-UX certified by the BCE (aka HP-UX Engineering)
group. OpenVMS certification took longer because of internal priorities.

Imho, once OpenVMS is available on ProLiants, HPE will become more
interested in OpenVMS, as HPE will not care what OS is running as long
as the server has ProLiant on the front.

[snip]

Regards,

Kerry Main
Kerry dot main at starkgaming dot com




