[Info-vax] IS everyone waiting?
Stephen Hoffman
seaohveh at hoffmanlabs.invalid
Thu Oct 20 11:30:08 EDT 2016
On 2016-10-20 14:16:07 +0000, Kerry Main said:
> When any OpenVMS customer uses common system disks in a cluster, the
> recommendation is to set up a common non-system disk between the OS
> instances to share common files. That's a decades-old best practice for
> the OpenVMS community.
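For anyone following along, that best practice boils down to mounting
the shared disk early in system startup and aiming a pile of logical
names at it. A minimal sketch, with the device, volume, and directory
names invented here for illustration; the authoritative list of files
and logical names is in the manual cited below:

$!  In SYLOGICALS.COM on each node, something along these lines:
$ MOUNT/SYSTEM/NOASSIST $1$DGA100: COMMON_FILES  ! shared non-system disk
$ DEFINE/SYSTEM/EXEC CLUSTER_COMMON $1$DGA100:[CLUSTER_COMMON]
$!  Point the clusterwide databases at the shared copies:
$ DEFINE/SYSTEM/EXEC SYSUAF      CLUSTER_COMMON:SYSUAF.DAT
$ DEFINE/SYSTEM/EXEC RIGHTSLIST  CLUSTER_COMMON:RIGHTSLIST.DAT
$ DEFINE/SYSTEM/EXEC NETPROXY    CLUSTER_COMMON:NETPROXY.DAT
$ DEFINE/SYSTEM/EXEC QMAN$MASTER CLUSTER_COMMON: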
I really like clustering. Host-based volume shadowing (HBVS) is one
of the more distinctive features still left here, too. But
clustering and shadowing and the related pieces are a complete dumpster
fire to set up and a pain to upgrade, and dependence on 2000s-era and
earlier technical management isn't the path forward to improved
satisfaction and sales. Particularly not if anyone is looking at the
competitive products, and at where those products will be in
~2021 or ~2026.
> Reference sect 11.3:
> http://h41379.www4.hpe.com/doc/82final/6318/6318pro_020.html
>
> While automating this might seem like a benefit, many customers would
> also argue that, because of all the various custom configurations, this
> is better left to the local SysAdmin.
Utter disaster, that. Absurdly complex, failure-prone, and then
there's the wonderful hassle of dealing with patches and upgrades,
given the need to copy files back to the disks. Oh, and the complexity
of the configuration involved is categorically moronic.
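Concretely: after each patch or upgrade, you get to verify by hand
that the redirections still point where you think, and that a freshly
delivered copy on the system disk isn't the one actually in use.
Reusing the invented names from the sketch above:

$ SHOW LOGICAL SYSUAF                       ! still aimed at the common disk?
$ DIRECTORY/DATE SYS$SYSTEM:SYSUAF.DAT      ! a new copy delivered here...
$ DIRECTORY/DATE CLUSTER_COMMON:SYSUAF.DAT  ! ...versus the copy in use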
> Setting up an OpenVMS cluster has lots of benefits, but like every OS
> platform, OpenVMS clusters do require more planning and work to set up
> and configure than an individual system.
Sure, if we're accustomed to the idiocy of user-hostile and half-assed
user interfaces. Then there's the complete lack of a whole pile of
features such as distributed scheduling, distributed logging,
distributed coordination, distributed management, system and
performance tuning and data collection, and LDAP and directory
integration. And then there's the case where you might want to manage
several clusters as one (configurations for lower latency or geographic
distribution, for instance), where even an experienced system manager
can have "fun".
If you're consolidating several OpenVMS servers into a cluster (app
stacking, sans sandboxes or such), you're also handed the "fun" of
rationalizing and coordinating all the UICs and usernames and
identifiers; migration into a cluster has always been a project, and
one that tends to get ignored. (The utter clown-shoes idea of randomly
picking different UICs for TCP/IP Services, and different selections
of usernames that may or may not be present, is an ongoing source of
entertainment for these projects, too.)
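That rationalization pass is entirely manual, too: walk AUTHORIZE on
each system, find the UIC and username collisions, pick a winner, and
then chase down the on-disk file ownership. A sketch, with the
username, UIC, and disk invented for illustration:

$ MC AUTHORIZE
UAF> SHOW WEBAPP                   ! compare this account's UIC per system
UAF> MODIFY WEBAPP/UIC=[300,12]    ! settle on one UIC clusterwide
UAF> EXIT
$!  Files on disk still carry the old owner UIC; re-own them afterward:
$ SET FILE/OWNER_UIC=[300,12] DKA100:[WEBAPP...]*.*;*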
Then there's the lack of job management and volume management, and the
"fun" of getting applications stopped and the volumes dismounted
cleanly, which is particularly "interesting" for folks with HBVS
seeking to avoid unnecessary copies and merges. Or of using the
distributed lock manager for what it can really do, or of getting
connection security working via TLS and authentication, or a whole
host of other developer-level issues and discussions.
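For the HBVS case, the shutdown-side dance looks something like the
following; the virtual unit name and the application shutdown
procedure are invented placeholders for whatever's local:

$ @SYS$MANAGER:APP_SHUTDOWN.COM  ! stop the application first (site-specific)
$ SHOW SHADOW DSA42:             ! any copy or merge still in progress?
$ DISMOUNT/CLUSTER DSA42:        ! clean clusterwide dismount of the virtual
$!                                 unit, so the next mount needn't merge

Get that ordering wrong, or lose a node while the set is still
mounted, and you're back into copy and merge territory.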
TL;DR: hand a very OpenVMS-experienced user a pile of tools and logical
names and randomly-named files, top it all off with a dollop of add-on
HPE/VSI tools and locally-written or third-party-sourced software and
tools for coordinating processes and applications and code, add a
disparate and scatter-shot stack of product and patch and related
documentation to read, and clustering and HBVS and the distributed
lock manager and such all work. They work well. Getting to a working
configuration is absurdly complex, and failure-prone.
--
Pure Personal Opinion | HoffmanLabs LLC