[Info-vax] HP Integrity rx2800 i4 (2.53GHz/32.0MB) :: PAKs won't load
David Froble
davef at tsoft-inc.com
Thu Feb 18 15:54:50 EST 2016
Phillip Helbig (undress to reply) wrote:
> In article <na4s9b$2cc$1 at dont-email.me>, Stephen Hoffman
> <seaohveh at hoffmanlabs.invalid> writes:
>
>> Ponder this, VSI folks... Brian is experienced with OpenVMS. Very
>> experienced. When the most experienced users are getting bagged by
>> these cases, maybe there are problems, and maybe there is room for
>> improvement?
>>
>> Forming a multiple system disk cluster needs to shuffle ~25 files
>> (OpenVMS and IP, plus any ancillary cluster-aware data) and to ensure
>> that the contents and records and security are all synchronized across
>> all member hosts, and to be compliant with the supported
>> configurations I now have to shuffle those ~25 files back to the
>> system disks for any patches and upgrades, and then review and relocate
>> and resynchronize the results.
>>
>> This mess is not a user interface. This is not a design. This is a
>> 25-file pile of short-term and expedient and incremental hackery,
>> handing the resulting mess to the system managers and the support teams
>> to deal with.
>
> I have to agree with Hoff here. To me, the big advantage of a cluster
> is to have the possibility to use common files such as SYSUAF and so on.
> The logical names for this are documented. Accounting and the audit
> server are similar but somewhat different. Then there is TCPIP stuff.
> I thought I had that down pat, then I discovered that SSH has its own
> default disk and directory, which are by default on the system disk.
>
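For concreteness, this is roughly what "the logical names are documented"
amounts to in practice. A minimal sketch only: the device and directory
names (DISK$COMMON, [CLUSTER_COMMON]) are made up, the logical names are
the documented ones, and the definitions would normally live in
SYS$MANAGER:SYLOGICALS.COM on each system disk:

$ ! Sketch; made-up device/directory, real logical names.
$ DEFINE/SYSTEM/EXEC SYSUAF          DISK$COMMON:[CLUSTER_COMMON]SYSUAF.DAT
$ DEFINE/SYSTEM/EXEC RIGHTSLIST      DISK$COMMON:[CLUSTER_COMMON]RIGHTSLIST.DAT
$ DEFINE/SYSTEM/EXEC NETPROXY        DISK$COMMON:[CLUSTER_COMMON]NETPROXY.DAT
$ DEFINE/SYSTEM/EXEC NET$PROXY       DISK$COMMON:[CLUSTER_COMMON]NET$PROXY.DAT
$ DEFINE/SYSTEM/EXEC VMSMAIL_PROFILE DISK$COMMON:[CLUSTER_COMMON]VMSMAIL_PROFILE.DATA
$ DEFINE/SYSTEM/EXEC QMAN$MASTER     DISK$COMMON:[CLUSTER_COMMON]
$ ! ...and so on for the rest of the ~25 files (license database, audit
$ ! server and security object databases, password history/dictionary,
$ ! the TCPIP databases, etc.), each behind its own logical name.
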
> The basic problem here is that the original design was for a boot server
> and some satellites. At least that is how it appears to me. So,
> SYS$COMMON is stuff common to all the nodes. So far, so good. But add
> a second system disk, and SYS$COMMON then holds both the stuff common to
> all nodes booting from that disk and the VMS files that are identical on
> every system disk in the world at that version of VMS. So it is used for
> two different purposes. One wants the ability to have separate VMS
> installations, for redundancy, for rolling upgrades, and so on. But
> usually one wants SYSUAF and so on the same on all nodes. So, move it
> off the system disk. Supported, yes; documented, yes. But it has to be
> done by hand. Then the other stuff mentioned above.
>
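The "by hand" part is mundane but easy to get wrong. Roughly, and with the
same made-up names as in the sketch above:

$ ! One-time relocation, sketched only; protections and ownership on the
$ ! copies need checking afterwards.
$ CREATE/DIRECTORY DISK$COMMON:[CLUSTER_COMMON]
$ COPY/LOG SYS$COMMON:[SYSEXE]SYSUAF.DAT     DISK$COMMON:[CLUSTER_COMMON]
$ COPY/LOG SYS$COMMON:[SYSEXE]RIGHTSLIST.DAT DISK$COMMON:[CLUSTER_COMMON]
$ ! ...repeat for the other shareable files, then put the DEFINEs into
$ ! SYS$MANAGER:SYLOGICALS.COM on *every* system disk, and undo/redo the
$ ! whole exercise around each patch or upgrade.
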
> Once such a disk is customized, one can't just do a fresh install
> without re-customizing it. If one has documented what one has done, OK,
> but that is extra work, and new stuff has to be folded in as it comes
> in.
>
> One really needs a search list which would be SYS$SPECIFIC, then some
> cluster-common area, then SYS$COMMON. A bit strange since the first and
> third are on the same disk, but there you go. I think redefining
> SYS$SYSROOT like this is probably not supported. If this worked, then
> one just has to put the common stuff on the non-system disk and it will
> be found automatically. There could be a special file which contains
> one line, the definition of a logical to point to this disk. This is
> then the only thing which would need changing after a completely fresh
> install. There is something attractive about upgrading the system disk
> and being able to see the history etc, but if something doesn't work,
> probably the only way one could expect help would be to do a fresh
> install.
>
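Just to illustrate the shape of that idea, and only the shape (as said
above, redefining SYS$SYSROOT this way is probably not supported), it would
look something like this, with invented names throughout:

$ ! Purely illustrative, not a supported configuration.  The one-line
$ ! "special file" would hold nothing but the first DEFINE.
$ DEFINE/SYSTEM/EXEC CLUSTER_COMMON DISK$COMMON:[CLUSTER_COMMON]
$ DEFINE/SYSTEM/EXEC SYS$SYSROOT    SYS$SPECIFIC:, CLUSTER_COMMON:, SYS$COMMON:
$ ! Anything looked up via SYS$SYSROOT would then be found node-specific
$ ! first, cluster-common second, and in the plain VMS common root last.
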
> I recently bought an iPad Pro, primarily because it is very close to A4
> size and hence well suited to displaying stuff designed to be printed on
> A4 paper, so it's a practical mobile PDF reader. It's also useful to
> have mobile internet these days, for various reasons. I just had an OS
> upgrade, which took just a few minutes. :-) I don't think that VMS
> should go this far, but some design improvement would be nice. (On the
> other hand, I noticed that my DECterm was not very responsive. I then
> turned off the WLAN on the iPad and now things are back to normal. :-(
> There was nothing running on the iPad, but apparently it does a lot of
> stuff in the background. The iPad talks to an access point I bought
> today which goes to the same LAN the VMS cluster is on, then to a
> 17.7/1.2 Mb/s ADSL connection. (I'll probably upgrade to 50/10 or
> something in the next few days---not because of the iPad; I've had it on
> the list of things to do for a while.) I bought a new access point
> since when configuring the iPad it connected to the internet without
> asking for a WiFi password. I had bought an access point about 5 years
> ago for my wife's iPad 2, which had worked fine. Why it was open for
> all I don't know. Its static IP address had also changed. I managed
> to reconfigure it from scratch with an installation CD, set things up
> like it was before (static IP address, custom password, MAC filtering,
> strong encryption, management access only over LAN---the works) and an
> hour or two later it was back to the open configuration. Then I
> reconfigured it again from scratch, did a firmware upgrade, and things
> worked properly, but after another couple of hours it had reverted to
> being insecure and I couldn't reconfigure it with the installation CD.
> So either a bug (the most recent firmware was from 2011, which came out
> just after I had bought it; it had 2008 firmware on it) or someone had
> hacked into it. So, I decided a newer model, which will be supported
> for a while, was probably a better idea.) What VMS should do is clean
> things up, but not break existing installations.
>
> Sound impossible? Not really. Things CAN be set up properly now; it's
> just a lot of work, and with each new version of VMS one has to review
> one's setup. It would be nice to have a standard procedure which does
> all the necessary definitions, so all one needs to do is tell it the
> name of the disk where the off-the-system-disk stuff is to reside. This
> would make it easier for folks to do a rolling change from their custom
> job to the new standard. After a couple of years, when everyone could
> be using this, VSI could change things under the hood and we wouldn't
> have to care about the details anymore, since it would just work out of
> the box.
>
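A sketch of what such a standard procedure could look like; the name
CLUSTER_COMMON_SETUP.COM and the directory are invented, and the point is
only that the disk name is the sole input:

$ ! CLUSTER_COMMON_SETUP.COM -- hypothetical, not an existing VSI procedure.
$ ! P1 = device holding the off-the-system-disk files, e.g. DSA1:
$ IF P1 .EQS. "" THEN INQUIRE P1 "Device holding the cluster-common files"
$ DEFINE/SYSTEM/EXEC CLUSTER_COMMON  'P1'[CLUSTER_COMMON]
$ DEFINE/SYSTEM/EXEC SYSUAF          CLUSTER_COMMON:SYSUAF.DAT
$ DEFINE/SYSTEM/EXEC RIGHTSLIST      CLUSTER_COMMON:RIGHTSLIST.DAT
$ DEFINE/SYSTEM/EXEC NETPROXY        CLUSTER_COMMON:NETPROXY.DAT
$ DEFINE/SYSTEM/EXEC VMSMAIL_PROFILE CLUSTER_COMMON:VMSMAIL_PROFILE.DATA
$ ! ...and so on for every file on the standard list, so nobody has to
$ ! remember what the list is from one version to the next.
$ EXIT
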
> But this should be a longer-term goal. The main goal now is VMS on x86,
> and it would be best to do that and leave everything else as it is, then
> move on to the next topic, and so on. Changing more than one thing at
> once makes debugging hard.
>
Caveat: I don't do clusters ....

However, reading about the problems, over, and over, and over, it appears to my
unknowledgeable self that the problem is in having a subset of the OS stuff that
is sometimes desired in some common location: sometimes with the rest of the OS
stuff, and sometimes not.

Even with Steve's idea of a single database to hold the pieces of data, you'd
still be looking at moving the database, and pointers to it, manually, at least
as things are today.

Regardless of whether it is a group of files or a single file (database), perhaps
have that data as a separate part of the installation, with the installer
specifying the location of the common data, along with a procedure to move the
data. Thus all the grunt work would be avoided, upgrades would not need the
common data moved back to a system disk, and upgrades affecting the common data
would be a separate part of the total upgrade.
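
Something along these lines, purely as a sketch of the idea (every name here is
invented); the installer, or an upgrade, would run it with the chosen location:

$ ! MOVE_CLUSTER_COMMON.COM -- hypothetical helper, not part of any kit.
$ ! P1 = target device:[directory] for the common data, e.g. DSA1:[COMMON]
$ IF P1 .EQS. "" THEN INQUIRE P1 "Where should the cluster-common data live"
$ CREATE/DIRECTORY 'P1'
$ COPY/LOG SYS$COMMON:[SYSEXE]SYSUAF.DAT 'P1'
$ DEFINE/SYSTEM/EXEC SYSUAF 'P1'SYSUAF.DAT
$ ! ...repeat for the rest of the standard list; an upgrade would run the
$ ! same thing in reverse, and the chosen location would be recorded in one
$ ! well-known place so the next upgrade can find it again.
$ EXIT
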
People are for doing unique things. Computers are for doing repetitive tasks.
This issue of the cluster common stuff sure sounds like repetitive stuff.
But what do I know ???