[Info-vax] VSI and Process Software announcement
Stephen Hoffman
seaohveh at hoffmanlabs.invalid
Fri Sep 23 16:05:55 EDT 2016
On 2016-09-23 18:49:54 +0000, Paul Sture said:
> On 2016-09-23, David Froble <davef at tsoft-inc.com> wrote:
>> Bob Koehler wrote:
>>> In article <32d1c9c9-97a7-45ca-9f90-f769da2a53c7 at googlegroups.com>,
>>> IanD <iloveopenvms at gmail.com> writes:
>>>> I assume when they mean 'older stack' they are meaning TCPIP Services?
>>>>
>>> Yes, the one many of us still stubbornly refer to as UCX, is what I
>>> assumed, too. What other 'older stack' would VSI have, that they could
>>> plan its demise?
>>
>> Got to wonder why you are stubborn. UCX was the product prior to TCP/IP V5.
>> The TCP/IP V5 stuff came from the T64 product. Not UCX.
UCX was the prefix. The product name has been TCP/IP Services for a
very long time. There's all sorts of history and baggage here, too —
both organizational and technical...
We're probably also going to get to go through yet another facility
prefix change with this new VSI work, but that's fodder for some future
discussion...
> The user interface stayed much the same as UCX V4 though.
>
> When I first heard that the Tru64 version was being "imported" I was
> kind of hoping for a better interface, albeit not one "too Unixy", but
> that didn't happen.
Looking forward to 2020 and beyond, rather than looking back...
Get DHCP working out of the box. OpenVMS boots up, requests a DHCP
address, and allows (only) remote-management connections and an ssh
server. Generate a system password based on the server serial number;
that's available on Itanium. Maybe use the MAC address if there's no
serial number set or the x86 box doesn't provide one, though that ends
up being far too obvious. This is to allow full remote management on
first boot, right after installing the bits. Get SNMPv3 working, and
also certificate-protected TNT/Argus, and ssh as part of this
configuration. (Want to play in the cloud, or in a VM, or a hosted
data center or such? You can't always assume, and can't depend on,
having a hardware console and remote access to same...)
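To make that concrete, here's a minimal sketch in Python of the sort
of derivation I have in mind; the serial number is a placeholder, and
a real implementation would pull it from the firmware and use a rather
better scheme than a truncated hash:

    import hashlib
    import uuid

    def first_boot_password(serial=None):
        """Derive a one-time setup password from the box serial number,
        falling back to the MAC address when no serial is available.
        A MAC-derived password is predictable, hence 'far too obvious'."""
        seed = serial or "%012X" % uuid.getnode()   # uuid.getnode() is the MAC
        digest = hashlib.sha256(seed.encode("ascii")).hexdigest()
        return digest[:16].upper()                  # short enough to type once

    print(first_boot_password("XY12345678"))        # hypothetical serial number
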
Load common root certificates as part of the install. Mozilla via
curl or however y'all choose to do that. Preferably also one set of
directories for certificates, and not having them scattered around
Apache and SSL and SSL1 and who-knows-where-else...
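Grabbing the curl project's packaging of the Mozilla root store is
about this much work; where the file lands is, of course, the part
that needs one agreed-upon directory (the destination path below is
purely illustrative):

    import urllib.request

    # The curl project republishes the Mozilla root certificates as one PEM file.
    CA_BUNDLE_URL = "https://curl.se/ca/cacert.pem"

    # One well-known landing spot, rather than copies scattered across
    # Apache, SSL, SSL1, and who-knows-where-else.
    urllib.request.urlretrieve(CA_BUNDLE_URL, "cacert.pem")
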
Always create all server directories, using consistent and reserved
UICs. Offer to migrate the existing morass over to consistent users
and UICs during an OpenVMS upgrade. There's no point in not creating
all of this stuff, and no reason not to load templates or whatever
other pieces are needed. Allow the system manager the choice of which
services to start or not, via a consistent command interface that is
also available as a callable API. But don't add all the complexity of
checking for directories and users and the rest, both for the VSI folks
and services, and for the ISVs and developers. It's always there,
it's always consistent. We aren't on VAX boxes and we aren't on 456MB
disks, after all.
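Purely to illustrate the one-command-interface, one-callable-API
point (nothing below is an existing OpenVMS or TCP/IP Services
interface; it's a toy I've invented for the example):

    # Hypothetical service-control surface; the names and behavior are invented.
    class ServiceManager:
        def __init__(self):
            self._enabled = {}                  # service name -> enabled flag

        def enable(self, name):
            """Mark a service to be started; directories, users, and UICs
            are assumed to already exist, so there's nothing to check."""
            self._enabled[name] = True

        def disable(self, name):
            self._enabled[name] = False

        def enabled(self):
            return sorted(n for n, on in self._enabled.items() if on)

    mgr = ServiceManager()
    mgr.enable("ssh")
    mgr.disable("ftp")
    print(mgr.enabled())                        # ['ssh']
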
(This UIC mess shouldn't even exist. It's a throwback to RSX-11M and
a very old and very limited world-view. Use UUIDs here. This avoids
most of the messes of UICs, including collisions. Allow users and
applications and application bundles to have associated UUIDs. UUIDs
and containers, properly applied, also get away from the mess that is
facility prefixes. That's obviously out of scope for IP work; it's
part of the authentication overhaul, and part of the move toward more
modern and modular application management.)
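For what it's worth, RFC 4122 identifiers are cheap to mint and
effectively collision-free; a trivial illustration in Python:

    import uuid

    # One identifier per user, application, or application bundle; unlike a
    # UIC there's no [group,member] structure to collide on or coordinate.
    user_id = uuid.uuid4()
    bundle_id = uuid.uuid4()
    print(user_id, bundle_id)
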
Install Apache as part of the base distro.
Install LDAP, and particularly an LDAP server, as part of the base
distro. This is in preparation for hauling the whole cluster management and
authentication morass forward to this millennium.
Get the configuration morass under control. That isn't a combination
of a command-line tool, a DCL menu, and a plethora of rustic, artisanal
configuration files. Pick one, preferably a replacement for the
command-line tool. The menu morass exists for a variety of reasons,
but one of them is that there are no in-built menu-generation tools;
something that would let OpenVMS users avoid rolling their own, wildly
inconsistent menus. Some use ^Z, some use QUIT, some EXIT, some use 9
or 99, some others use who-knows-what-else. I don't care if this is
DCL (well, I'd prefer it not be, or at least not require, DCL), or
some other scripting language, or some other tool. But the lack of
this interface means that everybody does configuration tools
differently...
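Something like the following is roughly the building block I'd want
shipped; this little Python loop, its prompt, and its single EXIT
convention are my own invention, purely for illustration:

    def run_menu(title, choices):
        """Minimal reusable menu loop: numbered choices plus one consistent
        exit convention, instead of per-tool ^Z/QUIT/EXIT/99 variations."""
        while True:
            print(title)
            for number, (label, action) in enumerate(choices, start=1):
                print(f"  {number}. {label}")
            reply = input("Choice (or EXIT): ").strip().upper()
            if reply == "EXIT":
                return
            if reply.isdigit() and 1 <= int(reply) <= len(choices):
                choices[int(reply) - 1][1]()
            else:
                print("Unrecognized choice.")

    # Hypothetical usage:
    run_menu("TCP/IP configuration", [
        ("Enable ssh", lambda: print("ssh enabled")),
        ("Disable ftp", lambda: print("ftp disabled")),
    ])
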
Get ftp and telnet out of the default configurations and menus, and
make folks work to enable those, and any other insecure transports,
services or tools. Lead folks to ssh, sftp, and related. Get DECnet
out of the default install path, for the same reasons. Don't
enable stupid choices.
Disable all network services that are incompatible with DHCP, whenever
DHCP is enabled.
Make IP cluster-aware, such that you're really configuring the cluster
akin to one host, including automatically finding and sharing
configuration data. There are still some advantages to clustering on
OpenVMS, but the scatter-shot cluster management stinks and definitely
makes the whole clustering implementation far less desirable. (This
likely ties into LDAP, though there are other ways to do this.)
Don't use RMS indexed files for TCP/IP Services configuration data.
Use SQLite databases, or something that makes rolling upgrades in
particular less complex, less constrained, and less hack-ish. Preferably
use one file, not the usual blizzard of files. Maybe give OpenVMS
directories that specifically store configuration data for
cluster-wide, system-wide, application-specific and user-specific
activities, but that's quite possibly out of scope of any IP stack work.
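A rough sketch of the one-file idea in Python, with the table layout
and the settings invented purely for illustration:

    import sqlite3

    # One configuration file for the stack; the schema here is made up for
    # the example, not any actual TCP/IP Services layout.
    db = sqlite3.connect("tcpip_config.sqlite")
    db.execute("""CREATE TABLE IF NOT EXISTS config
                  (scope TEXT, name TEXT, value TEXT,
                   PRIMARY KEY (scope, name))""")
    db.execute("INSERT OR REPLACE INTO config VALUES (?, ?, ?)",
               ("system", "ssh.enabled", "1"))
    db.commit()
    print(db.execute("SELECT value FROM config WHERE scope=? AND name=?",
                     ("system", "ssh.enabled")).fetchone()[0])
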
That's off the top... There's rather more work here, to bring OpenVMS
up to what's increasingly expected, and what will be expected in 2020
and beyond...
--
Pure Personal Opinion | HoffmanLabs LLC