[Info-vax] openvms and xterm
Dan Cross
cross at spitfire.i.gajendra.net
Mon Apr 22 22:07:00 EDT 2024
In article <v06t9m$fni$4 at tncsrv09.home.tnetconsulting.net>,
Grant Taylor <gtaylor at tnetconsulting.net> wrote:
>On 4/21/24 21:37, Dan Cross wrote:
>> Sendmail.cf was hardly typical of most Unix configuration files,
>
>I'll argue that sendmail.cf or sendmail.mc aren't as much configuration
>files as they are source code for a domain specific language used to
>impart configuration on the sendmail binary. In some ways it's closer
>to modifying a Makefile / header file for a program than a more typical
>configuration file.
That seems like a distinction with little difference. Most
configuration files are in some format that can be considered a
DSL.
Regardless, I wouldn't consider sendmail's config stuff anywhere
analogous to a Makefile or header; more like APL source code
perhaps.
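(For anyone who hasn't had the pleasure: sendmail rewrite rules are
tab-separated pattern/replacement pairs built out of $*, $+ and
positional $1/$2 tokens. From memory, and only to give the flavor
rather than to quote any real configuration, a rule looks roughly like

    R$* < @ $+ . > $*       $1 < @ $2 > $3          strip trailing dot

which is the sort of thing that invites the line-noise comparison.)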
>> You may have a point, but to suggest that anyone who objects to systemd
>> doesn't "have an argument" or is reactionarily change averse is going
>> too far. There are valid arguments against systemd, in particular.
>
>Agreed.
>
>> I'll concede that modern Unix systems (including Linux systems), that
>> work in terms of services, need a robust service management subsystem.
>
>For the sake of discussion, please explain why traditional SysV init
>scripts aren't a service management subsystem / facility / etc.
Among other things, there's no ongoing monitoring of the state
of a service. Init scripts start a program, and maybe know how
to stop it, but that's about it. If it faults? You're kind of
on your own. There's limited support for restarting things that
fail with `init` and `inittab`, but that's not really the same
thing.
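To make the contrast concrete: about the only "keep it running"
facility classic SysV offered was an inittab respawn entry, roughly
something like the following (the daemon name is made up):

    # /etc/inittab -- id:runlevels:action:process
    myd:2345:respawn:/usr/sbin/mydaemon

init restarts the process whenever it exits, but there's no notion of
health beyond "the process exists", and nothing comparable for things
started from /etc/init.d scripts.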
>> If one takes a step back and thinks about what such a service
>> management framework actually has to do, a few things pop out: managing
>> the processes that implement the service, including possibly running
>> commands both before the "main" process starts and possibly after
>> it ends. It must manage dependencies between services; arguably it
>> should manage the resources assigned to services.
>
>I feel like the pre / post commands should not be part of the system
>management ${PICK_YOUR_TERM}. Instead there should be a command (script
>or binary) that can be called to start / stop / restart / etc. a service,
>and it should be the responsibility of that command (...) to run the pre
>and / or post commands related to the actual primary program executable.

>
>I feel like the traditional SysV / /etc/init.d scripts did the pre and /
>or post commands fairly well.
>
>What the SysV init system didn't do is manage dependencies. Instead
>that dependency management was offloaded to the system administrator.
What, lexicographical sorting of filenames isn't good enough for
you or something? :-)
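(For anyone who missed that era: the "dependency management" was the
start order falling out of names like these, give or take the actual
numbers:

    $ ls /etc/rc3.d
    S10network  S12syslog  S80sendmail  S99local

with matching K* scripts to shut things down in roughly the reverse
order.)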
>> So this suggests that it should expose some way to express
>> inter-service dependencies, presumably with some sort of
>> human-maintainable representation; it must support some sort of
>> inter-service scheduling to satisfy those dependencies; and it
>> must work with the operating system to enforce resource management
>> constraints.
>
>I'm okay with that in spirit. But I'm not okay with what I've witnessed
>of the execution of this. I've seen a service restart, when a HUP would
>suffice, cause multiple other things to stop and restart because of the
>dependency configuration.
>
>Yes, things like a web server and an email server probably really do
>need networking. But that doesn't mean that they need the primary
>Ethernet interface to be up/up. The loopback / localhost and other
>Ethernet interfaces are probably more than sufficient to keep the
>servers happy while I re-configure the primary Ethernet interface.
That's an implementation detail, suggesting that the system was
either insufficiently rich to capture that sort of dependency,
or improperly configured so as to be too strict.
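In systemd terms (just as an illustration, not a claim about any
particular distribution's unit files), that's roughly the difference
between the strict form

    [Unit]
    Requires=network-online.target
    After=network-online.target

and the looser

    [Unit]
    Wants=network.target
    After=network.target

where the Wants=/After= form only orders startup and doesn't propagate
stops, which is usually closer to what a web or mail server actually
needs while an interface is being reconfigured.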
>> But what else should it do? Does it necessarily need to handle
>> logging, or network interface management, or provide name or time
>> services? Probably not.
>
>I think almost certainly not. Or more specifically I think that -- what
>I colloquially call -- an init system should keep its bits off name
>resolution and network interface management.
>
>> SMF didn't do all of that (it did sort of support logging, but not
>> the rest), and that was fine.
>
>The only bits of logging that I've seen in association with SMF was
>logging of SMF's processing of starting / stopping / etc. services. The
>rest of the logging was handled by the standard system logging daemon.
It depends on the service. Most only log that the service was
started/stopped, but some have more verbose logging. By default
a method's standard output and error are connected to the
per-instance log file. See svc.startd(8) for details.
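For example, on an illumos or Solaris box something like

    $ svcs -L svc:/network/smtp:sendmail
    /var/svc/log/network-smtp:sendmail.log

shows where that per-instance log lives; whatever the start method
writes to stdout/stderr ends up there.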
>> And is it enough? I would argue not really, and this is really the
>> issue with the big monolithic approach that something like systemd
>> takes. What does it mean for each and every service to be "up"?
>> Is systemd able to express that sufficiently richly in all cases?
>> How does one express the sorts of probes that would be used to test,
>> anyway?
>
>I would argue that this is a status / ping operation that a venerable
>init script should provide and manage.
>
>If the system management framework wants to periodically call the init
>script to check the status of the process, fine. Let the service's init
>script manage what tests are done and how to do them. The service's
>init script almost certainly knows more about the service than a generic
>init / service lifecycle manager thing.
Delegation to some service-specific component seems like the
most general approach. Notably, this is something where systemd
doesn't do a great job.
>I feel like there are many layering violations in the pursuit of a
>service lifecycle manager.
>
>Here's a thought: have a separate system that does monitoring / health
>checks of things and have it report its findings and possibly try to
>restart the unhealthy service using the init / SMF / etc. system if
>that is necessary.
>
>Multiple sub-systems should work in concert with each other. No single
>subsystem should try to do multiple subsystems' jobs.
An issue here is the implementation of Unix (which, given that
this is a VMS newsgroup, I do feel compelled to say may not be
the light and the way that some people think it is). You have
things like process exit status reporting that works within
process hierarchies, but less so across them. E.g., you can't
`waitpid` for a process that isn't your own descendant. Which
in turn implies that a separate service trying to maintain
state about whether a thing is running or not faces other
complications. There have been some patches to do this in
Linux, but it's not clear that they all made it into the
kernel, and they are not portable regardless. One can play
games with ptrace, but it's all a bit hacky.
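As a sketch of what "doing this in Linux" looks like: pidfd_open(2)
(Linux 5.3 and later, so Linux-only) at least lets an unrelated monitor
notice that a process has exited, even though it still can't collect
the exit status the way waitpid(2) can for a direct child. Something
like the following, with a made-up target PID:

    /* Sketch only: detect exit of an unrelated process via a pidfd.
     * Linux-specific (pidfd_open is Linux 5.3+); the raw syscall is
     * used because older libcs lack a wrapper.  TARGET_PID is a
     * made-up example, not anything from this thread. */
    #include <poll.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define TARGET_PID 1234   /* hypothetical service PID */

    int main(void)
    {
        int pidfd = (int)syscall(SYS_pidfd_open, (pid_t)TARGET_PID, 0);
        if (pidfd < 0) {
            perror("pidfd_open");
            return 1;
        }

        /* The pidfd becomes readable when the process terminates. */
        struct pollfd pfd = { .fd = pidfd, .events = POLLIN };
        if (poll(&pfd, 1, -1) == 1 && (pfd.revents & POLLIN))
            printf("pid %d has exited\n", TARGET_PID);

        close(pidfd);
        return 0;
    }

Useful, but it's exactly the sort of non-portable, bolted-on mechanism
that makes an external monitor awkward compared to a parent simply
waiting on its own children.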
>> The counter that things like NTP can drag in big dependencies that
>> aren't needed (for something that's arguably table stakes, like
>> time) feels valid, but that's more of an indictment of the upstream
>> NTP projects, rather than justification for building it all into
>> a monolith.
>
>+10
>
>> Anyway. I can get behind the idea that modern service management
>> is essential for server operation. But it doesn't follow that the
>> expression of that concept in systemd is a great example of how to
>> do it.
>
>+1
- Dan C.