[Info-vax] openvms and xterm
Grant Taylor
gtaylor at tnetconsulting.net
Mon Apr 22 19:53:26 EDT 2024
On 4/21/24 21:37, Dan Cross wrote:
> Sendmail.cf was hardly typical of most Unix configuration files,
I'll argue that sendmail.cf and sendmail.mc aren't so much configuration
files as they are source code in a domain-specific language used to
configure the sendmail binary. In some ways it's closer to modifying a
Makefile or header file for a program than to a more typical
configuration file.
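To make that concrete, here's roughly what a minimal sendmail.mc looks like (the hostname and cf.m4 path below are illustrative; they vary by distribution). It's m4 macro source, not key=value settings:

```m4
dnl Illustrative minimal sendmail.mc -- m4 macros, not key=value settings.
include(`/usr/share/sendmail-cf/m4/cf.m4')dnl  dnl path varies by distro
OSTYPE(`linux')dnl
define(`confSMART_HOST', `relay.example.com')dnl
MAILER(`smtp')dnl
```

and it gets "compiled" into the actual sendmail.cf with something like `m4 sendmail.mc > sendmail.cf` -- i.e., a build step, just like source code.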
> You may have a point, but to suggest that anyone who objects to systemd
> doesn't "have an argument" or is reactionarily change averse is going
> too far. There are valid arguments against systemd, in particular.
Agreed.
> I'll concede that modern Unix systems (including Linux systems), that
> work in terms of services, need a robust service management subsystem.
For the sake of discussion, please explain why traditional SysV init
scripts aren't a service management subsystem / facility / etc.
> If one takes a step back and thinks about what such a service
> management framework actually has to do, a few things pop out: managing
> the processes that implement the service, including possibly running
> commands both before the "main" process starts and possibly after
> it ends. It must manage dependencies between services; arguably it
> should manage the resources assigned to services.
I feel like the pre / post commands should not be part of the system
management ${PICK_YOUR_TERM}. Instead there should be a command (script
or binary) that can be called to start / stop / restart / etc. a
service, and it should be the responsibility of that command to run the
pre and / or post commands related to the actual primary program
executable.
I feel like the traditional SysV /etc/init.d scripts did the pre and /
or post commands fairly well.
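As a sketch of the shape I mean -- the pre / post steps live in the service's own script, not in the service manager. The daemon here is a stand-in (`sleep`), and the pidfile path is made up for the demo:

```shell
#!/bin/sh
# Sketch of a SysV-style service script that owns its own pre/post steps.
# "sleep 300" stands in for the real daemon; a real script would use
# /var/run and the actual daemon binary.
PIDFILE=/tmp/demo-svc.pid

pre_start()  { : ; }                 # e.g. check config syntax, make spool dirs
post_stop()  { rm -f "$PIDFILE"; }   # e.g. remove sockets, stale state

svc() {
  case "$1" in
    start)
      pre_start
      sleep 300 &                    # stand-in for the real daemon
      echo $! > "$PIDFILE"
      ;;
    stop)
      [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" 2>/dev/null
      post_stop
      ;;
    status)                          # LSB convention: exit 0 running, 3 stopped
      [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null
      ;;
  esac
}

svc start
svc status && echo "running"
svc stop
svc status || echo "stopped"
```

The point being: a generic manager only ever needs to invoke start / stop / status; everything service-specific stays behind that interface.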
What the SysV init system didn't do is manage dependencies. Instead
that dependency management was offloaded to the system administrator.
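Concretely, that offloading took the form of the administrator (or the package) picking the numeric prefixes on the rc symlinks. Start order was the only "dependency" mechanism, and nothing enforced it (names below are illustrative):

```
/etc/rc2.d/S10network  -> ../init.d/network    # lower number: started earlier
/etc/rc2.d/S20syslog   -> ../init.d/syslog
/etc/rc2.d/S80sendmail -> ../init.d/sendmail   # "needs the network" only via 80 > 10
```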
> So this suggests that it should expose some way to express
> inter-service dependencies, presumably with some sort of
> human-maintainable representation; it must support some sort of
> inter-service scheduling to satisfy those dependencies; and it
> must work with the operating system to enforce resource management
> constraints.
I'm okay with that in spirit. But I'm not okay with how I've witnessed
this executed. I've seen a service restart, when a HUP would have
sufficed, cause multiple other things to stop and restart because of the
dependency configuration.
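For what it's worth, in systemd terms that propagation is a choice the unit author made: `Requires=` (and more strongly `BindsTo=` / `PartOf=`) propagates stops and restarts, while `Wants=` plus `After=` only affects startup. A sketch (unit names illustrative):

```
[Unit]
# Wants= pulls foo.service in at boot; if foo is later restarted,
# this unit is left alone.  Requires= would have propagated the
# stop/restart -- the behavior complained about above.
Wants=foo.service
After=foo.service
```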
Yes, things like a web server and an email server probably really do
need networking. But that doesn't mean that they need the primary
Ethernet interface to be up/up. The loopback / localhost and other
Ethernet interfaces are probably more than sufficient to keep the
servers happy while I re-configure the primary Ethernet interface.
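systemd does nominally draw this distinction: `network.target` means little more than "the network stack is up" (loopback included), while `network-online.target` tries to mean "externally routable." A daemon that's happy listening on loopback can take only the weaker ordering (sketch; unit name illustrative):

```
[Unit]
Description=Daemon that only needs a socket to listen on
# Ordering against network.target only: don't wait for -- or get
# bounced because of -- the primary interface coming and going.
After=network.target
```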
> But what else should it do? Does it necessarily need to handle
> logging, or network interface management, or provide name or time
> services? Probably not.
I think almost certainly not. Or more specifically, I think that -- what
I colloquially call -- an init system should keep its bits off name
resolution and network interface management.
> SMF didn't do all of that (it did sort of support logging, but not
> the rest), and that was fine.
The only bits of logging that I've seen in association with SMF were
logging of SMF's own processing of starting / stopping / etc. services.
The rest of the logging was handled by the standard system logging
daemon.
> And is it enough? I would argue not really, and this is really the
> issue with the big monolithic approach that something like systemd
> takes. What does it mean for each and every service to be "up"?
> Is systemd able to express that sufficiently richly in all cases?
> How does one express the sorts of probes that would be used to test,
> anyway?
I would argue that this is a status / ping operation that a venerable
init script should provide and manage.
If the system management framework wants to periodically call the init
script to check the status of the process, fine. Let the service's init
script manage what tests are done and how to do them. The service's
init script almost certainly knows more about the service than a generic
init / service lifecycle manager thing.
I feel like there are many layering violations in the pursuit of a
service lifecycle manager.
Here's a thought: have a separate system that does monitoring / health
checks of things, have it report its findings, and, when necessary, have
it try to restart the unhealthy service using the init / SMF / etc.
system.
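A rough sketch of that division of labor in shell (the `check_and_heal` name and the fake init script are mine, not any real tool's): the monitor only calls the service script's own status / restart actions and never touches the process directly.

```shell
#!/bin/sh
# Sketch: monitoring kept separate from the init system.  The fake
# script below stands in for a real /etc/init.d/myservice.
FAKE=/tmp/fake-init-demo
cat > "$FAKE" <<'EOF'
#!/bin/sh
case "$1" in
  status)  exit 3 ;;             # pretend the service is down
  restart) echo restarted ;;
esac
EOF
chmod +x "$FAKE"

check_and_heal() {               # hypothetical monitor step
  if "$1" status >/dev/null 2>&1
  then echo "healthy"
  else "$1" restart              # delegate recovery to the init system
  fi
}

check_and_heal "$FAKE"           # prints "restarted"
rm -f "$FAKE"
```

A real monitor would run that check on an interval; the key property is that the monitor observes and requests, while the init system remains the one thing that actually manages the process.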
Multiple sub-systems should work in concert with each other. No single
subsystem should try to do multiple subsystems' jobs.
> The counter that things like NTP can drag in big dependencies that
> aren't needed (for something that's arguably table stakes, like
> time) feels valid, but that's more of an indictment of the upstream
> NTP projects, rather than justification for building it all into
> a monolith.
+10
> Anyway. I can get behind the idea that modern service management
> is essential for server operation. But it doesn't follow that the
> expression of that concept in systemd is a great example of how to
> do it.
+1
--
Grant. . . .