[Info-vax] New OpenSSL update from HP
David Froble
davef at tsoft-inc.com
Tue Jun 16 10:35:35 EDT 2015
Simon Clubley wrote:
> On 2015-06-15, David Froble <davef at tsoft-inc.com> wrote:
>> Simon Clubley wrote:
>>> On 2015-06-15, David Froble <davef at tsoft-inc.com> wrote:
>>>> Ok, I've never written a web server, yet. If I was to do so, I'd
>>>> seriously look at detached processes rather than sub-processes. Why?
>>>> Because once you assign the worker process a task, perhaps you'd want it
>>>> to complete, regardless of whatever the web server does.
>>>>
> What happens if you need to shutdown/restart the web server and need
> to _guarantee_ that all related processes have terminated as part of
> the shutdown process ?
>>>
>> Couple of issues there.
>>
>> First, a worker process lifetime should be measured in fractions of a
>> second, or a couple seconds at most.
>>
>> The listener could keep track of the PIDs that it created, and a
>> shutdown command could terminate them in the chosen manner.
>>
>> That said, it's been my impression that web server transactions are
>> stateless, (if I understand that term), and usually occur in a rather
>> short time period, such as a fraction of a second. Do you have examples
>> of when a worker process would remain active for an extended period of
>> time? I'm curious.
>>
>
> I was responding to the specific technical issues around your desire
> to make them detached processes instead of subprocesses.
>
> Also, connections remain open by default after the initial request has
> been completed; these are called persistent connections. See:
>
> https://en.wikipedia.org/wiki/HTTP_persistent_connection
>
> In order to implement this functionality, your detached process would
> have to remain around after the initial request has completed, and it
> would not automatically die (unlike a subprocess) if the parent web
> server process croaked due to some internal bug.
>
> Simon.
>
I don't use persistent connections, so I don't think about them much.
However, if I had a persistent connection performing some task, with the
connection remaining up, then I'd have to ask: what does the web server
(or whatever) that initially set up the task have to do with the task
completing its job? I'd argue that you might want the task to run to
completion, regardless of what the web server (or whatever) is doing.
I'll also admit that each application's design may have different
requirements.
Now, if a particular application had the requirement that all worker
processes must exit when a "master" process fails, regardless of the
state of any transactions they might be working on, then there is more
than one way of accomplishing that. For example, the master can keep a
list of assigned or active PIDs, and STOP/ID= is a rather large hammer,
one that handles even non-responsive processes.
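The "large hammer" can be sketched outside of DCL as well. Below is a minimal, hypothetical Python (POSIX, not VMS) example of a master keeping a list of worker PIDs and force-terminating them on shutdown; SIGKILL plays the role of STOP/ID= here in that it cannot be caught or ignored, so it handles even hung workers:

```python
import os
import signal
import time

worker_pids = []

# Spawn three workers that ignore SIGTERM and just sleep, simulating
# non-responsive tasks that a polite shutdown request would not reach.
for _ in range(3):
    pid = os.fork()
    if pid == 0:
        signal.signal(signal.SIGTERM, signal.SIG_IGN)  # play dead
        time.sleep(60)
        os._exit(0)
    worker_pids.append(pid)

# Shutdown: the big hammer. SIGKILL cannot be caught or ignored.
for pid in worker_pids:
    os.kill(pid, signal.SIGKILL)

# Reap each worker and confirm it died from SIGKILL.
for pid in worker_pids:
    _, status = os.waitpid(pid, 0)
    assert os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGKILL
print("all workers terminated")
```

The same bookkeeping (a PID list plus a forced-termination primitive) is what a listener process would carry, whatever the operating system.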
As for why detached processes might be better: consider an application
design where, once a task is assigned, you need it to run to completion
regardless of what happens to whatever started it. In that case, as you
indicate, a sub-process offers no such guarantee.
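On Unix the defaults are actually the reverse of VMS (a forked child does not automatically die with its parent), so a hypothetical Linux-specific sketch of this distinction has to emulate subprocess behaviour explicitly with prctl(PR_SET_PDEATHSIG), while the "detached" worker simply takes no such measure (setsid() stands in for full detachment here):

```python
import ctypes
import os
import signal
import time

PR_SET_PDEATHSIG = 1                     # from <sys/prctl.h> (Linux)
libc = ctypes.CDLL(None, use_errno=True)

def alive(pid):
    """True if pid exists and is not a zombie (Linux /proc check)."""
    try:
        with open("/proc/%d/stat" % pid) as f:
            state = f.read().rsplit(")", 1)[1].split()[0]
        return state != "Z"
    except FileNotFoundError:
        return False

def spawn_master():
    """Fork a master that spawns one 'subprocess-style' worker and one
    detached worker, reports their PIDs over a pipe, then croaks."""
    r, w = os.pipe()
    master = os.fork()
    if master == 0:
        os.close(r)
        sr, sw = os.pipe()               # workers signal "setup done" here
        pids = []
        for detach in (False, True):
            pid = os.fork()
            if pid == 0:
                os.close(sr)
                if detach:
                    os.setsid()          # detached: new session, no tie
                else:
                    # die with the master, like a VMS subprocess
                    libc.prctl(PR_SET_PDEATHSIG, signal.SIGKILL)
                os.write(sw, b"x")
                os.close(sw)
                time.sleep(30)           # the "task"
                os._exit(0)
            pids.append(pid)
        os.close(sw)
        got = b""
        while len(got) < 2:              # wait until both workers are set up
            got += os.read(sr, 2)
        os.write(w, ("%d %d" % tuple(pids)).encode())
        os._exit(1)                      # master "croaks" mid-flight
    os.close(w)
    tied, detached = map(int, os.read(r, 64).split())
    os.waitpid(master, 0)
    return tied, detached

tied_pid, detached_pid = spawn_master()
time.sleep(0.5)                          # let PDEATHSIG land
tied_alive = alive(tied_pid)             # False: killed with the master
detached_alive = alive(detached_pid)     # True: runs on regardless
print("tied:", tied_alive, "detached:", detached_alive)
os.kill(detached_pid, signal.SIGKILL)    # clean up the survivor
```

Which of the two behaviours you want is exactly the application-design question: does the task belong to the master, or does it belong to itself?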
One size doesn't fit all.
As I have been designing and implementing web services for some time
now, I think I have a decent understanding of the issues.