[Info-vax] Apache + mod_php performance
Arne Vajhøj
arne at vajhoej.dk
Wed Oct 2 12:58:46 EDT 2024
On 10/2/2024 12:25 PM, Dan Cross wrote:
> In article <vdjpps$fk2$2 at reader1.panix.com>,
> Dan Cross <cross at spitfire.i.gajendra.net> wrote:
>> In article <vdjoui$37f8q$4 at dont-email.me>,
>> Arne Vajhøj <arne at vajhoej.dk> wrote:
>>> On 10/2/2024 11:20 AM, Arne Vajhøj wrote:
>>>> On 10/2/2024 11:07 AM, Dan Cross wrote:
>>>>> In article <vdjmq4$37f8q$3 at dont-email.me>,
>>>>> Arne Vajhøj <arne at vajhoej.dk> wrote:
>>>>>> On 10/2/2024 10:47 AM, Dan Cross wrote:
>>>>>>> [snip]
>>>>>>> You do not seem to understand how this is qualitatively
>>>>>>> different from your test program not sending `Connection: close`
>>>>>>> with its single request per connection, and then blocking until
>>>>>>> the server times it out.
>>>>>>
>>>>>> It is qualitatively different from what you are imagining.
>>>>>>
>>>>>> The client does not block until the server times out.
>>>>>
>>>>> So what, exactly, does it do?
>>>>
>>>> It moves on to next request.
>>>>
>>>> That request will block if the server can't serve it
>>>> because all processes are busy.
>>>>
>>>> > And what is the "problem" that
>>>> > you are imagining here? Please be specific.
>>>>
>>>> Go back to the first post in the thread.
>>>>
>>>> The numbers for Apache are low. Much lower than
>>>> for other servers.
>>>
>>> And the numbers are low due to keep alive.
>>>
>>> Basically Apache on VMS keeps an entire process around for
>>> each kept-alive connection.
>>>
>>> When the Apache configuration does not allow more
>>> processes to start, new requests get queued
>>> until keep-alive connections start to time out and
>>> processes free up.
>>>
>>> And one cannot just increase the number of processes
>>> allowed, because they use 25 MB each. The system
>>> runs out of memory/pagefile fast.
>>>
>>> And it does not help that if Apache kills some
>>> processes, it is expensive to start new ones again,
>>> which means that either the large number of
>>> memory-consuming processes is kept around or Apache
>>> is slow to adjust to increasing load.
>>
>> These are all claims not supported by the _actual_ evidence that
>> you've posted here. While your argument is plausible on the
>> face of it, how did you arrive at this conclusion?
>>
>> Post more details about your setup and experiments.
>
> Let's dig a little deeper here and show that Arne's problem
> is not specific to VMS. Indeed, I can replicate something more
> or less like the results he showed on FreeBSD.
>
> I'm using "siege", which is another testing tool; here, I can
> force HTTP/1.1 and also enable keep-alive via options in its
> configuration file. With 25 worker threads and 1000 queries each,
> I easily saturate the number of workers and hang waiting for
> timeouts, tanking throughput.
>
> So...Not a VMS problem at all.
The basic mechanism is not VMS specific at all.
I assume that it is inherent in the prefork MPM on all
platforms, and in other servers that use a similar
worker-process model.
As I have noted a couple of times, back when the
prefork MPM was common (20 years ago), the question
of whether to have keep-alive on or off was often
discussed.
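For reference, the knobs in play are the keep-alive and prefork
MPM directives in httpd.conf. A sketch (directive names from
Apache 2.x; MaxClients was renamed MaxRequestWorkers in 2.4, and
the numbers are only illustrative):

```apache
KeepAlive On
KeepAliveTimeout 5          # shorter timeout frees tied-up processes sooner
MaxKeepAliveRequests 100

<IfModule mpm_prefork_module>
    StartServers       5
    MinSpareServers    5
    MaxSpareServers   10
    MaxClients       150    # hard cap on worker processes (~25 MB each here)
</IfModule>
```

With keep-alive on, each of those MaxClients processes can sit
idle on an open connection until KeepAliveTimeout expires, which
is exactly the queueing effect discussed above.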
The problem does not seem to impact newer designs
using threads. They obviously still need to keep
the connection open, but I guess they do some
select/poll/epoll/whatever to detect when there is a
new request, so as to keep resource usage minimal.
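That idle-connection handling can be sketched like this (an
illustration only, not Apache's actual code): one event loop
watches many idle keep-alive connections instead of parking a
whole worker process on each.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

# Simulate 100 idle keep-alive connections with socket pairs.
pairs = [socket.socketpair() for _ in range(100)]
for server_side, _client_side in pairs:
    server_side.setblocking(False)
    sel.register(server_side, selectors.EVENT_READ)

# All 100 connections are idle: select() reports nothing ready,
# and no worker is blocked on any of them.
assert sel.select(timeout=0) == []

# A client sends a new request on one connection; only that one wakes up.
pairs[42][1].send(b"GET / HTTP/1.1\r\n\r\n")
ready = sel.select(timeout=1)
print(len(ready))  # only the connection with a pending request
```

The point is that the cost of an idle connection drops from one
process (or thread) to one registered file descriptor.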
But the mechanism hits VMS harder than other platforms.
The *nix fork is way more efficient than SYS$CREPRC for
creating those hundreds or thousands of worker processes.
So on VMS we can afford fewer worker processes, and
starting new ones takes longer, as described above.
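The *nix half of that comparison is easy to measure; the
SYS$CREPRC half would of course need a VMS box. A rough timing
sketch (numbers vary widely by system):

```python
import os
import time

# Measure fork-and-reap cost on a *nix system (illustrative only).
N = 50
start = time.perf_counter()
for _ in range(N):
    pid = os.fork()
    if pid == 0:
        os._exit(0)        # child: exit immediately
    os.waitpid(pid, 0)     # parent: reap the child
elapsed = time.perf_counter() - start
print(f"{N} fork/exit cycles took {elapsed * 1000:.1f} ms")
```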
Arne