[Info-vax] Apache + mod_php performance

Dan Cross cross at spitfire.i.gajendra.net
Mon Sep 30 08:56:35 EDT 2024


In article <66f8a44c$0$716$14726298 at news.sunsite.dk>,
Arne Vajhøj  <arne at vajhoej.dk> wrote:
>On 9/28/2024 10:52 AM, Arne Vajhøj wrote:
>> On 9/27/2024 8:07 PM, Arne Vajhøj wrote:
>>> And we have a solution.
>>>
>>> httpd.conf
>>>
>>> KeepAlive On
>>> ->
>>> KeepAlive Off
>>>
>>> And numbers improve dramatically.
>>>
>>> nop.txt 281 req/sec
>>> nop.php 176 req/sec
>>> real PHP no db con pool 94 req/sec
>>> real PHP db con pool 103 req/sec
>>>
>>> Numbers are not great, but acceptable.
>>>
>>> It is a bug in the code.
>>>
>>> The comment in httpd.conf says:
>>>
>>> # KeepAlive: Whether or not to allow persistent connections (more than
>>> # one request per connection). Set to "Off" to deactivate.
>>>
>>> It does not say that it will cut throughput to a tenth when on.
>> 
>> Note that the problem may not impact anyone in
>> the real world.
>> 
>> I am simulating thousands of independent keep-alive users with a
>> single simulator that does not itself use keep alive.
>> 
>> It could very well be the case that the problem only arises for
>> the simulator and not for real users.
>> 
>> Still weird though.
>
>Another update.
>
>Client side can also impact keep alive.
>
>HTTP 1.0 : no problem
>HTTP 1.1 with "Connection: close" header : no problem
>HTTP 1.1 without "Connection: close" header : problem
>
>Server side:
>
>KeepAlive On -> Off
>
>solves the problem. But it obviously has the drawback of losing
>the keep-alive capability.

Well ... yes.  That's how the protocol works.  Keep-alive is the
default with HTTP/1.1 unless you explicitly send
`Connection: close`.  See RFC 9112, section 9.3 for details.
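
That default is easy to demonstrate with a short, self-contained sketch
(Python standard library only; the local server and handler here are
stand-ins of mine, not the Apache setup from the thread):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # advertise HTTP/1.1 so keep-alive applies

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One HTTPConnection, two requests: neither side sends
# "Connection: close", so HTTP/1.1 keeps the TCP connection open
# and both requests ride the same socket.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
conn.getresponse().read()
sock1 = conn.sock
conn.request("GET", "/")
conn.getresponse().read()
sock2 = conn.sock
print("same socket reused:", sock1 is sock2)
server.shutdown()
```

The second request reuses the first request's socket, which is exactly
the persistence the RFC specifies.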

>Not a disaster. Back in the early 2000s, when the prefork MPM was
>common, KeepAlive Off was sometimes suggested for high-volume
>sites. But it was inconvenient.
>
>With KeepAlive On then we have a performance problem.

Actually, sounds like the bug is in your client, which expects
behavior at odds with that specified in the RFC.
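
For a benchmark client that opens a fresh connection per request,
either of these request forms frees the server worker immediately
instead of leaving it parked for KeepAliveTimeout seconds (host and
path are placeholders of mine, shown as raw bytes for clarity):

```python
# HTTP/1.0: connections are not persistent unless explicitly requested.
req_http10 = (
    b"GET /nop.txt HTTP/1.0\r\n"
    b"Host: www.example.com\r\n"
    b"\r\n"
)

# HTTP/1.1: persistent by default, so a one-shot client must opt out
# with an explicit "Connection: close" header.
req_http11 = (
    b"GET /nop.txt HTTP/1.1\r\n"
    b"Host: www.example.com\r\n"
    b"Connection: close\r\n"
    b"\r\n"
)
```

An HTTP/1.1 request *without* that header tells the server to hold the
connection open, which is the problematic third case above.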

>The cause is that worker processes are unavailable while
>waiting for the next request from a client, even though the
>client is long gone.
>
>That indicates that the cap is:
>
>max throughput (req/sec) = MaxClients / KeepAliveTimeout
>
>The formula holds at low resulting throughput, but it does not
>scale; at higher throughput the achievable rate seems closer to
>1/3 of that cap.
>
>But if one wants keep alive enabled, then it is something one
>can work with.
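
Plugging the thread's numbers into that formula (a worked example of
mine; it assumes the worst case where every child idles for the full
KeepAliveTimeout after serving a single request):

```python
# Upper bound on throughput when each prefork child serves one
# request and then waits out the keep-alive window for a client
# that never comes back.
def keepalive_cap(max_clients, keepalive_timeout_s):
    return max_clients / keepalive_timeout_s

print(keepalive_cap(150, 15))  # stock settings: 10.0 req/sec
print(keepalive_cap(300, 1))   # tuned settings: 300.0 req/sec (cap)
```

With the tuned settings the cap is 300 req/sec, and the observed 100
req/sec is roughly the 1/3 of the cap mentioned above.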
>
>My experiments indicate that:
>
>KeepAlive On
>KeepAliveTimeout 15 -> 1
>MaxSpareServers 50 -> 300
>MaxClients 150 -> 300
>
>is almost acceptable.
>
>nop.txt : 100 req/sec
>
>And 1 second should be more than enough for a browser to request
>the additional assets referenced by a static HTML page.
>
>But having hundreds of processes, each using 25 MB, to serve a 2-byte
>file at such a low throughput is ridiculous.
>
>OSU (or WASD) still seems like a better option.
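
The memory math, using the 25 MB-per-child figure quoted above
(back-of-the-envelope only):

```python
# Rough footprint of the tuned prefork pool: MaxClients children,
# each at the per-process size quoted in the thread.
children = 300
mb_per_child = 25
total_mb = children * mb_per_child
print(total_mb, "MB")  # 7500 MB, i.e. about 7.3 GiB
```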

See above.  Looks like the problem ended up being between the
keyboard and the chair.

	- Dan C.


