sneakyimp wrote: Are you suggesting I just alter the configuration to write a new access log?
Yes, that's exactly what I'm suggesting: writing another log in parallel with your existing access and error logs.
I've done it loads of times; I normally use something like:
LogFormat "%h %t \"%r\" %>s %D" time
CustomLog logs/time_log time
Here %D is the time taken to serve the request, in microseconds - that's the important bit.
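For illustration, a line in the resulting time_log would look roughly like this (the host, timestamp and URL are made up; the final field is the request time in microseconds, so this one took about 2.4 seconds):
192.0.2.10 [12/Jan/2007:10:15:32 +0000] "GET /slow-page.php HTTP/1.1" 200 2417383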
Obviously writing another log introduces some drag and would probably not be advisable on a production machine.
Writing one line to a file which is already open will not add significant load to a server running PHP. Even serving "Hello, World" from PHP is relatively expensive and involves opening several files for each request.
What do you think of my original suggestion...adding that bit of PHP to my application which writes a DB table on really slow pages?
That is going to involve more code AND add more load than using Apache.
I'm nowhere near 100 queries per page (which sounds like a lot! 100??!).
That's good to hear. I don't recommend that you enable all query logging on a production box - that's a disc killer.
does this look bad?
                     total       used       free     shared    buffers     cached
Mem:                765520     716512      49008          0      96160     315376
-/+ buffers/cache:              304976     460544
Swap:              1052248     101744     950504
Not really. You've got 768M of RAM, I assume, and about 100M of swap in use, which is nothing to write home about. The amount of "free" RAM is largely irrelevant, as Linux uses most otherwise-free RAM for disc caching - the "-/+ buffers/cache" line is the one to look at: roughly 305M genuinely in use and about 460M available once the cache is discounted.
I'm looking into your MaxClients and MPM suggestions but:
* does limiting MaxClients mean people will get turned away? That would be bad.
In principle, yes. But in practice, no. Remember that a web browser only counts as a client while it's actually loading a page or during keepalive (which is controlled by your KeepAliveTimeout setting). Just looking at an already-loaded page doesn't count.
Once the MaxClients limit is reached, the OS will queue further connections and send them to Apache as slots become available.
Once the OS queue reaches its limit, further connection attempts will lag and be retried by the client's OS until they eventually time out.
Believe it or not, this is a far, far better scenario than the server running out of virtual memory 🙂
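To make that concrete, here's a rough sketch of the relevant prefork directives - the numbers are illustrative assumptions, not a recommendation for your particular box. ListenBacklog is the Apache directive that sets the size of the OS queue I mentioned:
<IfModule prefork.c>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients           50
    MaxRequestsPerChild 1000
</IfModule>
KeepAliveTimeout 5
ListenBacklog 511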
* there's a dire warning HERE about PHP and threaded MPM. The whole concept of 'thread-safe' is new territory for me. I suppose I'll have to start another thread for that 🙁
I'm well aware of this dire warning, and am extremely pissed off that win32 users can have an allegedly stable threaded version of PHP, but Unix users can't (Remember that win32 doesn't have prefork mode). In fact I started a thread about it.
I ran PHP 5.1.2 with threaded (worker) Apache for many months with no problems; however, I've now upgraded to 5.2.0 and am hitting thread-related issues, so I've switched back to prefork.
In any case, prefork is not a problem; it just uses (much) more memory.
Don't set MaxClients too high - that is counter-productive and may cause your machine to run out of memory. It's better to have clients queuing for a free worker than to have your kernel swapping like mad to give the workers enough RAM.
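As a rough sizing sketch (the per-process figure is an assumption - check your own with top or ps): if each prefork child running PHP settles at around 20M resident, and you set aside roughly 250M of your 768M for the OS, MySQL and disc cache, that leaves about 500M for Apache, i.e. a MaxClients of around 25. Measure your own children rather than trusting those numbers.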
If you have a look at the "server status" screen (provided by Apache's mod_status) during a period of heavy(ish) traffic, you'll see how many workers are in use at any given time. You'll probably be surprised at how low it is.
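If you don't already have it enabled, a minimal mod_status setup looks something like this (the allowed address is just an example - restrict it to wherever you browse from):
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>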
You may have 300k hits per day on your site, but you'll probably find there are moderately long periods, many seconds or even minutes, where no hits arrive at all (i.e. they're all bunched up).
Mark