hello.
i'm coding some web community software (a profiles/comments site) and am having some trouble with it on the (dedicated) server it runs on.
this particular server slowly chews up more and more swap space the longer it stays up. eventually it gets up to a couple hundred MB of swap, at which point iowait shoots through the roof if the server is put under the slightest bit of extra pressure.
when these episodes happen, the server becomes severely bogged down and apache has to be restarted--at which point swap usage shrinks back down to something reasonable and everything seems fine again.
now this might seem an odd question to ask, but i've gone through my apache and mysql configs enough to feel fairly certain that those services should not be consuming much, if any, swap space. no other services are terribly significant to this server's operation, so i'll get to my weird question...
i'm running the ioncube php accelerator (sometimes called phpa), which i know does some caching of compiled php in memory. and in order to reduce the load on mysql, i have some flat files set up which cache comments, etc. the thing is, i'm writing these cache files as .php files that contain arrays, so when i need a certain user's cache file, i simply pull it in with an include_once statement.
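to give a concrete idea, a cache file and the code that reads it look roughly like this (the file names, variable names, and array keys here are just made-up examples, not my actual code):

<?php
// cache/user_1234.php -- example of one generated cache file
$cached_comments = array(
    array('author' => 'alice', 'body' => 'first comment',  'posted' => 1046300000),
    array('author' => 'bob',   'body' => 'second comment', 'posted' => 1046300500),
);
?>

<?php
// in the page code: pull in the per-user cache if it exists,
// otherwise fall back to querying mysql and rewriting the cache file
$cache_file = 'cache/user_' . (int) $user_id . '.php';
if (file_exists($cache_file)) {
    include_once $cache_file;   // defines $cached_comments
} else {
    // query mysql, build $cached_comments, write a fresh cache file
}
?>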
my code only includes a few of these files per run before it exits, but there are literally thousands of these small cache files (one per user, for thousands of users), and any of them might be included on a given execution of the code.
so what i'm asking has two parts to it:
A) Under normal circumstances (without the ioncube accelerator), would including different files each time a certain piece of code is run eventually start to hog up memory?
B) If some kind of php caching is being used, would this make a difference?
I could read in csv files or something instead of includes, but doing it this way makes things easier for what I'm doing.
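in case it helps to see what i mean, the csv version would be something like this (again, just a rough sketch with made-up names):

<?php
// alternative: read a plain csv cache file instead of including a .php file
$cache_file = 'cache/user_' . (int) $user_id . '.csv';
$cached_comments = array();
if (($fp = @fopen($cache_file, 'r')) !== false) {
    while (($row = fgetcsv($fp, 4096)) !== false) {
        // columns: author, body, posted
        $cached_comments[] = array(
            'author' => $row[0],
            'body'   => $row[1],
            'posted' => $row[2],
        );
    }
    fclose($fp);
}
?>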
An answer to either A or B or any suggestions in general would be greatly appreciated.
eon