I'm having a performance problem appending text to a text file on a server. I have two scripts that are almost identical, like this:
<?php
$logfile = "D:/tmp/problem.log";
echo $logfile."\n";
$data = "#########@###############";
echo date("Y-m-d H:i:s")."\n";
for ($i = 0; $i <= 1000; $i++)
{
    if ($i % 100 == 0) echo "$i ";
    // Append one line to the log file on each iteration
    error_log("$data\n", 3, $logfile);
}
echo "\n".date("Y-m-d H:i:s")."\n";
?>
Problem.log is a pre-existing 2 MB text file of about 70K lines. I ran the script against it and it took 4 minutes 47 seconds to finish, which is far too slow considering the hardware spec.
The other script writes into a new, previously non-existent file, and it took just 9 seconds to finish.
This does not make sense to me, because I thought appending text to a file shouldn't be affected by the size of the file being appended to.
Some other facts:
- I tried writing to another, 10 MB file, and it took only a few seconds longer than writing to a non-existent file.
- This does not happen with other 2 MB files.
- I ran the script to append to that same problem.log on another computer, which is only a Pentium 4 with a SATA disk (no RAID), and it took less than 2 seconds to finish.
I've tried using fopen, fwrite, and fclose instead of error_log, with the fopen and fclose outside the for loop, and that actually solves the performance problem. But I'm more interested in knowing what is wrong with the performance of writing to that particular 2 MB file.
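For reference, this is roughly what that variant looks like, with the handle opened once in append mode and reused for every write (a minimal sketch; the path and test string are just the same placeholders as above):

<?php
$logfile = "D:/tmp/problem.log";
$data = "#########@###############";
echo date("Y-m-d H:i:s")."\n";
$fp = fopen($logfile, "a");      // open once, in append mode, outside the loop
for ($i = 0; $i <= 1000; $i++)
{
    if ($i % 100 == 0) echo "$i ";
    fwrite($fp, "$data\n");      // reuse the same handle for every write
}
fclose($fp);                     // close once, after the loop
echo "\n".date("Y-m-d H:i:s")."\n";
?>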
Can anybody give an answer or an opinion?
If you need any other information, let me know and I can find out.
Thank you!