Hi,
On a moderately busy site I'm constantly updating a single file: every page request rewrites it. Using a database is not an option here.
Very rarely I run into collisions that leave the file corrupted, so I'm looking for a collision-safe way to do this.
What the script does right now is this:
// $s_grouppath is the destination file
$s_tmppath = $s_grouppath . '.tmp';

// check whether there's a tmp file lying around and wait until it disappears
$i = 0;
while (is_file($s_tmppath)) {
    $i++;
    usleep(100000); // 100 ms per try (usleep() takes microseconds)
    if ($i > 9) {
        // waited about a second, assume the tmp file is stale and remove it
        @unlink($s_tmppath);
    } // end: if
    clearstatcache();
} // end: while

// write the new content to $s_tmppath [...]
@copy($s_tmppath, $s_grouppath);
@unlink($s_tmppath);
What I believe is happening is a race condition between the while loop and the actual write: two requests can both find no tmp file, leave the loop at the same moment, and then write over each other.
Any suggestions for improvement?
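One alternative I've been toying with is writing to a uniquely named temp file and then rename()ing it over the destination, since rename() within one filesystem is supposed to be atomic. A rough sketch, not production code (the function name is just for illustration):

function write_atomically($s_grouppath, $s_data)
{
    // tempnam() creates the file with a unique name, so two
    // concurrent requests never share a temp file
    $s_tmppath = tempnam(dirname($s_grouppath), 'grp');
    if ($s_tmppath === false) {
        return false;
    }
    if (file_put_contents($s_tmppath, $s_data) === false) {
        @unlink($s_tmppath);
        return false;
    }
    // rename() within one filesystem is atomic: readers see either
    // the complete old file or the complete new one
    if (!rename($s_tmppath, $s_grouppath)) {
        @unlink($s_tmppath);
        return false;
    }
    return true;
}

That way there is no shared .tmp path for two requests to fight over.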
A while ago the flock() function proved unreliable, but I haven't looked into it for several years. Have those problems been solved? Are there other suggestions?
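For reference, the flock() version I'd try would look roughly like this, assuming every writer goes through the same code path (flock() locks are only advisory) and PHP is recent enough for the 'c' fopen mode; again, just a sketch with an illustrative function name:

function write_locked($s_grouppath, $s_data)
{
    // 'c' opens for writing without truncating and creates the
    // file if it doesn't exist yet
    $fp = fopen($s_grouppath, 'c');
    if ($fp === false) {
        return false;
    }
    // block until we hold the exclusive lock
    if (!flock($fp, LOCK_EX)) {
        fclose($fp);
        return false;
    }
    // truncate only once the lock is held, then write the new content
    ftruncate($fp, 0);
    fwrite($fp, $s_data);
    fflush($fp);
    flock($fp, LOCK_UN);
    fclose($fp);
    return true;
}

Since the truncate happens only after the lock is acquired, no writer should ever see a half-written file.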
Thanks in advance,
Dominique