I have a system in place that seems to be working, but I can predict that the current process will take longer on large directories, and I can't think of a faster way to do it.
WHERE I WAS
I'm creating a CMS (content management script). I initially developed the script to work with MySQL databases, but found, over the course of 2 years of web site design, that I have a lot of small projects I could easily handle by placing the files into a directory and reading the directory like a database.
WHERE I AM
I've updated the CMS to deal with directories in (what I think is) an uncharted way.
When a query string comes in requesting a file listing, I:
1) create (IF NOT EXISTS) a table named 'dir' + the directory name (replacing "/" with "_")
2) truncate the table (if it already existed and had data)
3) retrieve all the files + info + some other stuff :-) and do one giant INSERT into the table (rough sketch below this list)
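For reference, here's a minimal sketch of that create/truncate/repopulate step. Since the original script's language and helpers aren't shown, this is Python using the mysql-connector package with a made-up connection and a guessed column set; it only illustrates the flow, not the actual implementation.

import os
import mysql.connector  # assumption: MySQL reached via the mysql-connector package

def repopulate_dir_table(conn, directory):
    # table name is 'dir' + the directory name with "/" replaced by "_"
    table = 'dir' + directory.replace('/', '_')
    cur = conn.cursor()

    # 1) create the table if it does not exist yet (columns are a guess)
    cur.execute(
        "CREATE TABLE IF NOT EXISTS `%s` "
        "(base VARCHAR(255), ext VARCHAR(16), size INT, mtime INT)" % table)

    # 2) throw away any rows left over from the last listing
    cur.execute("TRUNCATE TABLE `%s`" % table)

    # 3) walk the directory, collect one row per file, insert them all at once
    rows = []
    for entry in os.scandir(directory):
        if entry.is_file():
            base, ext = os.path.splitext(entry.name)
            st = entry.stat()
            rows.append((base, ext.lstrip('.'), st.st_size, int(st.st_mtime)))
    cur.executemany(
        "INSERT INTO `%s` (base, ext, size, mtime) "
        "VALUES (%%s, %%s, %%s, %%s)" % table,
        rows)
    conn.commit()

Collecting the rows first and sending them in one batch matches the "one giant INSERT" idea and keeps the database round trips down.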
Once the data is in the table, I can query the file details with SQL query calls like:
SELECT * FROM dir_this_dir_is_really_deep WHERE base RLIKE 'abc' AND ext = 'gif' AND size > 100 ORDER BY base ASC LIMIT 0, 10
This part works GREAT! I now can template out file listings.
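Just to show how a listing like that might be consumed, here is a hedged sketch that runs the same example query through the hypothetical Python connection from the sketch above and loops over the rows; the CMS's real templating code isn't shown here.

cur = conn.cursor()
cur.execute(
    "SELECT base, ext, size FROM dir_this_dir_is_really_deep "
    "WHERE base RLIKE %s AND ext = %s AND size > %s "
    "ORDER BY base ASC LIMIT 0, 10",
    ('abc', 'gif', 100))
for base, ext, size in cur:
    print("%s.%s (%d bytes)" % (base, ext, size))  # stand-in for the template output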
However (and this is the main reason I have to truncate and repopulate every time), the files in the directory will change, so repopulating is necessary.
WHAT I AM LOOKING FOR
I've benchmarked the process and the only bottleneck I see is the disk access time.
10 files: ~0.05 seconds
100 files: ~0.5 seconds
1000 files: ~2 seconds
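For context, numbers like the ones above could be gathered with something like this rough timing sketch; it reuses the hypothetical repopulate_dir_table helper from the earlier sketch and is not the benchmark harness actually used.

import os
import time

def time_listing(conn, directory):
    # time the raw directory scan and the full scan-plus-insert separately,
    # to see how much of the total is pure filesystem access
    t0 = time.perf_counter()
    n = sum(1 for e in os.scandir(directory) if e.is_file())
    t1 = time.perf_counter()
    repopulate_dir_table(conn, directory)  # hypothetical helper from the sketch above
    t2 = time.perf_counter()
    print("%d files: scan %.3fs, scan + insert %.3fs" % (n, t1 - t0, t2 - t1))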
I'm curious whether anybody has thought about this and has any input. So far everything is working great, but since I code alone, I'd rather bounce this off another 'grammer.
TIA