How big are the files where it fails? (This way, I can run a similar test on my machine).
Before I test, let me ask: is it possible to process the file one line at a time instead of loading the entire file into an array? For example, if you are outputting the file to the screen, just print or echo each line instead of storing it in an array. If you are scanning the text for some value, only add a line to the array if that line will actually be useful later. If you are converting the file in some way, convert each line as you read it and write it straight out (either to the screen or to another text file).
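Roughly, that looks like this (just a sketch; the file name and the "some value" test are placeholders for whatever you are actually doing):

```php
<?php
// Walk the file one line at a time -- only the current line
// (plus any lines worth keeping) ever sits in memory.
$handle = fopen('bigfile.txt', 'r');
if ($handle === false) {
    die('Could not open file');
}

$keep = array();
while (($line = fgets($handle)) !== false) {
    // Outputting? Just echo it instead of collecting it:
    // echo htmlspecialchars($line) . "<br />\n";

    // Scanning? Only store the lines you actually need later:
    if (strpos($line, 'some value') !== false) {
        $keep[] = $line;
    }
}
fclose($handle);
?>
```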
I had this problem once on very large files (10-20MB) and it was taking 20-30 minutes to process. What I did was this (a rough sketch follows the list):
- Set the max execution time to 30 mins
- Write a message to a table that said "processing"
- Print a meta refresh on the page that forced the browser to jump to a second page.
- Start processing the large file
- When done, update the database table to say "done"
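In rough code, the first page looked something like this (the table name, column names, and DB credentials here are made up, so swap in your own):

```php
<?php
// Page 1: flag the job as "processing", send the browser on, then do the work.
set_time_limit(1800);        // 30 minutes
ignore_user_abort(true);     // keep running even after the browser moves on

$db = new mysqli('localhost', 'user', 'pass', 'mydb');   // placeholder credentials
$db->query("UPDATE job_status SET status = 'processing' WHERE job_id = 1");

// The meta refresh pushes the browser to the waiting page right away.
echo '<meta http-equiv="refresh" content="0;url=wait.php">';
flush();

// ... process the large file here, one line at a time as above ...

$db->query("UPDATE job_status SET status = 'done' WHERE job_id = 1");
?>
```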
Then, in the 2nd file (the waiting page), I did this (sketch after the list):
1. Check the database table to see if it said "done"
2. If yes, then print a meta refresh to the 3rd and final page, which said "Done!!"
3. If no, then print a 5-second meta refresh back to the 2nd page so it checks again
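So the 2nd page (wait.php here, using the same made-up table) was basically just this:

```php
<?php
// Page 2 (wait.php): check the status and either finish or refresh again.
$db = new mysqli('localhost', 'user', 'pass', 'mydb');   // placeholder credentials
$result = $db->query("SELECT status FROM job_status WHERE job_id = 1");
$row = $result ? $result->fetch_assoc() : null;

if ($row && $row['status'] === 'done') {
    // Done: jump straight to the final page.
    echo '<meta http-equiv="refresh" content="0;url=done.php">';
} else {
    // Not yet: come back to this same page in 5 seconds and check again.
    echo '<meta http-equiv="refresh" content="5;url=wait.php">';
    echo 'Still processing...';
}
?>
```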