$id = substr(base64_encode(time().uniqid(base_convert(mt_rand(), 10, 36), TRUE)), 0, 32);
Is the above overkill? Not random enough? Too many steps that reduce randomness?
I use both time() and uniqid() to reduce the odds of a collision.
I use base64_encode() instead of md5() because md5 only gives you 0-9 and a-f (16 possible characters, versus 64 with base64), and I then use substr() to chop the result down to 32 characters for a consistent length.
Does combining base64_encode() with substr() greatly reduce the randomness?
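For reference, a minimal sketch that just splits the one-liner into steps and prints the intermediate lengths, to make visible what substr() actually discards (the exact numbers vary a little with mt_rand()'s value):

<?php
// Same construction as the one-liner above, split into steps so the sizes are
// visible: the concatenated input is roughly 35-40 characters, its base64
// encoding is around 50 characters, and substr(..., 0, 32) keeps the characters
// that encode only the first ~24 bytes of the input (time() comes first).

$raw     = time() . uniqid(base_convert(mt_rand(), 10, 36), true);
$encoded = base64_encode($raw);
$id      = substr($encoded, 0, 32);

printf("raw:     %s (%d chars)\n", $raw, strlen($raw));
printf("encoded: %s (%d chars)\n", $encoded, strlen($encoded));
printf("id:      %s (%d chars)\n", $id, strlen($id));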
I don't care about the theoretical re-issuing of the same ID after 12 years of use or anything along those lines; worst case, I can use a log file to store already-issued IDs/tokens to avoid re-issue. I just want to know where my algorithm stands.
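If I do go the log-file route, I imagine something like this minimal sketch (issue_id() and issued_ids.log are made-up names, not existing code):

<?php
// Hypothetical log-file fallback: keep every issued ID in a flat file and
// regenerate on the (unlikely) chance of a repeat.

function issue_id($logFile = 'issued_ids.log')
{
    $issued = is_file($logFile)
        ? file($logFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES)
        : [];

    do {
        $id = substr(base64_encode(time() . uniqid(base_convert(mt_rand(), 10, 36), true)), 0, 32);
    } while (in_array($id, $issued, true));

    // Append the new ID so later calls won't re-issue it.
    file_put_contents($logFile, $id . PHP_EOL, FILE_APPEND | LOCK_EX);

    return $id;
}

$id = issue_id();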
The immediate purpose is to identify files containing HTTP POST data; the IDs will be passed via HTTP GET so the POST data can be retrieved after performing cross-domain session trickery (since POST data doesn't seem to survive all the redirects). These files will have a lifespan of 5 minutes, tops.
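For concreteness, here is a bare-bones sketch of that flow, assuming a temp directory, a made-up landing URL, and JSON files; none of this is settled:

<?php
// Bare-bones sketch of the intended flow: stash $_POST in a file keyed by the
// ID, pass the ID via GET through the cross-domain redirects, then read the
// data back and treat anything older than 5 minutes as expired.
// The directory, target URL and JSON format are placeholders.

$dir = sys_get_temp_dir() . '/post_cache';

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $id = substr(base64_encode(time() . uniqid(base_convert(mt_rand(), 10, 36), true)), 0, 32);

    if (!is_dir($dir)) {
        mkdir($dir, 0700, true);
    }
    // Hash the ID for the file name so base64 characters like '/' stay out of the path.
    file_put_contents($dir . '/' . md5($id) . '.json', json_encode($_POST));

    header('Location: https://other.example.com/land.php?id=' . urlencode($id));
    exit;
}

// Retrieval side, after the redirects bring the ID back via GET.
$id   = isset($_GET['id']) ? (string) $_GET['id'] : '';
$file = $dir . '/' . md5($id) . '.json';

$post = null;
if ($id !== '' && is_file($file) && time() - filemtime($file) <= 300) { // 5-minute lifespan
    $post = json_decode(file_get_contents($file), true);
    unlink($file); // single use
}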
However, in the future I may use the algorithm to key long-lasting order IDs and whatnot, so staying unique/valid for years would be a bonus.