I'm just wondering how [un]lucky we would have to be to generate two identical values with simultaneous calls to:

uniqid( "", rand() );

The first part of uniqid seems to be based on the time. That alone makes it pretty hard to generate the same value twice with sequential calls on a single computer.

The lcg value seems to make it nearly impossible to generate the same value twice with simultaneous calls on the same computer. Two calls would have to happen in exactly the same microsecond and generate the same lcg value on top of that. You would need a really bad random number generator and lots of simultaneous calls to have any real chance of a collision.

Of course I'm going to put a unique index on the column in which I'll put uniqid values. I'm still wondering how likely it is to encounter a collision.

I may also use mt_rand to make the random part better.
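For what it's worth, here is a quick sketch of the variants I'm comparing (the example output values are made up). The manual documents the second argument of uniqid() as a boolean "more entropy" flag, so any extra randomness from mt_rand would have to go into the prefix:

<?php
// Comparing the call variants; comments show roughly what the output looks like.
$plain    = uniqid();                           // e.g. 5f2b8c1a1e240 (time-based part only)
$withLcg  = uniqid('', true);                   // e.g. 5f2b8c1a1e240.12345678 (lcg part appended)
$prefixed = uniqid((string) mt_rand(), true);   // mt_rand() prefix as an extra safety margin

var_dump($plain, $withLcg, $prefixed);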

What are your thoughts on this subject? What is your experience with uniqid()? Is my reasoning too optimistic?

Thanks for your valuable input!

    uniqid exists for the purpose of giving unique IDs, so if it doesn't do a good job of that then it is by definition a bug in PHP. It's always possible that it could generate two identical IDs, but other problems are probably more likely (like your server catching fire, or your userbase dying of mad cow disease, or something). There are no guarantees in life but uniqid should be pretty safe.

    Note that "unique" and "unpredictable" are different requirements. If you need an unpredictable ID, the uniqid manual page suggests running the result through MD5. I suspect that may not be cryptographically secure, though it should be perfectly acceptable for most purposes.
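    Something along the lines of the manual's example should do (sketch only; using mt_rand as the prefix is just what the example there does):

    <?php
    // Hash the ID so its timestamp structure isn't visible to users.
    // Note: md5() only obscures the value, it doesn't add entropy, so
    // don't treat this as cryptographically secure.
    $id  = uniqid((string) mt_rand(), true);
    $key = md5($id);

    echo $key; // 32 hex characters, much harder to guess than the raw uniqid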

    Is there some reason you can't use an auto-incrementing field in your database? That is guaranteed to be unique, even if your server is on fire and your users are dying of mad cow disease.

      I am currently using auto-increments but I want to replace them. I find auto-increments impractical when running batch database updates: all users have to log out before the update is run. Unique strings can avoid this downtime.

      I never run a database update directly on my production db. I generate SQL queries from a test database. When I'm ready, I execute the SQL I generated against the production database. Auto-increment doesn't let me do this smoothly, and I consider that a flaw.
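      To make that concrete, here is a rough sketch of what I mean (the table and column names are made up): generate the INSERT statements against the test database with uniqid keys, then replay exactly the same SQL on production without worrying about clashing auto-increment values.

      <?php
      // Sketch only: build portable INSERT statements keyed by uniqid values.
      $rows = [
          ['title' => 'First article'],
          ['title' => 'Second article'],
      ];

      $sql = '';
      foreach ($rows as $row) {
          $uid  = uniqid('', true); // same key works on test and on production
          $sql .= sprintf(
              "INSERT INTO articles (uid, title) VALUES ('%s', '%s');\n",
              $uid,
              addslashes($row['title']) // naive escaping, fine for a sketch
          );
      }

      file_put_contents('batch-update.sql', $sql); // run later on production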

      My db isn't holding critical data and I could very well continue to live with auto increments. But it goes against my religion. I like to develop methods that are as general as possible. I could be working on a "more serious" project next. What I learn now can be applied then.

      Thanks for the reply! Maybe I'll have a look at PHP's source. I am really curious about how uniqid() is implemented.

        Basically it's the current time in seconds and microseconds, written in hex; if you use the lcg parameter, it also appends the output of a pseudorandom number generator with a period of (2^31 - 85) * (2^31 - 249), formed by combining two LCGs, one seeded with the time and the other with the process/thread ID.
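        If you want to see it in userland terms, something like this rough sketch reproduces the idea (not the actual C source, and the formatting of the lcg part is approximate):

        <?php
        // Rough userland sketch of what uniqid() builds: hex seconds + hex
        // microseconds, plus an optional value from the combined LCG when the
        // second argument is true. Formatting details are approximate.
        function uniqid_sketch($prefix = '', $more_entropy = false)
        {
            list($usec, $sec) = explode(' ', microtime()); // "0.12345600 1234567890"
            $id = sprintf('%s%08x%05x', $prefix, (int) $sec, (int) ($usec * 1000000));

            if ($more_entropy) {
                // lcg_value() exposes the same combined LCG to userland code.
                $id .= sprintf('.%08d', (int) (lcg_value() * 100000000));
            }

            return $id;
        }

        echo uniqid_sketch('', true); // e.g. 5f2b8c1a1e240.12345678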
