Thanks for your response, Weedpacket.
Weedpacket;11050095 wrote:I note that uniqid does allow specifying an additional entropy source.
I don't think [man]uniqid[/man] lets you specify your own entropy source; rather, you can pass TRUE as the second param if you want more entropy from whatever source it already derives its output from. My real question is whether it is poor security practice to reveal the output of uniqid to visitors. It seems to me it might give shrewd attackers insight into the entropy generation methods of your server.
Also, asking uniqid for more entropy produces a 23-char output instead of a 13-char one, which would be harder to communicate over the phone. My goal is to generate the shortest, most easily readable random unique ID that I can. This ID will be shared by all log entries made in connection with a single page request, and will be stored in user_friendly_id in this table:
CREATE TABLE `db_log` (
  `id` int(15) unsigned NOT NULL AUTO_INCREMENT,
  `creation_microtime` decimal(16,5) unsigned NOT NULL,
  `user_friendly_id` varchar(13) NOT NULL,
  `severity` enum('INFO','WARNING','DANGER') NOT NULL,
  `log_entry_type` varchar(20) NOT NULL,
  `log_entry` text NOT NULL,
  `related_db_table` varchar(45) DEFAULT NULL,
  `related_db_id` int(15) unsigned DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `log_entry_type` (`log_entry_type`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
log_entry_type is a free-form field that can be used to categorize log entries further, e.g., PAYMENT, CDN_API, or FILE_UPLOAD problems.
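To make the "short, readable, random" goal concrete: one way to fill user_friendly_id without exposing uniqid's output is to draw 13 characters from a cryptographically secure RNG over an alphabet with the easily confused characters removed. This is a Python sketch of the idea (in PHP, random_int() over the same alphabet would do the same job); the alphabet and helper name are my own, not anything from this thread:

```python
import secrets

# Alphabet with ambiguous characters removed (no 0/O, 1/l/I)
# so the ID is easy to read aloud over the phone.
ALPHABET = "abcdefghjkmnpqrstuvwxyz23456789"

def friendly_id(length: int = 13) -> str:
    """Return a random ID drawn from a CSPRNG, independent of server time."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(friendly_id())  # e.g. 'k3mvq7xw2hnpr' (random each run)
```

With a 31-character alphabet, 13 characters give about 31^13 ≈ 2^64 possibilities, which is ample for correlating log entries within one page request and reveals nothing about the server's clock or entropy sources.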
Weedpacket;11050095 wrote:Depends on how the DBMS stores variable-length fields; MySQL InnoDB's "Barracuda" stores variable-length fields separately from the rest of the table and has the option of compressing them. PostgreSQL's TOAST also does this, and also has the option of storing the data within the main table if it is short enough; it's also worth noting that PostgreSQL's character types are all stored the same way regardless of their semantics.
Helpful info, although I'm wondering how many records db_log might hold before it starts to slow things down when the database is hosted from a standard hard drive. A single page request might write 3-5 entries if it runs into trouble that is log-worthy.
Weedpacket;11050095 wrote:If you've got a consistent set of error messages that can be identified by an ID number, then you can store that (possibly with additional columns for variable parts of the error string) along with a lookup table that matches ID to error message text. This would make searching for particular classes of error easier.
My two classification columns in this table are user_friendly_id, which ties together all errors encountered during a single page request, and log_entry_type, which is available for whatever other classification scheme we devs might want.
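For anyone following along, Weedpacket's lookup-table suggestion could be sketched as below. This uses Python's built-in sqlite3 as a self-contained stand-in for MySQL, and the table name, column names, and messages are hypothetical examples, not part of the schema above:

```python
import sqlite3

# Hypothetical lookup table mapping a numeric error code to its message
# text; log rows would then store only the small integer code, and
# searching for a class of error becomes a join/filter on that code.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE log_entry_types (
        id      INTEGER PRIMARY KEY,
        message TEXT NOT NULL
    );
    INSERT INTO log_entry_types (id, message) VALUES
        (1, 'PAYMENT: charge declined'),
        (2, 'CDN_API: upload timed out'),
        (3, 'FILE_UPLOAD: file exceeds size limit');
""")

row = conn.execute(
    "SELECT message FROM log_entry_types WHERE id = ?", (2,)
).fetchone()
print(row[0])  # CDN_API: upload timed out
```

The trade-off versus my free-form log_entry_type column is that codes keep the table compact and make classes of errors easy to query, at the cost of maintaining the lookup table whenever a new error type is introduced.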