I found out about this after testing a database with images, then expanding it to about 100,000 rows and watching a page take 10+ seconds to render.
Create the table:
create table test (id int, data text);
Populate it:
for ($i = 0; $i < 100000; $i++) {
    // MySQL's INSERT ... SET syntax; one row per iteration
    mysql_query("insert into test set id = $i, data = 'test data'");
}
To test:
echo microtime(), "\n";
// fetch every row just to count them
$big_results = mysql_db_query("db_prod", "select * from test");
$big_row_num = mysql_num_rows($big_results);
echo microtime(), "\n";
// let the server count instead of transferring the rows
$small_results = mysql_db_query("db_prod", "select count(id) from test");
$small_row = mysql_fetch_array($small_results);
echo microtime(), "\n";
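For completeness, here is one way to read the actual count out of the second result set; a minimal sketch assuming the same "db_prod" database and an open connection (the $count variable is mine). mysql_result() fetches a single cell, so no array fetch is needed:

$small_results = mysql_db_query("db_prod", "select count(id) from test");
// the count comes back as a single row with a single column
$count = mysql_result($small_results, 0, 0);
echo "rows: $count\n";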
On my test machine the results are:
980497895.76506700 (script is starting)
980497896.42144000 (after mysql_num_rows)
980497896.52827700 (and after using count())
So it took about 0.7 seconds with the mysql_num_rows() method and a bit over 0.1 seconds with count(). While this may not sound like much, try having images in your database (much more data to select) and try to access these pages on a system that's almost always under serious load. In my quick test, count() took about 0.1 seconds regardless of the amount of data, but with just a couple of bytes more per row the mysql_num_rows() method was noticeably slower.
1) There may be a better way to measure how long a script takes than printing microtime(), but I ran the script about ten times and the results were roughly the same (see the sketch after these notes).
2) Is anyone else seeing this occur?
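On the timing question in note 1: microtime() with no arguments returns a "msec sec" string, so the raw echoes above have to be subtracted by hand. A minimal sketch of the usual workaround (the helper name microtime_float() is mine):

function microtime_float()
{
    // microtime() returns "msec sec"; combine the parts into one float
    list($usec, $sec) = explode(" ", microtime());
    return (float)$usec + (float)$sec;
}

$start = microtime_float();
$res = mysql_db_query("db_prod", "select count(id) from test");
printf("count() took %.5f seconds\n", microtime_float() - $start);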