The real answer is, of course, another question: how large, on average, is the count going to be?
If you've got a table where you want to do "select count(*) from table" with no where clause, then it'll be slow. If you mostly run queries like "select state, count(state) from table group by state", then an index can be useful there.
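To make the contrast concrete, here's a rough sketch of the two query shapes, using a hypothetical orders table with a state column:

    -- Full count with no WHERE clause: every row has to be visited,
    -- so the cost grows with the size of the table.
    SELECT count(*) FROM orders;

    -- Grouped count on a low-cardinality column: an index on "state"
    -- can feed the aggregate instead of a full table scan.
    SELECT state, count(state) FROM orders GROUP BY state;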
So it's not so much the number of rows in the table, though that matters, as how many rows end up being fed into the aggregate, since the aggregate still has to walk through every one of them.
The second query I listed above (the one grouping by state) is actually very fast, even on a large table, as long as there aren't many unique values in the field (only 50 states...). The only "data" going into the aggregate should be the index on state, and the index is much smaller, and therefore much faster to read, than a seq scan on the table.
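As a sketch of how you'd set that up (still assuming the hypothetical orders table): an index on the grouped column is what makes it possible, and on current PostgreSQL a recently vacuumed table lets the planner satisfy the grouped count with an index-only scan rather than touching the heap.

    -- Hypothetical index on the grouped column.
    CREATE INDEX orders_state_idx ON orders (state);

    -- Keeping the visibility map fresh helps PostgreSQL answer the
    -- grouped count from the index alone (an index-only scan).
    VACUUM ANALYZE orders;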
In PostgreSQL, EXPLAIN ANALYZE will show you how many rows each part of the query plan passes around, and how wide those rows are. It helps a lot to see those numbers and get a feel for how much data has to move from one step of the plan to the next.
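For example (again with the hypothetical orders table), you'd run something like:

    -- EXPLAIN ANALYZE executes the query and reports, for each plan node,
    -- estimated vs. actual row counts plus the average row width in bytes.
    EXPLAIN ANALYZE
    SELECT state, count(state) FROM orders GROUP BY state;
    -- Look at the "rows=" and "width=" figures on each node to see how
    -- much data each step hands to the next.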