Check into using a two-part (composite) index, and use EXPLAIN to see whether your query planner can actually use it. Depends on how your data are laid out.
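A minimal sketch of that workflow, using Python's built-in sqlite3 as a stand-in (the same idea, a composite index plus EXPLAIN, applies to postgresql or mysql; the table and column names here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER, payload TEXT)")
# Two-part index: leading column a, then b.
conn.execute("CREATE INDEX idx_a_b ON t (a, b)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(i % 100, i, "x") for i in range(1000)])

# Ask the planner how it will run the query; with equality on both
# indexed columns it should pick idx_a_b rather than scanning the table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT payload FROM t WHERE a = 5 AND b = 205"
).fetchall()
for row in plan:
    print(row)
```

In postgresql you'd do the same thing with `EXPLAIN SELECT ...` at the psql prompt and look for an index scan node in the output.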
OK, then you might appreciate this. Let's say you've got a REAL sparse set. Say, 10,000,000 entries, and you're only gonna grab a handful at a time. In this instance, the MVCC table / database will provide adequate, but not lightning-fast, response. Generally in the milliseconds.
But as the number of rows you fetch goes up, you'll start to see the difference in postgresql or any other MVCC system that has to visit the actual table for every row to determine visibility. At some point the MVCC system does better just switching to a sequential scan, especially if you're reading most of the table anyway.
A small record, say < 100 bytes or so, fits into a block (the postgresql default is 8k) many, many times over: roughly 80 rows per block. At that density, if you're hitting 1.25% of the table's rows at random, you're likely gonna touch nearly every block anyway. And if you're hitting 10 to 20% of all your blocks, it's still faster to just grab them all, since otherwise you're working harder to hit the index AND the table.
As the table gets wider, the crossover point moves. If a row is 500 bytes, you can only fit about 16 records in a block, so now it takes about a 6% hit rate to touch every block. Rows aren't often much bigger than that (1k max for most things), and pgsql kicks big text fields out to alternate storage, so if they're not needed they don't get in the way.
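The arithmetic above can be sketched out. The block and row sizes come from the text; the "fraction of blocks touched" uses a standard occupancy estimate, which assumes the selected rows are spread uniformly at random over the blocks (a simplification):

```python
def rows_per_block(row_bytes, block_bytes=8192):
    # 8192 (8k) is the postgresql default block size.
    return block_bytes // row_bytes

def fraction_of_blocks_touched(hit_rate, rows_in_block):
    # Probability a given block contains at least one selected row,
    # assuming uniform random selection: 1 - (1 - hit_rate) ** rows_in_block
    return 1 - (1 - hit_rate) ** rows_in_block

# ~100-byte rows: about 80 per block, so a 1.25% hit rate already
# touches roughly two-thirds of all blocks.
print(rows_per_block(100))                                # 81
print(round(fraction_of_blocks_touched(0.0125, 80), 2))

# ~500-byte rows: only ~16 per block, so the crossover moves out to ~6%.
print(rows_per_block(500))                                # 16
print(round(fraction_of_blocks_touched(0.06, 16), 2))
```

Either way, once a scattered sample touches most blocks, the index is pure overhead on top of reading the whole table.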
Anyway, with myisam tables the data you want is pretty much ALWAYS in the index, so there's no decision about when to switch from the index to the table: if you're only reading indexed columns, the table never gets visited at all.
So, to compare the two: the postgresql index is always a pointer to the real data, and in a sense redundant, since you'll have to hit the data being pointed to eventually, while the myisam index IS the actual data, just as much as what's in the table.
So the price you pay with pgsql (and probably innodb too) is that you get the best performance when scanning very small portions of larger tables, or all of much smaller tables. But when you're hitting most of a wide table, indexes are useless for reads.
myisam lets us get all our data from indexes. Think of it this way: in myisam we could index everything the way we'd always need it and throw away the tables (the tables, not the indexes). For small slices out of much larger sets, it's a bit faster. For larger slices of those sets, it flies.
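That "answer everything from the index" behavior can be demonstrated with the stdlib sqlite3 module (used here only as an illustration; the names are made up). When every column a query needs is in the index, the planner can skip the table entirely, which sqlite's plan output flags as a covering index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, bio TEXT)")
# Index carries both columns the query below will ask for.
conn.execute("CREATE INDEX idx_id_name ON users (id, name)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(i, "user%d" % i, "a long bio ...") for i in range(100)])

# Query touches only indexed columns, so it can be answered from
# the index alone; no table visit needed.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM users WHERE id = 42"
).fetchall()
print(plan)
```

This is essentially what myisam gives you for free on every indexed read, since the index entries hold real column values rather than just row pointers.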
But there's a price to be paid. Myisam's table-level locking model means it's not good at handling concurrent updates: a writer locks the whole table, not just the rows it touches. If you need to update 30,000 rows every 15 minutes, you might experience some update problems.
So, as you increase your rate of updates / inserts WHILE maintaining reasonable read rates, MVCC (innodb, pgsql, firebird, or oracle) will shine. Put simply, it handles concurrency better.
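A toy model of the difference between the two locking schemes, with a dict standing in for a table (this is a sketch of the concept, not how either engine is implemented):

```python
import threading

table = {i: 0 for i in range(4)}
table_lock = threading.Lock()                      # one lock, whole table
row_locks = {i: threading.Lock() for i in table}   # one lock per row

def update_with_table_lock(row):
    # Myisam-style: EVERY writer queues here, even for unrelated rows.
    with table_lock:
        table[row] += 1

def update_with_row_lock(row):
    # Row-granularity: only writers of the SAME row contend.
    with row_locks[row]:
        table[row] += 1

# 100 concurrent writers spread across 4 rows, using row-level locks.
threads = [threading.Thread(target=update_with_row_lock, args=(i % 4,))
           for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(table)   # each of the 4 rows incremented 25 times
```

With `update_with_table_lock` instead, all 100 writers would serialize on the single lock, which is the queueing behavior that hurts myisam under heavy mixed traffic. (Real MVCC goes further still, letting readers proceed without blocking on writers at all.)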
That's why content management systems, with their high read-to-write ratios, run very well on myisam. And why highly interactive, busily updated productivity applications run better on MVCC.