Note that one of the biggest problems for Postgres is code that people port from simpler databases like MySQL: it doesn't use the PostgreSQL features that would make it faster, like btree indexes, subselects, inner joins, stored procedures, and rules.
Instead, they implement those features as application code and multiple queries. The big problem with Postgres seems to be a much higher cost per query, so anything written to hit the database ten times is usually going to be slower than hitting it once, even if that one hit is a much bigger query.
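As a minimal sketch of that point (table and column names here are hypothetical, just for illustration): instead of fetching orders and then looping in application code to fetch each customer, one aliased inner join gets everything in a single round trip.

```sql
-- The slow, ported-from-MySQL pattern: one query for the orders,
-- then one query per row for the matching customer:
--   SELECT id, customer_id, total FROM orders WHERE ship_date = '2000-06-01';
--   SELECT name FROM customers WHERE id = ...;   -- repeated for every order
--
-- The same result in a single aliased inner join, one round trip:
SELECT o.id, o.total, c.name
FROM orders o, customers c
WHERE o.customer_id = c.id
  AND o.ship_date = '2000-06-01';
```

With a btree index on orders.ship_date and on customers.id, the planner can do all the matching server-side instead of paying per-query overhead N+1 times.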
Since queries that use rules, stored procedures, aliased inner joins, transactions, and large objects can be hard to write, many folks never bother, and so never see the performance increases.
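For example, a stored procedure can move per-row logic server-side so it runs in one query instead of a read-modify-write pair from the client. A sketch in PL/pgSQL (the table and function names are made up for illustration; you may need to enable the language first with the createlang utility):

```sql
-- Hypothetical inventory adjustment done entirely in the backend.
-- In 7.0-era PL/pgSQL, arguments are referenced as $1, $2, and the
-- function body is a quoted string.
CREATE FUNCTION add_to_stock (int4, int4) RETURNS int4 AS '
BEGIN
    UPDATE inventory SET qty = qty + $2 WHERE item_id = $1;
    RETURN $2;
END;
' LANGUAGE 'plpgsql';

-- One round trip from the client:
SELECT add_to_stock(1001, 25);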
Also note that Postgres uses a multiple-backend design that scales better on SMP boxes than MySQL's single-process threaded design. While any one query may not be too fast on a 16-way UltraSPARC, the aggregate performance will quickly outperform a MySQL database in a real-world environment like an ecommerce system, where you may need to handle a hundred simultaneous connections to your database.
Postgres is also capable of putting the data for individual databases in different directories, so that large-table apps aren't I/O bound from having everything in one area.
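A rough sketch of how that's set up in the 7.0 era (the environment variable name and database name here are hypothetical, and the variable has to be set in the postmaster's environment before it starts):

```shell
# Prepare an alternate data location on a separate spindle,
# then create a database that lives there instead of under PGDATA.
initlocation PGDATA2
psql template1 -c "CREATE DATABASE bigdb WITH LOCATION = 'PGDATA2'"
```

Splitting busy databases across disks this way spreads the I/O load instead of funneling it all through one filesystem.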
By the way, version 7.0.2 of Postgres is out and I'd highly recommend it: it's noticeably faster, and a couple of really annoying bugs relating to table deletion have been fixed.