It's not that it's that much faster at raw connection speed; it's that it's so much better at responding to hundreds of simultaneous requests at once.
The interesting thing is that PostgreSQL actually gets faster as the first 20 or 40 simultaneous connections ("simos") come online, since the backends can share all the accessed data in one large common cache while running.
Note that individual performance is of course dropping, but aggregate performance is increasing in that range. After 40 or 60 simos, aggregate performance slowly drops until memory starvation, I/O starvation, or kernel handle/resource starvation sets in.
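FWIW, here's roughly the kind of test I mean, as a quick sketch in Python with psycopg2 (my assumption -- the DSN and the "accounts" table are just placeholders, not what I actually ran): each client hammers the same query for a fixed stretch, and you watch aggregate vs. per-client throughput as the client count climbs.

    # Rough concurrency sweep: aggregate queries/sec as simultaneous clients grow.
    # Assumes a local PostgreSQL and the psycopg2 driver; query/table are placeholders.
    import threading, time
    import psycopg2

    DSN = "dbname=test user=postgres"        # assumption: adjust for your setup
    DURATION = 10                            # seconds per run
    QUERY = "SELECT count(*) FROM accounts"  # hypothetical table

    def client(counts, idx):
        conn = psycopg2.connect(DSN)
        cur = conn.cursor()
        end = time.time() + DURATION
        n = 0
        while time.time() < end:
            cur.execute(QUERY)
            cur.fetchall()
            n += 1
        conn.close()
        counts[idx] = n

    for nclients in (1, 10, 20, 40, 60, 80):
        counts = [0] * nclients
        threads = [threading.Thread(target=client, args=(counts, i))
                   for i in range(nclients)]
        for t in threads: t.start()
        for t in threads: t.join()
        total = sum(counts)
        print("%3d clients: %8.1f queries/sec total, %8.1f per client"
              % (nclients, total / DURATION, float(total) / DURATION / nclients))

It's only a client-side sketch (the Python threads spend most of their time waiting on the server), but it's enough to see the aggregate curve rise and then roll over.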
Due to its multi-process design, it is much more robust should a backend fail: only that backend is killed and the buffers are flushed, i.e. only that one page request has an issue, not the whole server.
This design makes PostgreSQL pretty fast and incredibly robust on Linux, but it runs pretty slowly on slowalrus and other multi-thread-oriented OSes.
If you have your db server and web servers set up properly, you can use persistent connections to get about 100k to 1M connects a second. Use regular connections and that drops to 1k to 10k connections a second.
Neither is a real bottleneck, and I find that NOT using persistent connects results in fewer backends running, and PostgreSQL therefore responds faster because there are fewer backends to keep track of. I think that if I had more memory on my big box at work, persistent connects might work better, but I haven't had a chance to test that theory.
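If you want to see the connect-rate difference for yourself, something like this will do -- again a Python/psycopg2 sketch standing in for whatever your web layer actually uses (e.g. PHP's persistent connects), with a placeholder DSN: one loop opens a fresh connection for every request, the other reuses a single connection the way a persistent connect would.

    # Fresh connection per request vs. one reused ("persistent") connection.
    import time
    import psycopg2

    DSN = "dbname=test user=postgres"   # assumption: adjust for your setup
    REQUESTS = 1000

    # Regular connections: connect, query, disconnect every time.
    start = time.time()
    for _ in range(REQUESTS):
        conn = psycopg2.connect(DSN)
        cur = conn.cursor()
        cur.execute("SELECT 1")
        cur.fetchone()
        conn.close()
    print("regular:    %.0f connects/sec" % (REQUESTS / (time.time() - start)))

    # Persistent-style: one connection reused for every request.
    conn = psycopg2.connect(DSN)
    cur = conn.cursor()
    start = time.time()
    for _ in range(REQUESTS):
        cur.execute("SELECT 1")
        cur.fetchone()
    print("persistent: %.0f queries/sec" % (REQUESTS / (time.time() - start)))
    conn.close()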
I'm building 7.1.2 right now to do some synthetic benchmarks, and I'll post the numbers when it's built.
Note that we found a problem in 7.1.2: when indexing words, it's dog slow compared to 7.0.x. Gotta make a bug report this week...
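If anyone wants to poke at it before the report goes in, timing the same index build against a 7.0.x server and a 7.1.2 server should show it. Here's a throwaway sketch (Python/psycopg2 again, and the "words" table and column names are made up, not our actual schema):

    # Time a text-column index build; run the same script against 7.0.x and 7.1.2.
    import time
    import psycopg2

    conn = psycopg2.connect("dbname=test user=postgres")  # assumption
    cur = conn.cursor()

    start = time.time()
    cur.execute("CREATE INDEX words_word_idx ON words (word)")  # hypothetical table
    conn.commit()
    print("index build took %.1f seconds" % (time.time() - start))
    conn.close()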