Scott Marlowe's original answer was excellent and very precise; everybody should read it several times to fully think it through.
The important thing to keep in mind is that identifying bottlenecks can be maddeningly difficult.
Let me give you a real-world example. I used to be editor of a major newspaper Web site that did 700,000 page views per day (complete pages, not just hits). Probably half of those page views were individual news articles, which were built dynamically by a Perl script. (Plain CGI Perl, not mod_perl or FastCGI.)
The Web server (actually, several redundant servers masquerading as one) was protected from net.morons by a firewall. The Sybase database was on a separate server behind yet another firewall.
Now, whenever Joe User wanted to read a story, the request had to go through a firewall to the Web server, which had to fire up Perl, which had to connect through another firewall and log in to Sybase, which had to process several SQL queries per article. The Perl script would then knit the results together with a template and return a page that included embedded image calls and multiple queries against a NetGravity ad server.
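To make that flow concrete, each article request ran something roughly like this. This is a minimal sketch, not our actual code; the DSN, credentials, table names, and parameter names are all invented for illustration:

```perl
#!/usr/bin/perl
# article.cgi -- fired up from scratch on every single page view
use strict;
use warnings;
use CGI;
use DBI;

my $q  = CGI->new;
my $id = $q->param('story_id');    # hypothetical parameter name

# A brand-new Sybase login on every hit: this is the per-request overhead.
my $dbh = DBI->connect('dbi:Sybase:server=NEWSDB', 'www_user', 'secret',
                       { RaiseError => 1 });

# Several queries per article: the story itself, related links, and so on.
my $story   = $dbh->selectrow_hashref(
    'SELECT headline, body FROM stories WHERE story_id = ?', undef, $id);
my $related = $dbh->selectall_arrayref(
    'SELECT story_id, headline FROM stories WHERE related_to = ?', undef, $id);

$dbh->disconnect;    # forget this and connections pile up on the Sybase side

print $q->header('text/html');
print render_page($story, $related);

# Knit the query results into the site template (grossly simplified here).
sub render_page {
    my ($story, $related) = @_;
    my $html = "<h1>$story->{headline}</h1>\n<div>$story->{body}</div>\n<ul>\n";
    $html .= qq{<li><a href="/story?story_id=$_->[0]">$_->[1]</a></li>\n} for @$related;
    return $html . "</ul>\n<!-- embedded image and NetGravity ad calls went here -->\n";
}
```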
Let's suppose performance starts to stink. Where is the bottleneck? It could be:
- Web server CPU load from Perl startup.
- Web server memory usage.
- Firewall CPU load and/or memory (and on which firewall?).
- CPU or memory on the Sybase server.
- Overhead from logging in to Sybase.
- Sybase running out of connections. Did we remember to log out, or did we just let them expire?
- Badly written SQL.
- Oops. We dropped an index on a heavily searched table.
- Some net.moron is flooding us with a portscan.
- NetGravity might be having a bad day.
- The Internet connection. Let's not forget that.
I've listed eleven possibilities, and there are probably several dozen more.
One, and only one, of those would be helped by persistent connections, and as Scott pointed out, the cure might very well create a different bottleneck of its own.
I'm not saying persistent connections are bad -- not at all! Just that determining whether they are going to be beneficial is very, very tricky.
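For what it's worth, the one thing persistent connections would change is the login itself. Under mod_perl or FastCGI, a sketch might look like this (connect_cached is a standard DBI method; the DSN and credentials are still invented):

```perl
use DBI;

# In a persistent environment (mod_perl, FastCGI), the process stays alive
# between requests, so DBI->connect_cached can hand back the same live handle
# instead of doing a fresh Sybase login on every page view.
my $dbh = DBI->connect_cached('dbi:Sybase:server=NEWSDB', 'www_user', 'secret',
                              { RaiseError => 1 });
```

That removes exactly one item from the list above, and in exchange every long-lived Web server process now sits on an open Sybase connection, which is precisely the sort of new bottleneck Scott was warning about.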
By the way, in the long run the solution to the problem involved more CPU, more memory, more Internet connectivity, firewall upgrades, persistent database connectivity, upgrades to the ad server hardware and software, and temporary filesystem-based caching of data from the database.
All of them.
Every bottleneck we eliminated exposed a different one -- and in some cases the fix itself created a brand-new one. Like I said, this is maddeningly difficult, and there are no simple right/wrong answers.
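Since the filesystem caching is the least obvious item in that list: it amounted to "save the result to a file and reuse it while it is fresh." Here is a rough, hypothetical sketch of the idea (the cache path and the five-minute TTL are made up, not what we actually ran):

```perl
use strict;
use warnings;
use File::Path qw(make_path);

my $cache_dir = '/var/tmp/article-cache';    # hypothetical location
my $ttl_days  = 5 / (24 * 60);               # five minutes, expressed in days for -M
make_path($cache_dir);

# Return cached HTML for a story if it is fresh enough; otherwise rebuild it.
# $build is a code ref that does the real work (i.e. hits Sybase and the template).
sub cached_article {
    my ($story_id, $build) = @_;
    my $file = "$cache_dir/$story_id.html";

    if (-e $file && -M $file < $ttl_days) {
        open my $fh, '<', $file or die "read $file: $!";
        local $/;                             # slurp mode
        return scalar <$fh>;
    }

    my $html = $build->($story_id);           # only a cache miss touches the database
    open my $fh, '>', "$file.tmp" or die "write $file.tmp: $!";
    print {$fh} $html;
    close $fh;
    rename "$file.tmp", $file or die "rename: $!";   # swap in place so readers never see a partial file
    return $html;
}
```

Even something that crude can keep most repeat views of a hot story from hitting the database at all, which is why it ended up on the list.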