I just want to make a point about premature optimization. There's a big difference between following good design practice and optimizing prematurely, and a lot of the time people don't see it.
A good design practice is to design your app so that the fewest queries do the most work in the database. For example, don't select ids from one table and then run a separate query against another table for each id returned; instead, join the tables in one query and get all the data at once. And add the obvious indexes on the columns you'll be selecting on.
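In code, the difference looks something like this (a rough sketch with hypothetical users and orders tables, using the old mysql_* API):

// The slow way: one round trip to the db per id (N+1 queries total)
$res = mysql_query("SELECT id FROM users WHERE active = 1");
while ($row = mysql_fetch_assoc($res)) {
    $uid = (int)$row['id'];
    $orders = mysql_query("SELECT * FROM orders WHERE user_id = $uid");
    // ... fetch and display this user's orders ...
}

// The better way: one join, one round trip, all the data at once
$res = mysql_query(
    "SELECT u.id, o.*
     FROM users u
     JOIN orders o ON o.user_id = u.id
     WHERE u.active = 1");

// And the obvious index to support it:
// CREATE INDEX orders_user_id_idx ON orders (user_id);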
Once your server is up and running, the next common mistake is to wait until things start to break before optimizing. The moment it's up and working is the time to start analyzing how it performs.
It's rather easy to add some code to log the time different sections of a page take to execute. You can even put the timings in HTML comments, so you have something like this in each page:
<!-- header: running total: 2.35 ms -->
more page html goes here...
<!-- First query: running total: 140.2 ms -->
display stuff, build a form, etc...
<!-- Page finished rendering, running total: 198.3 ms -->
The cost of adding these to a page is pretty small, and you can always key them off a debug variable somewhere if you want to turn them off.
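Here's a minimal sketch of one way to do it; the $debug_timing flag and checkpoint() helper are made up for illustration:

<?php
// Start the clock as early in the page as possible.
$page_start = microtime(true);
$debug_timing = true; // hypothetical flag; set to false to silence the comments

// Emit a running total as an HTML comment at interesting points.
function checkpoint($label) {
    global $page_start, $debug_timing;
    if (!$debug_timing) return;
    $elapsed_ms = (microtime(true) - $page_start) * 1000.0;
    printf("<!-- %s: running total: %.2f ms -->\n", $label, $elapsed_ms);
}

checkpoint('header');
// ... run the first query, build the page ...
checkpoint('First query');
// ... display stuff, build a form, etc ...
checkpoint('Page finished rendering');
?>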
Logging them, along with the page name and/or GET URL, to some centralized logger might help for analysis. Or you can just run the pages by hand while the site's under some kind of nominal user load.
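For the centralized version, even a one-liner to the PHP error log (or whatever logger you've got) gives you something to grep and sort later; this reuses the hypothetical $page_start from the sketch above:

// One line per request: page, query string, total render time
error_log(sprintf("timing %s?%s total: %.2f ms",
    $_SERVER['SCRIPT_NAME'],
    $_SERVER['QUERY_STRING'],
    (microtime(true) - $page_start) * 1000.0));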
Lastly, a lot of work has gone into newer versions of PHP, Linux, Windows, MySQL, PostgreSQL, and the other tools you might be using. You don't have to run the latest bleeding edge, but let's face it: a modern machine running PHP 5.2, PostgreSQL 8.1, MySQL 5.0, a Linux distro on a late-model kernel (RHEL4, FC6, the latest stable Debian), a late-model BSD, or Windows XP is gonna run a heck of a lot faster than the same machine running Red Hat 5.2, PHP 3.0, PostgreSQL 7.2, MySQL 3.23, Windows NT, or an early flavor of BSD, etc.
Configure your database server to log long-running queries, grab those queries, and run EXPLAIN ANALYZE on them (ANALYZE is a PostgreSQL-ism) to find out why they're slow and whether to tune the db or rewrite the query. It's a vital step in maintaining good application performance.
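In PostgreSQL that's one setting, then you paste a logged query behind EXPLAIN ANALYZE by hand. The 200 ms threshold and the query below are just examples (MySQL 5.0 has an equivalent slow query log via long_query_time in my.cnf):

# postgresql.conf -- log any statement that runs longer than 200 ms
log_min_duration_statement = 200

-- then, in psql, on a query the log flagged:
EXPLAIN ANALYZE
SELECT u.id, o.*
FROM users u
JOIN orders o ON o.user_id = u.id
WHERE u.active = 1;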
My first web/db server was a dual PPro 200MHz with 256M of RAM and 6x9 Gig Ultra SCSI hard drives. That machine was fast back in its day. Before long we'd replaced it with a dual P-IV 2800MHz machine with 2 Gigs of RAM, and the old box became our testing and integration server. We caught problems fast, because if the old machine couldn't run your app quickly all by itself, the new server wasn't going to see that app until you'd fixed it. We had about 150 custom applications on the main server, btw, so one bad apple could ruin everybody's day. I highly recommend developing on an underpowered server so you spot problems faster.