And here it is, the long-awaited input :-)
The simplest way to check if your current setup can take a beating is to beat it.
Set up a few client systems, run a program like Apache's ab (ApacheBench) or some other load-simulating tool, and hit the server with a full-frontal assault, far exceeding the load you hope to get.
Bottlenecks will show up in no time.
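If you don't have ab handy, the same idea can be sketched in a few lines of Python: fire a burst of concurrent requests and see how the server holds up. This is a minimal stand-in, not a real benchmark — it spins up a throwaway local HTTP server to have something to hit; against your real setup you'd point the URL at your webserver instead.

```python
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class QuietHandler(SimpleHTTPRequestHandler):
    def log_message(self, *args):
        pass  # silence per-request logging so the output stays readable

# Throwaway local server standing in for the webserver under test.
server = ThreadingHTTPServer(("127.0.0.1", 0), QuietHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def hit(_):
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as r:
        return r.status

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:   # 8 concurrent "clients"
    statuses = list(pool.map(hit, range(200)))    # 200 requests total
elapsed = time.perf_counter() - start

print(f"{statuses.count(200)}/200 OK in {elapsed:.2f}s")
server.shutdown()
```

Crank the worker count and request total well past your expected peak and watch what breaks first.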
Meanwhile, here are a few pointers from the high-load survival guide.
1. Get rid of as many 'useless' dynamic things as possible. Yes, I mean the images in the database. Your webserver is full of caching thingies that speed up the loading of filesystem data, but none of that works for database data, so your images will always be retrieved from the database.
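One way out is to materialise each DB-stored image onto disk the first time it's requested, and let the webserver serve the file directly ever after. A rough Python sketch of that idea (sqlite3 and the `images` table are stand-ins for whatever your real setup looks like):

```python
import os
import sqlite3
import tempfile

cache_dir = tempfile.mkdtemp()  # stands in for a directory under the docroot
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (name TEXT PRIMARY KEY, data BLOB)")
conn.execute("INSERT INTO images VALUES ('logo.png', ?)", (b"\x89PNG fake bytes",))

def image_path(name):
    """Copy a DB-stored image to disk once; later hits never touch the DB."""
    path = os.path.join(cache_dir, name)
    if not os.path.exists(path):
        row = conn.execute(
            "SELECT data FROM images WHERE name = ?", (name,)
        ).fetchone()
        with open(path, "wb") as f:
            f.write(row[0])
    return path

print(image_path("logo.png"))  # first call queries the DB and writes the file
print(image_path("logo.png"))  # second call is a pure filesystem hit
```

Better still, of course, is to store images on the filesystem in the first place and keep only their paths in the database.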
If you have to deal with a lot of database actions, get yourself a big database server. There is no such thing as "big enough". In particular, give it loads of memory, so it can keep all its tables in memory. (memory is much faster than HD)
If your database server is not too small (remember, big enough doesn't exist) it should be fast enough (tricky word to use here) to allow lots of small queries.
Database servers also have cache, so if you ask the same question often, the answer comes very quickly.
Design your database. Take time to think about what you are doing, and whether it all makes sense. Use the "explain" command in SQL to see just how your database is handling your queries. Try to make it so that there are as few sequential scans as possible. (A sequential scan means the database could not use any index to help it find the data, so it had to go through all the data to find what it was looking for.) In short: build indexes, and build them wisely.
Tim has posted a few articles about proper database design, read them.
Locking. MySQL, for example, is notoriously awful at locking. Try to prevent queries that lock the tables for a long time (1 second is a long time :-) )
During a lock, all queries are put on hold, and when the lock is removed... vooom!
High-load city.
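You can watch this pile-up happen in miniature with sqlite3, which uses coarse whole-database write locks (a crude stand-in for MySQL's table locks, but the queuing behaviour it demonstrates is the same):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "lock.db")
# isolation_level=None = manual transaction control; short timeout so the
# blocked writer gives up quickly instead of waiting the default 5 seconds.
a = sqlite3.connect(path, timeout=0.1, isolation_level=None)
b = sqlite3.connect(path, timeout=0.1, isolation_level=None)
a.execute("CREATE TABLE t (x INTEGER)")

a.execute("BEGIN IMMEDIATE")          # writer 'a' takes the write lock...
a.execute("INSERT INTO t VALUES (1)")

try:
    b.execute("BEGIN IMMEDIATE")      # ...writer 'b' queues up behind it
    blocked = False
except sqlite3.OperationalError:      # "database is locked" after the timeout
    blocked = True
print("second writer blocked:", blocked)

a.execute("COMMIT")                   # lock released...
b.execute("INSERT INTO t VALUES (2)") # ...and the queued work rushes through
```

Every query stuck behind that lock is a client holding a connection open, which is exactly how the pile-up snowballs.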
and a few loose notes:
Objects take more memory and may be slower because they need to be constructed, etc. But that does not mean they are bad in a high-load environment. Your primary concern is that the script needs to work. If using objects lets you keep your code running, then objects are the way to go.
Slow but working is better than fast but not quite working.
What is an acceptable number of db calls per page? Really can't tell.
If you have a 486-based DB server, you'd try to go for just one call. If you have a quad Sparc 800 with 8GB of memory and you are talking to Informix via IPC, 100 would be acceptable. On the other hand, if you'd be selecting 45M rows of 200 kbyte each....
So you see, it really comes down to how fast you can handle your queries. Five fast queries or two slow ones can add up to the same total time.
Just remember that the longer a query takes, the more concurrent queries you will get, and the slower your database server becomes...
The link speed between the DB and webserver is 7Mbit/sec; that's more than enough. You'll be pushing your webserver to the limit before you get to 2-3MByte/sec (you do have a crosslink or switch there, not a hub, I hope?)
Did I mention your webserver needs loads of memory too?
Run top or sar if you have it, and check the load while you beat the server with a stick.
Install more memory, then use pconnect to save loads of time on the database connections themselves. Note, this really eats memory, but it can knock milliseconds off your PHP execution times.
Also, you can't do without the Zend Optimizer. That's another 50-100% speed gained in PHP.
There, that should keep you busy...
If I F'ed up, the community will correct me in its ever-polite fashion :-)