Ok, you've established that optimisation is a desirable thing to do; now you need to be able to measure quantitatively how much difference it's making.
You can of course benchmark things on your development server, which is good for measuring the effectiveness of specific optimisation attempts, but instrumenting the production server is the best way to find out where to start.
Normally I'd configure Apache to record the time taken for each (PHP) request; you can do this easily with a custom log (which I normally call time_log). This records the date/time of each request, what was requested, and the time taken in microseconds. You can also have this log record only .php requests, which keeps it smaller and easier to analyse; I assume you have a high-traffic site, so your normal logs will be many megabytes per day.
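A minimal sketch of what that might look like on Apache 2.x (the log path and format nickname are my own choices; %D is Apache's time-taken-in-microseconds format directive):

    # Log timestamp, request line and time taken in microseconds (%D),
    # but only for requests whose URI ends in .php.
    SetEnvIf Request_URI "\.php$" php_request
    LogFormat "%t \"%r\" %D" timing
    CustomLog /var/log/apache2/time_log timing env=php_request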
You can then analyse this data to find the typical/average/maximum time taken by individual pages, and target the worst offenders specifically.
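As a rough sketch, a small PHP script along these lines will summarise per-page timings (the log path and line format match the example above; both are assumptions, so adjust to taste):

    <?php
    $stats = [];
    foreach (file('/var/log/apache2/time_log') as $line) {
        // e.g. [01/Jan/2024:00:00:00 +0000] "GET /index.php?id=1 HTTP/1.1" 123456
        if (!preg_match('~"\w+ (\S+)[^"]*" (\d+)\s*$~', $line, $m)) {
            continue;
        }
        $url = strtok($m[1], '?');   // fold query-string variants together
        $us  = (int) $m[2];
        if (!isset($stats[$url])) {
            $stats[$url] = ['n' => 0, 'sum' => 0, 'max' => 0];
        }
        $stats[$url]['n']++;
        $stats[$url]['sum'] += $us;
        $stats[$url]['max']  = max($stats[$url]['max'], $us);
    }
    // Sort by total time consumed, so the biggest overall costs come first.
    uasort($stats, function ($a, $b) { return $b['sum'] <=> $a['sum']; });
    foreach (array_slice($stats, 0, 20, true) as $url => $s) {
        printf("%-40s hits=%6d avg=%8dus max=%8dus\n",
               $url, $s['n'], $s['sum'] / $s['n'], $s['max']);
    }

Sorting by total time rather than average means a moderately slow page that's hit thousands of times a day ranks above a very slow page that's hit twice.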
Back on your development server, you can then work on those specific pages to improve execution time.
Of course, to know what to change, you may find it handy to have a profiler.
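Xdebug is one option for PHP; a sketch of the relevant php.ini settings (Xdebug 2 syntax, and the output directory is my own choice):

    ; Generate a cachegrind profile only for requests that set the
    ; XDEBUG_PROFILE GET/POST parameter or cookie, rather than for
    ; every single request.
    xdebug.profiler_enable_trigger = 1
    xdebug.profiler_output_dir     = /tmp/xdebug-profiles

The resulting cachegrind.out.* files can then be opened in KCachegrind or Webgrind to see where the time goes.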
If there are some very long queries (which your PHP profiler may show up), you may want to instrument MySQL as well: it can log slow queries, with the time each took, or indeed all queries.
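A sketch of the relevant my.cnf settings (the variable names are the MySQL 5.1+ spellings; the file path and threshold are my own choices):

    [mysqld]
    slow_query_log      = 1
    slow_query_log_file = /var/log/mysql/slow.log
    long_query_time     = 1   # seconds; set to 0 to log every query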
If a given query is taking a long time, it may be because you've not got appropriate indexes on the queried columns, or because it is doing a full table scan on a large table; running the query under EXPLAIN will tell you which.
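For example (the table and column names here are made up for illustration):

    -- 'type: ALL' in the EXPLAIN output indicates a full table scan
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

    -- if so, an index on the filtered column usually fixes it
    ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);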
Sadly, you often need production-sized data on your development server to measure performance realistically. I personally hate having real user data in dev (which I really, really try to avoid), so I'd rather take a copy of the production DB, wipe out all personal/sensitive info, then load that sanitised copy into the local dev/performance-test database. That way you get a realistically sized data set without the real user data.
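As a sketch of what the scrubbing step might look like (a hypothetical users table; adapt to your own schema):

    -- Overwrite personal fields with synthetic values; row counts and
    -- data sizes stay realistic for performance testing.
    UPDATE users
       SET email     = CONCAT('user', id, '@example.invalid'),
           full_name = CONCAT('Test User ', id),
           phone     = NULL;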
Mark