My comments on more than one reply:
" I assumed we were dealing with fairly simple dataset that was reasonably organized."
Assumption is the mother of all f*ckups. 🙂
"That said with my p2 300 machine with 64 mb ram running the 2.4 linux kernel I have clocked mysql doing over 600 transactions per second"
Nice stats, but quite meaningless.
Try adding a few hundred thousand records and a few JOIN/ORDER BY queries; you'll soon find that 600 drops to 20-30 tops.
You cannot say anything about the performance of a database without knowing the database structure, the number of records and the types of queries.
The network is 'a' bottleneck, but with only 6 users I refuse to believe that this is the problem.
"with that said..."rubbish" was prolly a bit strong."
My point exactly 🙂
"Plus...if we WERE dealing with 40 million records it would be a disc I/O and RAM issue and not a cpu issue. wrong?"
A disk/RAM issue caused by bad database design. Indexes prevent sequential scans: with a good index you can read from a multi-million row table in a few milliseconds, while without one you can easily write a query that takes a minute on just 1k rows.
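A quick way to check (table and column names here are made up purely for illustration) is to run EXPLAIN on one of the slow queries and see whether MySQL actually picks an index or falls back to a full table scan:

  EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
  -- 'type: ALL' means a full table scan; add an index on the filtered column:
  ALTER TABLE orders ADD INDEX idx_customer (customer_id);
  -- rerun the EXPLAIN: 'type' should change to 'ref' and the rows estimate should drop sharply.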
" If so what time of day is there a pattern to the slowness"
Good point, there could be some cron-based task like 'OPTIMIZE TABLE' that works heavily on the database server, eating away at its CPU cycles.
MySQL is notoriously awful at handling high concurrency; it quickly snowballs to a halt. Well, you've seen that happen already, haven't you. 🙂
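When it gets slow, it's worth looking at what the server is actually busy with at that moment. Nothing here is specific to your setup, it's just the standard peek under the hood:

  SHOW PROCESSLIST;
  -- lists every connected thread plus the query it is running;
  -- a pile of threads stuck in 'Locked', or one long-running OPTIMIZE/ALTER, is a strong hint.
  SHOW STATUS LIKE 'Threads%';
  -- Threads_connected and Threads_running give a feel for the concurrency at that point in time.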
As both machines are located at a hosting company, I think it's safe to say that the servers are 'part of' a network that has a VPN tunnel to vhshah's location. No sane hosting company would set up a VPN between two servers in the same room, let alone between a webserver and its database server. (if they did: smack them over the head with a trout!)
Is it a shared database server or dedicated for you alone?
As for SHOW STATUS, the MySQL manual has a few hints and tips about what is going on when certain values are high (a small check/config sketch follows the list): http://www.mysql.com/doc/en/SHOW_STATUS.html
* If Opened_tables is big, then your table_cache variable is probably too small.
* If Key_reads is big, then your key_buffer_size variable is probably too small. The cache hit rate can be calculated with Key_reads/Key_read_requests.
* If Handler_read_rnd is big, then you probably have a lot of queries that require MySQL to scan whole tables or you have joins that don't use keys properly.
* If Threads_created is big, you may want to increase the thread_cache_size variable. The cache hit rate can be calculated with Threads_created/Connections.
* If Created_tmp_disk_tables is big, you may want to increase the tmp_table_size variable to get the temporary tables memory-based instead of disk based.
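To see where you stand, compare those counters with the current settings and bump whatever looks undersized in my.cnf. The variable names follow the manual page linked above; the numbers below are placeholder examples, not recommendations for your box:

  SHOW STATUS LIKE 'Opened_tables';
  SHOW STATUS LIKE 'Key_read%';
  SHOW STATUS LIKE 'Created_tmp%';
  SHOW VARIABLES LIKE 'table_cache';
  SHOW VARIABLES LIKE 'key_buffer_size';

  # my.cnf, [mysqld] section -- example values only
  [mysqld]
  key_buffer_size   = 64M
  table_cache       = 256
  tmp_table_size    = 16M
  thread_cache_size = 8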
"We were trying to build on the theory of locking affecting the performance."
Simple: Lock = bad.
What kind of operations do you do that 'require' such a lock? And why not change to InnoDB tables? They support transactions (and row-level locking), so you rarely need to lock a whole table yourself.
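Converting is a one-liner per table, and an explicit LOCK TABLES ... UNLOCK TABLES block can then usually be replaced by a transaction. Table and column names below are made up; on older MySQL versions the syntax is TYPE=InnoDB instead of ENGINE=InnoDB:

  ALTER TABLE orders ENGINE=InnoDB;

  BEGIN;
  UPDATE orders SET status = 'shipped' WHERE order_id = 123;
  UPDATE stock  SET qty = qty - 1      WHERE item_id  = 456;
  COMMIT;
  -- either both updates happen or neither does, without locking the whole table for everybody else.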