I'm working on a clustered WordPress site hosted on AWS (ELB, EC2, RDS, S3). I'm trying to put together a stress test before go-live so we can see whether it will support the traffic we're expecting. We want to simulate "10,000 simultaneous users," which I interpret as 10,000 users each clicking a page every 10 seconds or so, meaning the system would need to serve 60,000 page requests per minute, or roughly 1,000 requests per second.
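For concreteness, here's the arithmetic and the kind of single-machine run I have in mind (www.example.com stands in for our site, and whether one box can really drive a concurrency of 1,000 is exactly question 1 below):

```
# 10,000 users x 1 page view every 10 s = 1,000 requests/second
#                                        = 60,000 requests/minute
# One untested way to shape a minute of that load from a single box:
ab -n 60000 -c 1000 https://www.example.com/
```

I'm thinking I'll use ApacheBench (ab) for this and am wondering a few things: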
1) How many of these "users" might a single machine be able to simulate, so I can work out how many machines I need to generate the load? I imagine this depends on CPU power, network connection speed, RAM, and so on. Any thoughts on calculating this in an EC2 context? (I've put a rough calibration sketch at the end of this post.)
2) We don't want a single machine asking for the same URL 10,000 times. Any thoughts on how to randomize page accesses? Do I need a list of all the URLs? I don't see anything in ab that lets one invocation request more than one URL. (Sketch at the end of the post.)
3) Given that I'll probably need numerous machines to generate this kind of load, any thoughts on how to start and stop many of them efficiently? (Sketch at the end of the post.)
4) Can anyone suggest a good sample command line that produces data for some beautiful graphs to wow the client? (Sketch at the end of the post.)
5) What about cookies? ab can apparently send the same fixed cookie with every request, but there doesn't seem to be any way for it to accept cookies from the server and trigger the corresponding cookie-dependent behavior; i.e., it isn't session-aware. (Sketch of a partial workaround at the end of the post.)
6) Does ab just request the HTML document itself, or would it also load all the images, CSS, iframes, etc. that the page references?
7) Do any other considerations come to mind?
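To make the questions above more concrete, here are the rough, untested sketches I have in mind so far; hostnames, AMI IDs, and all the -n/-c numbers are placeholders.

For 1), I'm picturing a calibration run: point ab at a cheap static asset from one EC2 instance and see what rate the generator itself can sustain before it, rather than the server, becomes the bottleneck:

```
# -k reuses connections (HTTP keep-alive), which cuts TCP setup overhead
ab -k -n 20000 -c 200 https://www.example.com/robots.txt
# If one instance tops out at, say, 2,000 req/s, the 1,000 req/s target
# fits on a single box in theory, but several smaller generators spread
# across instances would avoid one NIC/CPU being the limit.
```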
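For 2), a minimal sketch assuming a plain-text urls.txt with one full page URL per line (e.g. pulled from the sitemap) and GNU sort for the random shuffle; each sampled URL gets its own background ab run:

```
sort -R urls.txt | head -20 > sample-urls.txt   # random sample of pages
i=0
while read -r url; do
  i=$((i + 1))
  ab -n 3000 -c 50 "$url" > "result-$i.txt" &   # one ab process per URL
done < sample-urls.txt
wait   # block until every background run has finished
```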
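For 3), one possible approach is the AWS CLI, assuming it's installed and configured and that a load-generator AMI with ab preinstalled already exists (the AMI ID, key name, and instance type below are placeholders):

```
# Launch ten tagged generator instances
aws ec2 run-instances \
  --image-id ami-12345678 \
  --count 10 \
  --instance-type c5.large \
  --key-name loadtest-key \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=role,Value=loadgen}]'

# Later, look up everything tagged as a generator and terminate it
ids=$(aws ec2 describe-instances \
  --filters "Name=tag:role,Values=loadgen" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" --output text)
aws ec2 terminate-instances --instance-ids $ids
```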
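For 4), ab can write its raw data to files that graph nicely: -e dumps a percentile breakdown as CSV (easy to chart in a spreadsheet) and -g dumps per-request timings in a gnuplot-friendly format. A sketch, assuming gnuplot is installed:

```
ab -n 5000 -c 100 -e percentiles.csv -g timings.tsv https://www.example.com/

gnuplot <<'EOF'
set terminal png size 900,500
set output "response-times.png"
set title "Response time per request"
set xlabel "request number"
set ylabel "total time (ms)"
# ttime lands in the 9th whitespace-separated column of ab's -g output
# because the human-readable start time occupies the first five fields
plot "timings.tsv" every ::1 using 9 with lines title "ttime"
EOF
```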
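For 5), the closest I can see is logging in once with curl and then replaying the captured session cookie with ab's -C. This assumes the stock wp-login.php form and a test account, and it only fakes an already-logged-in user; ab still never reacts to Set-Cookie headers mid-run:

```
# WordPress's login form expects its test cookie plus the log/pwd fields
curl -s -o /dev/null -c cookies.txt \
  -b "wordpress_test_cookie=WP+Cookie+check" \
  -d "log=testuser&pwd=testpass&wp-submit=Log+In&testcookie=1" \
  https://www.example.com/wp-login.php

# Pull the session cookie out of the Netscape-format jar (name=$6, value=$7)
cookie=$(awk '$6 ~ /^wordpress_logged_in/ {print $6 "=" $7}' cookies.txt | head -1)

# Replay it on every request; -C just adds a fixed Cookie: header
ab -n 1000 -c 20 -C "$cookie" https://www.example.com/some-page/
```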
All thoughts and discussion welcome.