I have a content distribution script I wrote that works beautifully and handles just about everything I can think of.

Code, docs, etc at:

http://www.shrum.net/code/cats

Now I'm at the point where I want to stress-test it, and the first thing I saw was a rapid decline in output speed as I increased the number of entries I wanted it to process.

This seemed strange to me, as the operations are the same for all the records.

System config can be found here:

http://sysinfo.shrum.net

Here's what I'm seeing (time covers the entire script, start to finish, not just MySQL query time):

 50 entries =  0.5231 secs ~ 96 per sec
100 entries =  1.2029 secs ~ 83 per sec
200 entries =  4.2847 secs ~ 47 per sec
300 entries = 11.5797 secs ~ 26 per sec
400 entries = 18.9632 secs ~ 21 per sec

Is there anything I can do to make this faster? Or is this an issue with the OS / hardware?

I'm not asking for a full-on code review (yet)...I just want to be able to do at least 1000 records without hitting such a bottleneck. I'm hoping it's just an inefficient function call or something simple like that.

TIA

    It's a bit difficult to identify the cause without any code / process information.

    It may be an OS / hardware issue - but I'm not 100% sure what your stress test is doing! Does 50 entries mean there are only 50 items available for distribution?

    You could place timing debug output throughout your code to see if there is a dodgy function, and work out whether it is more of a coding issue.
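
    Something like this minimal sketch would do it (checkpoint() and report_checkpoints() are made-up helper names, and microtime(true) needs PHP 5 - on PHP 4 you'd have to add up the two parts of the string microtime() returns):

    $GLOBALS['checkpoints'] = array();

    // Record a named timestamp.
    function checkpoint($label)
    {
        $GLOBALS['checkpoints'][] = array($label, microtime(true));
    }

    // Print the elapsed time between consecutive checkpoints.
    function report_checkpoints()
    {
        $points = $GLOBALS['checkpoints'];
        for ($i = 1; $i < count($points); $i++) {
            printf("%-20s %.4f secs\n",
                   $points[$i][0], $points[$i][1] - $points[$i - 1][1]);
        }
    }

    checkpoint('start');
    // ... fetch the entries from MySQL ...
    checkpoint('query');
    // ... distribute / process the entries ...
    checkpoint('processing');
    report_checkpoints();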

      What does your memory do when you're hammering the system? I don't know how many "entries" were involved, but when I looked, you had about 10 MB of RAM available; as soon as that's exhausted and you're running on virtual memory, your performance would take a major hit.
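
      If you want to watch it from inside the script, a rough sketch like this would show whether PHP itself is eating the RAM as the batch grows (process_entry() is just a made-up stand-in for your real per-record work, and memory_get_usage() is only there if your PHP build has it enabled):

      foreach ($entries as $i => $entry) {
          process_entry($entry); // stand-in for the real per-record work

          // Report the script's memory footprint every 50 records.
          if ($i % 50 == 0) {
              printf("after %d entries: %.1f MB\n",
                     $i, memory_get_usage() / 1048576);
          }
      }

      Bear in mind that only measures PHP's own allocations - if MySQL or the OS is the one swapping, you'd still need to watch top / vmstat on the server itself.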

        Thanks. I'm looking into the physical/virtual memory suggestion...it makes sense. I've sent my ISP an email (there are only 2 of us on the server...he's very responsive) :-p

        The code is a bit long, which is why I included the link to the area where I maintain it.

        I sorta frowned (at this point in time) on the idea of putting in timers. Having to look over a large number of timed events, multiplied by however many timers I put in to evaluate the more important aspects, seemed a bit over the top. Granted, I should do it at some point; I just wanted to make sure I was clearing the obvious hurdles first.

          Having your ISP review the kernel configuration and tune the server is always a good idea. It is not likely to solve all of the issues unless the thing was totally misconfigured to start with. You may find these articles useful when reviewing your code:

          OOPs and Performance
          Procedural PHP leads to slower apps
          Here is a set of performance benchmarks from one of the article links: how efficient is PHP

          The blog in my first link has some very clear insights into the performance problems that classes and functions can cause if their design has been too generalised with no regard to processing efficiency.
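
          If you'd rather see the overhead on your own box than take the benchmarks on faith, a toy loop along these lines (my own sketch, not from the articles) shows the relative cost of inline code vs. a function call vs. a method call - the absolute numbers will vary with PHP version and hardware, so only the ratios are interesting:

          define('N', 100000);

          function add($a, $b) { return $a + $b; }

          class Adder
          {
              function add($a, $b) { return $a + $b; }
          }

          // Same work three ways, each timed with microtime(true).
          $t = microtime(true);
          for ($i = 0, $x = 0; $i < N; $i++) { $x = $x + 1; }
          printf("inline:   %.4f secs\n", microtime(true) - $t);

          $t = microtime(true);
          for ($i = 0, $x = 0; $i < N; $i++) { $x = add($x, 1); }
          printf("function: %.4f secs\n", microtime(true) - $t);

          $adder = new Adder();
          $t = microtime(true);
          for ($i = 0, $x = 0; $i < N; $i++) { $x = $adder->add($x, 1); }
          printf("method:   %.4f secs\n", microtime(true) - $t);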

          My personal approach to all this is to implement my solution in prototypes until I have the functionality needed. I then re-engineer the whole application purely for performance. In doing so I usually find places where I have gone all the way round the block to get next door.

            Originally posted by Roger Ramjet
            My personal approach to all this is to implement my solution in prototypes until I have the functionality needed. I then re-engineer the whole application purely for performance. In doing so I usually find places where I have gone all the way round the block to get next door.

            Sounds like my approach: roughly, get it working right first, and then worry about it working well.
