Thanks for your response, Weedpacket!
The problem is that the code that calculates the rates (bytes/interval, images/interval, etc.) doesn't live in the cron job, which runs at regular intervals; it lives in a particular subroutine of a long-running process, and that subroutine runs at very unpredictable intervals. The cron job's code is not privy to the data needed to make these rate calculations, and, unless I'm missing something, the erratic subroutine can't simply average the data from the past n intervals, because those n intervals might span a lot of wall-clock time or very little.
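To make that concrete with made-up numbers: if one batch moves 1 GB in 50 seconds (20 MB/s) and the next moves 1 GB in 500 seconds (2 MB/s), a naive average of the two rates says 11 MB/s, but the actual throughput across those 550 seconds is 2 GB / 550 s, or about 3.6 MB/s. Equal weighting lets the short, fast batch dominate.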
More specifically, the end-of-batch routine (which does have access to the number of images fetched and the number of bytes fetched) runs when a batch of 500 jobs has been completed. Given the variability of download sizes, response latencies, remote server load, etc., it might finish a 500-job batch in 50 seconds one time or 500 seconds the next.
The cron job, which runs every minute or every five, would simply read the averages calculated by the end-of-batch routine and stored in a database table.
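In other words, the division of labor would look something like the sketch below (the table name and columns are hypothetical, and I'm using SQLite purely for illustration; the real table could live in any database):

```python
import sqlite3
import time

conn = sqlite3.connect("stats.db")
conn.execute("""CREATE TABLE IF NOT EXISTS batch_rates (
    updated_at     REAL,   -- unix timestamp of the last completed batch
    bytes_per_sec  REAL,
    images_per_sec REAL)""")

def store_rates(bytes_per_sec: float, images_per_sec: float) -> None:
    """End-of-batch routine: overwrite the single stats row."""
    conn.execute("DELETE FROM batch_rates")
    conn.execute("INSERT INTO batch_rates VALUES (?, ?, ?)",
                 (time.time(), bytes_per_sec, images_per_sec))
    conn.commit()

def read_rates():
    """Cron job: just report whatever the last batch left behind."""
    return conn.execute("SELECT * FROM batch_rates").fetchone()
```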
It seems to me that, at the very least, the end-of-batch routine will need to take into account the time required to complete the most recent batch when calculating the weighted average. The number of images processed and the number of bytes downloaded are likely to be very similar for each 500-job batch; it's the time required to execute the batch that will tell most of the story about how fast things are going.
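The per-batch bookkeeping itself is trivial once the elapsed time is captured; something like this minimal sketch (the function name and the sample figures are mine, just for illustration):

```python
def batch_rates(bytes_fetched: int, images_fetched: int, elapsed_sec: float) -> dict:
    """Turn one batch's raw totals into rates; elapsed_sec is the batch's
    wall-clock duration (50 s one run, 500 s the next)."""
    return {
        "elapsed_sec": elapsed_sec,
        "bytes_per_sec": bytes_fetched / elapsed_sec,
        "images_per_sec": images_fetched / elapsed_sec,
    }

# e.g. a 500-job batch that moved 750 MB in 300 seconds:
print(batch_rates(750_000_000, 500, 300.0))
# {'elapsed_sec': 300.0, 'bytes_per_sec': 2500000.0, 'images_per_sec': 1.666...}
```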
It seems pretty clear to me that the most recently calculated bytes/interval figure should be weighted against the existing moving average in proportion to the time required to complete the batch. My calculus muscles are completely atrophied, but I suspect that, to settle on the actual calculation, I need to decide how heavily to weight the most recent batch against the accumulated moving average. Weighting the latest batch lightly may inordinately favor historical values, making for a dull graph, and I'm still not clear how that weighting over time should be labeled on such a graph. At the other extreme, favoring the most recent calculation over all the prior ones would likely make for a very jerky graph, but it could honestly be labeled something like "bytes/sec of most recently executed batch," which might be fine.
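For what it's worth, here is a sketch of the kind of time-weighted update I'm imagining: an exponentially weighted moving average whose decay depends on the batch's elapsed time, so a 500-second batch pulls the average much harder than a 50-second one. The time constant tau is an assumption of mine, a tuning knob rather than anything derived:

```python
import math

def update_ewma(prev_avg: float | None, batch_rate: float,
                elapsed_sec: float, tau: float = 600.0) -> float:
    """Blend the latest batch's rate into the running average.

    The weight on the new batch grows with its elapsed time: a batch
    covering more wall-clock time deserves more influence.
    tau (seconds) controls smoothing: smaller = jerkier, larger = duller.
    """
    if prev_avg is None:          # first batch ever: nothing to blend with
        return batch_rate
    alpha = 1.0 - math.exp(-elapsed_sec / tau)   # in (0, 1), rises with elapsed time
    return alpha * batch_rate + (1.0 - alpha) * prev_avg

# A fast 50 s batch nudges the average; a slow 500 s batch moves it a lot:
avg = update_ewma(None, 20_000_000, 50)   # start at 20 MB/s
avg = update_ewma(avg, 2_000_000, 500)    # the slow batch carries more weight
print(f"{avg/1e6:.2f} MB/s")              # ~9.82 MB/s, below the naive 11 MB/s
```

With tau small, the average chases the latest batch (the jerky graph); with tau large, it barely moves (the dull one). If something like this works out, the graph could then honestly be labeled "time-weighted average bytes/sec."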