Well, the aggregation has to be done either once on a medium to large set, or several times on smaller sets.
Assuming the single large set fits easily into memory and doesn't need to spill to disk for sorting, the one large query should be faster.
On the other hand, if the large aggregation is big enough to spill, then the many small queries may win.
But my money's on the single large aggregate being faster.
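For concreteness, here's a rough sketch of the two shapes being compared, assuming a hypothetical table `orders(customer_id, amount)`:

```sql
-- Hypothetical schema: orders(customer_id, amount).

-- Option A: one large aggregate over the whole set.
SELECT customer_id, SUM(amount) AS total
FROM   orders
GROUP  BY customer_id;

-- Option B: many small aggregates, one query per group,
-- typically issued in a loop from application code.
SELECT SUM(amount) AS total
FROM   orders
WHERE  customer_id = 42;   -- repeated for each customer_id of interest
```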
I use PostgreSQL, which has improved its aggregation speed a lot of late. I'm not that familiar with MySQL's aggregation performance, so if it has an inefficiency in aggregating large data sets, the many small queries could win there.
I'd definitely benchmark the two approaches, both with the data in cache, and with enough time / activity between runs that the test can also be done with empty caches.
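A minimal way to do that in PostgreSQL (the table name is the same hypothetical one as above; adapt the cache-clearing steps to your own setup):

```sql
-- Warm-cache run: EXPLAIN (ANALYZE, BUFFERS) reports execution time
-- and how many pages came from shared buffers vs. disk.
EXPLAIN (ANALYZE, BUFFERS)
SELECT customer_id, SUM(amount) AS total
FROM   orders
GROUP  BY customer_id;

-- In psql, turn on \timing to get wall-clock time per statement,
-- which is handy when looping over the many-small-queries variant.
-- \timing on

-- Cold-cache runs are harder: restart the server to empty
-- shared_buffers, and also drop the OS page cache
-- (e.g. on Linux: sync; echo 3 > /proc/sys/vm/drop_caches).
```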