The only way to benchmark a database is to benchmark it against how it will actually be used. If you are building a system that will have 1000 active users, of which 5% will be committing changes every minute, that is a very different scenario from one in which you pool connections for, say, 20 active readers, with updates occurring once a day at midnight.
So, first you have to decide HOW you'll be using your database, then build an artificial workload that approximates that use, and test against it.
While running such benchmarks, your primary purpose isn't necessarily to get the maximum throughput per second, but to see where the knee in performance occurs — the point where adding load stops adding throughput and starts adding latency.
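To make this concrete, here is a minimal sketch of a concurrency-ramp benchmark. It uses Python's bundled sqlite3 purely as a stand-in for whatever database you are testing, and the table, workload mix (roughly 95% reads, 5% writes), and worker counts are all assumptions for illustration — substitute your own schema and access pattern:

```python
import os
import sqlite3
import tempfile
import threading
import time

def setup_db(path):
    """Create a small illustrative table (stand-in for your real schema)."""
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")  # allow readers alongside a writer
    conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, val INTEGER)")
    conn.executemany("INSERT INTO items (val) VALUES (?)", [(i,) for i in range(1000)])
    conn.commit()
    conn.close()

def worker(path, stop, counter, lock, write_every=20):
    """One simulated client: mostly reads, with a write every Nth operation."""
    conn = sqlite3.connect(path, timeout=5)
    n = 0
    while not stop.is_set():
        n += 1
        row_id = n % 1000 + 1
        if n % write_every == 0:
            conn.execute("UPDATE items SET val = val + 1 WHERE id = ?", (row_id,))
            conn.commit()
        else:
            conn.execute("SELECT val FROM items WHERE id = ?", (row_id,)).fetchone()
    with lock:
        counter[0] += n
    conn.close()

def measure(path, n_workers, duration=0.5):
    """Run n_workers concurrent clients for `duration` seconds; return ops/sec."""
    stop, lock, counter = threading.Event(), threading.Lock(), [0]
    threads = [threading.Thread(target=worker, args=(path, stop, counter, lock))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    time.sleep(duration)
    stop.set()
    for t in threads:
        t.join()
    return counter[0] / duration

if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "bench.db")
    setup_db(path)
    # Ramp concurrency and watch where throughput stops scaling: that's the knee.
    for n in (1, 2, 4, 8, 16):
        print(f"{n:2d} workers: {measure(path, n):10.0f} ops/sec")
```

The absolute numbers from a toy like this mean nothing; the shape of the curve as you ramp workers is what you're after.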
Figure out things like how much memory/cpu/IO bandwidth each connection is consuming so you can model how far you can go with a given piece of hardware.
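As a back-of-envelope example of that kind of modeling, here is a trivial capacity estimate based on per-connection memory cost. The figures (16 GB box, 4 GB reserved for the OS and caches, ~10 MB per connection) are purely hypothetical placeholders; you would plug in what your own benchmark measured:

```python
def max_connections(total_ram_gb, reserved_gb, per_conn_mb):
    """Rough estimate of how many connections fit in RAM on a given box."""
    available_mb = (total_ram_gb - reserved_gb) * 1024
    return int(available_mb // per_conn_mb)

# Hypothetical numbers: 16 GB total, 4 GB reserved, ~10 MB per connection.
print(max_connections(16, 4, 10))  # → 1228
```

The same idea applies to CPU and IO bandwidth: measure the per-connection cost under your workload, then divide it into what the hardware provides, leaving headroom.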
I will go so far as to say that the performance of a database with a single user is a useless thing to know, as it rarely applies to any real-world application.