MySQL does have its problems, but they are not related to speed. 10,000 records is nothing to MySQL. 10 GB, on the other hand, is a little more data than I would trust to MySQL. Have you checked into PostgreSQL? It apparently scales much better than MySQL, although it takes a lot of tuning.
With MySQL, the real question about backup is, "How on earth can you guarantee data integrity if you don't go offline for a backup?" MySQL has no way of keeping related keys consistent across tables, so if you back up in the middle of operations involving INSERTs and UPDATEs, the data can change between the start of the backup and the end, producing all kinds of anomalies. There is only one way to maintain data integrity with a live dump: use "mysqldump -l", which locks all tables before starting the dump and doesn't release them until it's done. Now, this will still allow reads, I believe, but writes cannot occur, so you are essentially down anyway. (http://www.mysql.com/doc/m/y/mysqldump.html)
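A full locked dump is just a one-liner; something like this should do it (the credentials and output path here are only placeholders, adjust to your own setup):

    # -l locks all tables for the duration of the dump, so related rows stay consistent
    mysqldump -l --all-databases -u root -p > /backup/full-dump.sql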
...the load of the dump more or less pulls the entire system down (php times out, etc.) <<
Why is PHP involved in a dump?
Anyway, if you go offline, turn off all processes besides mysqld, and use the "mysqldump -l" switch, the backup goes much more quickly. Essentially the dump takes place at pure file I/O speed.
Secondly, if you can go offline for just a few minutes, you don't even need to use "mysqldump". Just back up your MySQL data directory and you are fine. And don't keep the server offline while writing to tape! Just copy the data to another directory, start your server back up, and then let the tape drive take its time archiving the copied directories. On a decent RAID system, copying 10 GB to another directory should take only a few minutes. Maybe even seconds, if you use a binary dump program instead of simple file copying.
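As a rough sketch (the init script, data directory, snapshot path, and tape device below are assumptions; substitute whatever your system uses):

    # stop mysqld briefly so the table files aren't changing underneath you
    /etc/init.d/mysql stop
    # fast local copy of the data directory; this is the only real downtime
    cp -a /var/lib/mysql /backup/mysql-snapshot
    /etc/init.d/mysql start
    # server is back up; the slow tape write runs against the snapshot copy
    tar cf /dev/st0 /backup/mysql-snapshot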
Another question: how often are you optimizing your tables, or flushing the connections?
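If you haven't been, it's worth running these from the mysql client now and then (the table name is just a placeholder):

    -- reclaims unused space and rebuilds indexes after heavy DELETEs/UPDATEs
    OPTIMIZE TABLE my_big_table;
    -- closes all open tables and flushes them to disk
    FLUSH TABLES;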