There are really three possible ways to get a CONSISTENT backup:
If you use any non-transactional tables (e.g. the default MyISAM) and you are prepared for a short scheduled downtime, use mysqldump with its options to lock the database. This will prevent ANY writes to the database during the backup: any attempted writes will block (be delayed) until the backup completes. (Note that PHP may time out those requests before that happens.)
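For example, a minimal sketch (the database name and the backup user are placeholders, and I'm assuming the account has the LOCK TABLES privilege):

    # --lock-all-tables takes a global read lock for the whole dump,
    # so every write blocks until the dump finishes
    mysqldump -u backup -p --lock-all-tables mydb > mydb-$(date +%Y%m%d).sql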
If you exclusively use transactional tables (e.g. InnoDB), then you can take a backup in a single transaction without locking anything.
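In mysqldump that's the --single-transaction option, which takes a consistent-read snapshot instead of locking; a sketch using the same placeholder names as above:

    # only gives a consistent backup if every table is transactional (InnoDB);
    # any MyISAM tables would still be dumped, but with no consistency guarantee
    mysqldump -u backup -p --single-transaction mydb > mydb-$(date +%Y%m%d).sql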
If you are prepared to set up replication with a slave server, you can do backups on that without blocking the live server - but beware of things which don't replicate correctly. You may want to set up a system to compare the data in the slave(s) with your master, because replication consistency is by no means guaranteed.
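A rough sketch of the slave-side dump (the host name slave1 is a placeholder, and I'm assuming credentials come from an option file such as ~/.my.cnf):

    # pause the SQL thread so the slave's data stops changing, dump, resume;
    # the slave keeps receiving the master's log, so it catches up afterwards
    mysql -h slave1 -e "STOP SLAVE SQL_THREAD;"
    mysqldump -h slave1 --all-databases > slave-$(date +%Y%m%d).sql
    mysql -h slave1 -e "START SLAVE;"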
100 megabytes (I don't know whether that's the size of your data or data + indexes; you didn't specify) is NOT a large database. It is likely that mysqldump'ing it on modern hardware will take less than five minutes.
If you are prepared to have scheduled downtime once per day, you could have your backup script set a flag that stops people using your web site during the backup period. Equally, you could display a message for a period before the scheduled downtime to tell people it's going to happen soon (so that they have time to finish what they're doing).
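A sketch of what that backup script might look like (every path and name here is a placeholder; the web code is assumed to check for the flag file and show a maintenance page while it exists):

    #!/bin/sh
    # credentials are assumed to come from an option file (e.g. ~/.my.cnf)
    touch /var/www/maintenance.flag    # web site shows the downtime notice while this exists
    sleep 60                           # grace period for in-flight requests to finish
    mysqldump --lock-all-tables mydb > /backups/mydb-$(date +%Y%m%d).sql
    rm /var/www/maintenance.flag       # back to normal service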
Our master database is about 1.5 GB as a compressed dump file and takes about 10 minutes to do a full dump (on modern server hardware: a recent Xeon processor, lots of RAM, RAID disks).
Mark