If you had a read of the manual you would see that this sort of replication is based on the slave getting a copy of the binary log in which the master keeps a record of all db changes. The slave then applies those changes to its own copy of the db. Although binary logging does involve some overhead on the master, it is small compared to the overhead of the actual table insert or update. Sending the binary log to a slave consumes only one thread on the master as well, leaving the rest free to get on with its real work.
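For reference, the master side of this usually just needs binary logging enabled and a unique server id in my.cnf, with the slave needing only its own id - something along these lines (the ids and log name here are invented for illustration):

```ini
# master's my.cnf - turn on the binary log and give the server a unique id
[mysqld]
log-bin   = mysql-bin
server-id = 1

# slave's my.cnf - just needs a different unique id
[mysqld]
server-id = 2
```

The slave is then pointed at the master with a CHANGE MASTER TO statement - the manual covers the exact syntax for your version.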
Master/slave, or master/slaves, is a very good way to load balance a busy db: all updates go to one server, with multiple servers handling the reads. Typical db load in most environments is around 8:1 reads to writes.
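To make the read/write split concrete, here is a minimal sketch in Python of the routing logic an application might use - the server names and the list of write verbs are stand-ins for illustration, not a real driver API:

```python
import itertools

class ReadWriteRouter:
    """Send writes to the master and spread reads across the slaves."""

    def __init__(self, master, slaves):
        self.master = master
        # round-robin over the read-only slaves
        self._reads = itertools.cycle(slaves)

    def server_for(self, sql):
        # anything that modifies data must go to the master
        verb = sql.lstrip().split(None, 1)[0].upper()
        if verb in ("INSERT", "UPDATE", "DELETE", "REPLACE", "CREATE", "ALTER", "DROP"):
            return self.master
        return next(self._reads)

router = ReadWriteRouter("master-db", ["slave1", "slave2", "slave3"])
print(router.server_for("UPDATE users SET name='x'"))  # master-db
print(router.server_for("SELECT * FROM users"))        # slave1
print(router.server_for("SELECT * FROM orders"))       # slave2
```

With an 8:1 read/write mix, three slaves like this soak up nearly all the query load while the master only handles the writes and the binary log.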
The manual also has important information about how to script for master failure in that configuration - with one of the slaves being promoted to master, etc.
CSN, my suggestion of master/slave replication was for improved backup, with the added bonus of no downtime or lock conflicts when the backup is run. You get a backup that is as up-to-date as your replication schedule permits.
The other factor was that you could replicate to a backup at a totally different site, giving real disaster protection and recovery. In such a situation the backup server will almost certainly be on a different network, or subnet, and so dns will need to be updated when you switch over to it.
Now, if it is a change to the 'glue' record at the registry, then yes, dns will take a long time to propagate through the net. If it is only a dns update on your own name servers then you can flush the old record and the update is done. Of course, routers with dns caches may delay things, but with a short TTL this should not be for long. If it really did become a problem then someone has misconfigured the router cache. Even then, an admin worth his salt could force the dns update through the routing protocol being used. (would have been RIS when I was administering web infrastructure but I don't know what they use now)
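On the TTL point: a caching resolver or router will keep serving the old record until the cached entry's TTL runs out, so the worst-case delay is simply whatever TTL the record was published with. A little Python sketch of that caching behaviour (the hostnames, addresses and timestamps are invented for illustration):

```python
class DnsCache:
    """Toy resolver cache: an entry is served until its TTL expires."""

    def __init__(self):
        self._cache = {}  # name -> (ip, expiry_time)

    def resolve(self, name, now, authoritative_lookup, ttl):
        ip, expires = self._cache.get(name, (None, -1))
        if now < expires:
            return ip                      # still fresh - old answer keeps being served
        ip = authoritative_lookup(name)    # TTL expired - ask the name server again
        self._cache[name] = (ip, now + ttl)
        return ip

cache = DnsCache()
records = {"www.example.com": "192.0.2.1"}       # authoritative zone (made up)
lookup = lambda name: records[name]

print(cache.resolve("www.example.com", 0, lookup, 300))    # 192.0.2.1, cached for 300s
records["www.example.com"] = "198.51.100.7"                # site moves to the backup network
print(cache.resolve("www.example.com", 200, lookup, 300))  # 192.0.2.1 - stale, TTL not yet expired
print(cache.resolve("www.example.com", 301, lookup, 300))  # 198.51.100.7 - cache refreshed
```

Which is why, if a site move is planned, you drop the record's TTL well in advance: once the short TTL is what everyone has cached, the stale window after the switch is only that long.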