"can you please inform me whether this "synching" is going to choke the server performance?"
That depends entirely on how much data changes over a given period, and how often you want to sync.
An rsync process simply copies files from one machine to another over the network, and it only transfers the parts of files that have actually changed. So if you have a 100 Mbit link between the machines, you can push roughly 7 MB/second in practice, which is more than enough for most setups.
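To make that concrete, here's a minimal sketch of a sync loop. The paths, hostname and interval are made up, it assumes passwordless SSH keys are set up, and in real life you'd just run rsync from cron instead of a hand-rolled loop:

```python
#!/usr/bin/env python3
"""Minimal sketch: mirror a directory to the backup box every few minutes.

All paths/hosts are hypothetical -- adjust to your own setup.
"""
import subprocess
import time

SRC = "/var/www/"                      # hypothetical: directory to mirror
DEST = "backup.example.com:/var/www/"  # hypothetical backup host
INTERVAL = 300                         # seconds between sync runs

while True:
    # -a: preserve permissions/times, -z: compress, --delete: mirror removals.
    # rsync only sends the changed parts of changed files, so steady-state
    # runs are cheap when little data has changed.
    result = subprocess.run(["rsync", "-az", "--delete", SRC, DEST])
    if result.returncode != 0:
        print(f"rsync exited with {result.returncode}")
    time.sleep(INTERVAL)
```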
As for the database, most databases have their own replication scheme (MySQL and PostgreSQL both ship with built-in replication, for instance); you'll have to look into that yourself.
The simplest version of 'taking over' is to have the backup server assume the identity of the primary server when the primary goes down: just make the backup change its IP to that of the primary server.
Problem: if the primary doesn't go all the way down, i.e. its IP stack remains active, then you'd have two machines at the same IP, which is bad.
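Here's a deliberately naive sketch of that takeover, run on the backup. The interface name, the addresses, and the "a failed ping means down" check are all assumptions; real setups use tools like Heartbeat or keepalived instead. It also shows exactly where the split-brain problem comes from:

```python
#!/usr/bin/env python3
"""Naive IP-takeover sketch, run on the backup server (needs root).

Everything here is an assumption: interface name, addresses, and the
idea that a failed ping means 'down'. If the primary is merely
unreachable from here but still up, both machines end up claiming
the same IP -- the split-brain problem described above.
"""
import subprocess
import time

PRIMARY_IP = "192.0.2.10"   # hypothetical primary address to take over
INTERFACE = "eth0"          # hypothetical interface on the backup

def primary_alive() -> bool:
    # One ping, 2-second timeout; crude, but it illustrates the idea.
    return subprocess.run(
        ["ping", "-c", "1", "-W", "2", PRIMARY_IP],
        stdout=subprocess.DEVNULL,
    ).returncode == 0

while primary_alive():
    time.sleep(5)

# Primary looks dead: claim its IP on our interface.
subprocess.run(["ip", "addr", "add", f"{PRIMARY_IP}/24", "dev", INTERFACE])
# A gratuitous ARP announcement (e.g. arping -U) would also be needed
# so the LAN learns the new MAC for this IP; omitted for brevity.
print(f"took over {PRIMARY_IP} on {INTERFACE}")
```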
In those cases you'll need some extra hardware (it can be a third server) that acts as a gateway. The outside world always talks to the gateway, and the gateway reroutes the requests to the primary server. If the primary server goes down, the gateway notices (through heartbeats) and redirects all requests to the backup server instead.
That way both the primary and the backup can be 'alive' all the time and never have to assume each other's identity.
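A toy version of such a gateway, just to illustrate the idea: a TCP forwarder that health-checks the primary per connection and falls back to the backup. The addresses are invented, error handling is omitted, and in real life you'd use something like HAProxy or Linux Virtual Server rather than rolling your own:

```python
#!/usr/bin/env python3
"""Toy gateway sketch (Python 3.8+): forwards TCP to the primary if it
answers a connection 'heartbeat', otherwise to the backup.
Addresses are hypothetical; listening on port 80 needs root."""
import socket
import threading

PRIMARY = ("192.0.2.10", 80)   # hypothetical primary web server
BACKUP = ("192.0.2.11", 80)    # hypothetical backup web server
LISTEN = ("0.0.0.0", 80)

def alive(addr) -> bool:
    # "Heartbeat": can we open a TCP connection within 2 seconds?
    try:
        socket.create_connection(addr, timeout=2).close()
        return True
    except OSError:
        return False

def pump(src, dst):
    # Shovel bytes one way until either side closes.
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

server = socket.create_server(LISTEN)
while True:
    client, _ = server.accept()
    target = PRIMARY if alive(PRIMARY) else BACKUP
    upstream = socket.create_connection(target)  # will raise if both are down
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
```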
New SPOF: the gateway. But that is such a simple device that it's very unlikely to crash (or at least far less likely than the busy web servers behind it).
On Google, search for words like redundancy, backup, load balancing (which is basically sort of the same thing), heartbeat, or even just the question you asked here; Google is clever like that :-)