I recently developed and installed a database for a small business; it runs on a Debian cluster in their office.
Now, we want the database running in their office to export some of its data (4 tables out of 13) to an external shared-host webserver, where that information will be used to automate some of their web content. Soon after that, I will need to accept some information at the website and replicate those changes back to the home database. This replication doesn't need to be real time, though that would be nice.
I don't want to replicate the entire database, only four of the tables, and possibly only certain rows in those tables.
The internal database is behind NAT and a firewall; only ports 22 and 443 are open. My current plan is to export the relevant tables as SQL files via mysqldump and copy them to the webserver via scp. A cron job on the webserver would look for updated files every few minutes, and when changes are discovered, it would reload the web database content from the new files. All the database entries are timestamped, so I could export only recently-changed rows (though this database is small enough that it may not matter).
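For what it's worth, here is a rough sketch of what I have in mind for the office-side export step, as a Python script that cron could run every few minutes. The database, table, and host names are placeholders, not the real setup, and it assumes MySQL credentials live in ~/.my.cnf so they don't appear on the command line.

```
#!/usr/bin/env python3
"""Office-side export sketch: dump the four tables and copy them over SSH.

All names here (database, tables, remote path) are placeholders.
Credentials are expected in ~/.my.cnf under [client].
"""
import subprocess

DB = "shopdb"                                        # placeholder database name
TABLES = ["products", "prices", "stock", "orders"]   # placeholder names for the four tables
DUMP_FILE = "/tmp/web_export.sql"
REMOTE = "webuser@www.example.com:incoming/"         # placeholder shared-host account

# Dump only the selected tables to a local file.
with open(DUMP_FILE, "w") as out:
    subprocess.run(["mysqldump", DB, *TABLES], stdout=out, check=True)

# Port 22 is already open through the firewall, so plain scp works.
subprocess.run(["scp", DUMP_FILE, REMOTE], check=True)
```

On the webserver, a matching cron job would notice when the file's mtime or checksum changes and reload it with something like `mysql webdb < web_export.sql`; since mysqldump includes DROP TABLE / CREATE TABLE statements by default, that reload cleanly replaces the four tables. mysqldump's --where option could restrict the dump to recently-changed rows using the timestamp column, but given how small the database is, a full dump of the four tables is probably simpler.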
This solution feels somewhat clunky to me, though, so I wanted to know if anyone has better suggestions.
Thanks!