Hi,

just a quick one:

if I make a backup of all the system's MySQL tables, which are located in /var/lib/mysql, and all those files then get deleted, can I simply copy the backups back into the directory and everything will work again, or do I have to work with mysqldumps?

Thanks...
earswithfeet

    • [deleted]

    MySQL boasts that you can back up your data by copying the datafiles, but you have to stop the database or lock the tables before you copy the files.

    Mysqldump is a much more reliable method!
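
    For example, something as simple as this works while the server stays up (just a sketch; credentials and the target path are made up):

    # dump everything to one file, read-locking each table as it is dumped
    mysqldump --all-databases --lock-tables -u root -p > /backup/all-databases.sql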

      Yes, it is possible, but shut down the database before you make the backup; after the backup you can start the database again.
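
      On a Red Hat box that would be something along these lines (a sketch, assuming MySQL was installed from the RPMs and /backup exists):

      /etc/rc.d/init.d/mysqld stop      # no writes can happen now
      cp -a /var/lib/mysql /backup/mysql-`date +%Y%m%d`
      /etc/rc.d/init.d/mysqld start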

        Oh,

        I see.
        So, what could happen if I don't shut down the database in the meantime?
        Or let's say: what would you recommend as the best option for creating backups of all system-wide MySQL databases (on a RedHat 7.x system)?
        A "niced" mysqldump of all tables?
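
        Something like this is what I have in mind (just a guess at the options):

        # run the dump at the lowest CPU priority
        nice -n 19 mysqldump --all-databases -u root -p > /backup/all-databases.sql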

        Thanks for your replies!

        earswithfeet

          Thanks a lot 🙂

          ...well, seems I will have to update my backup script 😉

          earswithfeet

            14 days later

            Hi,

            Some hosts don't allow us to use the mysqldump program; system calls are forbidden. So the solution would be to dump it the way phpMyAdmin does, by generating MySQL statements. But I really wonder how we can shut down the database, especially if we're not the host. I think the solution would be to lock the tables and to use the backup SQL statement.
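
            Roughly like this, I suppose (table name and path are made up; BACKUP TABLE only works for MyISAM tables and only copies the .frm and .MYD files, so RESTORE TABLE has to rebuild the indexes afterwards):

            LOCK TABLES mytable READ;
            BACKUP TABLE mytable TO '/path/to/backup';
            UNLOCK TABLES;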

            JM

              • [deleted]

              "I think the solution would be to lock the tables and to use the backup SQL statement."

              Not very acceptable if you have a busy site.

              Most (if not all) hosts already make regular backups of the databases. If you suck up to them they are likely to make a new crontab entry to give you your own private backup of your data.

                Ah,
                ok...
                but how do they make the backups?
                I doubt they lock the tables, since clients on their servers could come from all around the world, so there wouldn't be something like a time frame where the servers aren't used?
                Btw: I experimented a little with both options: file copy (via rsync) and mysqldump,
                ...and must say that the file copy is much safer (or am I missing something?) than the mysqldump.
                mysqldump simply stopped on several occasions when tables were broken or contained other errors, and I didn't manage to make it ignore those errors 🙁

                  • [deleted]

                  "but how do they make the backups?"

                  Probably once using mysqldump, and once via the complete system backup, which incidentally also includes the datafiles.

                  "...and must say that the file copy is much safer (or am I missing something?) than the mysqldump."

                  Indeed you are missing something.
                  Backing up the files seems safe, but requires that you lock the tables and preferably shut down the database before you start.

                  More to the point, backing up the files means you back up everything, including any inconsistencies in the data. Should one of these inconsistencies crash the database later, restoring the datafiles brings the database server back into the state it was in right before the crash, which means it will soon crash again. Using mysqldump means you back up the SQL required to rebuild the datafiles, and SQL cannot build inconsistencies.
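
                  If you do copy the files, MySQL also ships a script called mysqlhotcopy that does the flushing and locking for you (MyISAM tables only). Roughly, with database name and target made up:

                  mysqlhotcopy --user=root --password=secret mydb /backup/mysql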

                  One thing to note about mysqldump is that you should always run it from a crontab process, and never ever through a web interface. Why? Because when you run it through a web interface, the process stops when you press 'stop' in your browser or when the page times out (think of the infamous 30-second limit in PHP).
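
                  A crontab entry could look roughly like this (schedule, credentials and paths are only examples):

                  # every night at 03:00, dump everything to one file
                  0 3 * * * mysqldump --all-databases -u backup --password=secret > /backup/mysql/all.sql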

                    ok,

                    ...but is there any way I can make sure that, if I am going to make use of cron jobs, the mysqldump will work properly?
                    As said before, I happened to experience errors on tables which stopped the mysqldump process immediately.
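
                    Looking at the mysqldump man page, maybe --force is what I need? Something like this (untested):

                    # --force should make mysqldump continue past SQL errors
                    # on broken tables instead of aborting
                    mysqldump --force --all-databases -u root -p > /backup/all.sql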

                    ...but the part about the file copy restoring the inconsistencies along with the data sounds reasonable. Thanks 🙂

                      Originally posted by vincente
                      One thing to note about mysqldump is that you should always run it from a crontab process, and never ever through a web interface. Why? Because when you run it through a web interface, the process stops when you press 'stop' in your browser or when the page times out (think of the infamous 30-second limit in PHP).

                      Are you sure? 😉 On a *nix system, wouldn't a new process (mysqldump) be forked and continue on even if the script timed out?

                      But I agree - stick it in crontab.

                        • [deleted]

                        "Are you sure? On a *nix system, wouldn't a new process (mysqldump) be forked and continue on even if the script timed out?"

                        Yes, but that's only half the story.
                        Especially on large databases you'll want to split the dump up into smaller parts (think of log tables: they are huge and you'll want a backup, but you don't have to restore them when the master data fails). And if the script times out in between multiple dumps, then the entire backup is invalid.
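
                        In the backup script that means something like this (database names and paths are made up):

                        # one file per database, so a failure in one dump
                        # doesn't invalidate the others
                        for db in masterdata customers logs; do
                          mysqldump -u backup --password=secret "$db" > /backup/mysql/$db.sql
                        done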

                          ok,
                          but I am not getting any kind of timeouts, rather some kind of error message.
                          I will try to get a proper copy of the message and post it here.
