Hi,

We need to back up some large (10 GB+) MySQL databases. The databases cannot be taken offline while we mysqldump them, and the load of the dump more or less pulls the entire system down (PHP times out, etc.).

We have tried binary logging for replication, but that broke down under high load.
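For reference, the binary logging on the master was enabled along these lines (a minimal sketch; log-bin and server-id are the standard MySQL options, but our actual my.cnf has more in it):

    # my.cnf on the master -- turn on the binary log for replication
    [mysqld]
    log-bin         # write all updates to the binary log
    server-id=1     # must be unique across the master and its slaves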

Is there any way to dump the data (one way or another) for backup?

Yours
Lasse Laursen, Systems Developer

    sorry, I don't know the answer, but I have to ask...

    if it's 10G, and in attempts up until now you have "crashed" it... then why is nobody saying, "we have to do this offline"?

    I work for a very large corporation which I cannot name, but it is the #1 maker of oncology systems in the world - see, I didn't say it =)

    But even we need to take time on weekends to go offline. Hell, eBay goes offline every Thursday night/Friday morning at around 2am to do their backups. It's part of the game when you have very large databases.

    The key is minimizing the impact.

    And I would figure that if MySQL is not suited to online backups, then a longer-term solution would be to migrate to a more robust environment.

    If your IT director cannot see that at least you need to do it once while you investigate better options, then perhaps you need to point HIS boss to this thread =)

    Far better to shut down for a few hours than to possibly lose all that data, no?

      BTW, as far as long-term options go, I know that the normal sysbackup for an AIX 4.3 IBM server works for us. That is an Oracle database w/ SAP running on top.

      We have never really had any calls during backups complaining that response time is slow. And I know from doing the stats that the avg. response time comes in under 1s 80-90% of the time, and under 3s 95-98% of the time.

      That is on our "main" server. If there is any "slowness" it would only be during backups, and it would be minuscule and probably transparent to users, given the other transport/latency delays that we could, "cough cough", blame =)

      My brother works as a Data Modeler for Schwab, which also runs Oracle, and since I've been tinkering w/ MySQL for home/NOC projects, I asked him a few questions. He was telling me that sure, under 10,000 records MySQL should do fine, but beyond that it will run into problems - overhead, speed, etc...

      It is free, but free comes at a price too =)

        6 days later

        MySQL does have its problems, but they are not related to speed. 10,000 records is nothing to MySQL. 10 GB, on the other hand, is a little more data than I would trust to MySQL. Have you checked into PostgreSQL? It apparently scales much better than MySQL, although it takes a lot of tuning.

        With MySQL, the real question about backups is, "How on earth can you guarantee data integrity if you don't go offline for a backup?" MySQL has no way of keeping your related keys together, so if you are backing up in the middle of operations involving INSERTs and UPDATEs, the differences between the beginning of the backup and the end can produce all kinds of anomalies. There is only one way to maintain data integrity with a live dump: use "mysqldump -l", which locks all tables before starting the dump and doesn't release them until done. Now, this will still allow reads, I believe, but writes cannot occur, so you are essentially down anyway. (http://www.mysql.com/doc/m/y/mysqldump.html)
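        For example, a locked live dump looks something like this (the user, database name, and output path here are just placeholders):

            # Lock all tables for the duration of the dump; reads still work, writes block
            mysqldump -l -u root -p mydb > /backup/mydb.sql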

        ...the load of the dump more or less pulls the entire system down (php times out, etc.) <<

        Why is PHP involved in a dump?

        Anyway, if you go offline, turn off all processes besides mysqld, and use the "mysqldump -l" switch, then the backup can occur much more quickly. Essentially the dump takes place at pure file I/O speed.

        Secondly, if you go offline for just a few minutes, you don't even need to use "mysqldump". Just back up your MySQL data directory and you are fine. And don't keep the server offline while writing to tape! Just copy the data to another directory, start your server up again, and then let the tape drive take its time archiving the copied directories. On a decent RAID system, copying 10 GB to another directory should take only a few minutes - maybe even seconds, if you use a binary dump program instead of simple file copying.
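        In outline, that approach might look like this (a sketch; the init script location, data directory, and tape device are assumptions, so adjust for your system):

            # Quiesce MySQL, snapshot the data directory, restart -- minutes of downtime
            /etc/init.d/mysql stop
            cp -a /var/lib/mysql /backup/mysql-snapshot
            /etc/init.d/mysql start
            # Archive to tape at leisure while the server is already serving again
            tar -cf /dev/st0 /backup/mysql-snapshot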

        Another question: how often are you optimizing your tables, or flushing the connections?
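        (Both maintenance steps can be scripted; the database and table names below are just placeholders:)

            # Defragment a table, then flush the table and host caches
            mysql -u root -p -e "OPTIMIZE TABLE mydb.mytable"
            mysqladmin -u root -p flush-tables
            mysqladmin -u root -p flush-hosts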
