Hello
I have quite a few databases with a lot of operations (inserts, updates...).
Do you advise a daily optimisation of the tables? And what about a repair in case of problems?
bye
Yes, that's the idea, to run an OPTIMIZE each day.
As for a daily repair... any risk?
As to the repair, here's a quote from the MySQL manual...
Normally, you should never have to run this statement. However, if disaster strikes, REPAIR TABLE is very likely to get back all your data from a MyISAM table. If your tables become corrupted often, you should try to find the reason for it, to eliminate the need to use REPAIR TABLE.
As to the optimize...
In most setups, you need not run OPTIMIZE TABLE at all. Even if you do a lot of updates to variable-length rows, it is not likely that you need to do this more than once a week or month, and only on certain tables.
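For what it's worth, a minimal sketch of what such an occasional maintenance pass might look like (the table name here is made up):

    -- Defragment and reclaim space after heavy INSERT/UPDATE/DELETE activity
    OPTIMIZE TABLE orders;

    -- Only when something has actually gone wrong: check first, repair if needed
    CHECK TABLE orders;
    REPAIR TABLE orders;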
Further to bradgrafelman's post, you should benchmark your performance when the db is new and then periodically collect current performance data, say once a week at first. You will learn which tables and queries deteriorate and can then set up an appropriate maintenance program. Generally you will find that only a few tables ever need to be optimised or re-indexed, and then only after particular operations.
One of the things you will find is that there is loads of info about db design and construction, even about disaster recovery and migration, but not a lot out there about basic 'housekeeping'. It is drab, boring stuff that no-one particularly wants to think about or write about, but it has to be done just like servicing your car.
As you build your db you should also build your maintenance plan and tools. Just created 2 related tables? Now write the queries to test for widows and orphans, like the ones sketched below. Any data that is supposed to be within prescribed limits should be tested for out-of-range values, and any application-critical data should be especially vetted. Set up a script that runs the lot on a regular basis, and most of the time you will get nothing. Which means everything is OK, except now you know it instead of hoping it.
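As a concrete illustration, here is a minimal widow/orphan check on two hypothetical tables, customers and orders (all names are made up):

    -- Orphans: orders pointing at a customer that no longer exists
    SELECT o.id
    FROM orders o
    LEFT JOIN customers c ON c.id = o.customer_id
    WHERE c.id IS NULL;

    -- Widows: customers with no orders at all (may be fine, may be a bug)
    SELECT c.id
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    WHERE o.id IS NULL;

    -- Out-of-range check on application-critical data
    SELECT id FROM orders WHERE quantity < 1 OR quantity > 1000;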
Hmm. That's interesting. I didn't even think about the maintenance involved. I just assumed that it works, and when it doesn't, you go to your backup copy. But I guess that, just as a hard drive needs defragging regularly, you need to do the same to the db. So, any tips/tricks/suggestions for keeping the db maintained? Thanks.
There is normally no need to do any regular "tuning" of a MySQL table. It will automatically maintain all the structures needed.
However, it doesn't automatically create indexes for you, so you need to ensure that the right indexes exist (a quick example below), but once you've created them, it automatically maintains them for you (like any other RDBMS).
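For example (the table and column names are just for illustration):

    -- One-off: create the index yourself
    CREATE INDEX idx_orders_customer ON orders (customer_id);
    -- From then on MySQL keeps it up to date on every INSERT, UPDATE and DELETE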
The OPTIMIZE and REPAIR are really for exceptional circumstances - REPAIR is normally only required if something fairly drastic has gone wrong.
Mark
Cool. It looks like each successive version of MySQL does a better job of maintaining the integrity and structure of the db, right?
Yes, after 30 years RDBMS technology has got pretty smart. DBs self-optimise and integrity-check. It's great.
But all that means is that you are freed from worrying about the vehicle to concentrate on the contents. You see, ultimately the quality and value of a database comes down to the quality of its contents.
Just because the table structure, indexes and relational integrity are fine does not mean that the actual data in the records is not crap. Users will do every possible thing to stuff your data; sometimes you will not be able to believe that someone could do that by mistake - it must have been malicious. But no, it was just a user being a luser.
Also the laws of physics do apply and chaos reigns. Bits do get flipped and index keys become corrupted, data gets lost. You may also have subtle flaws in your design that are not immediately apparent, but that will eventually mean that you cannot get out all the data you put in.
With a small db these things may not matter, but with millions of records you really must implement processes that test for and detect such problems before they get out of hand, along the lines of the sketch below.
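As a starting point, MySQL's built-in consistency checks can be scripted, e.g. (table names made up):

    -- Detect index or data corruption without modifying anything
    CHECK TABLE orders, customers;

    -- Recompute a table checksum; comparing against a known-good value
    -- can flag silent changes in data that should not have changed
    CHECKSUM TABLE orders;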
I should be fine then. The main table in my db will hold at most maybe a couple of thousand records (if I'm lucky). Anything that's been in the main table longer than 30 days will be moved to an archive table, and from that table everything will be deleted after an additional 60 days (btw, thanks for the help with this topic, you helped A LOT). So I never really see my db growing to be that big. Besides, if it does, at that point I'll be making a killing, so I'll be able to hire someone to take care of this.
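For the record, that 30/60-day rotation could be something as simple as this (the table and column names are guesses, not from the actual schema):

    -- Nightly: move rows older than 30 days into the archive
    INSERT INTO main_archive
    SELECT * FROM main WHERE created_at < NOW() - INTERVAL 30 DAY;
    DELETE FROM main WHERE created_at < NOW() - INTERVAL 30 DAY;

    -- Purge archived rows after a further 60 days (90 days total)
    DELETE FROM main_archive WHERE created_at < NOW() - INTERVAL 90 DAY;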