Well, there are so many ways to do all of this, and so much depends on the application and your platform, that a general answer is difficult.
The first thing to code for the project is your error recording. This will be your fall-back if an archive operation fails.
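As a starting point, here is a minimal sketch of such a log, assuming PostgreSQL; the table and column names are purely illustrative:

```sql
-- Hypothetical error log; adjust names and types to your platform.
CREATE TABLE archive_error_log (
    id          BIGSERIAL PRIMARY KEY,
    occurred_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    table_name  TEXT NOT NULL,   -- which table's archive step failed
    record_id   BIGINT,          -- the row that failed to archive
    error_text  TEXT             -- whatever the driver or database reported
);
```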
The easiest, quickest, and most maintainable trigger to code is one for each table that takes a snapshot of the whole record after any update. Then you don't have to worry about which updates you need to record. It is Sod's Law that down the line you'll find yourself wishing you had recorded some of the stuff you didn't think you'd need.
All triggers should operate as transactions: if the archive query fails, the update is rolled back and an error condition is raised for you to deal with in your code.
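For concreteness, here is a minimal sketch of such a snapshot trigger, assuming PostgreSQL 11 or later; the customers table and its archive are purely illustrative names:

```sql
-- Archive table with the same columns as the live table,
-- plus a timestamp recording when the snapshot was taken.
CREATE TABLE customers_archive (
    LIKE customers,
    archived_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION archive_customer_row() RETURNS trigger AS $$
BEGIN
    -- Snapshot the whole record as it stands after the update;
    -- archived_at falls back to its DEFAULT since it is the last column.
    INSERT INTO customers_archive SELECT NEW.*;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER customers_after_update
AFTER UPDATE ON customers
FOR EACH ROW EXECUTE FUNCTION archive_customer_row();
```

Because the trigger runs inside the same transaction as the UPDATE that fired it, an error in the archive INSERT rolls the whole statement back.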
You may not be able to afford the disk space that whole-record snapshots entail. Just record what you need and can handle.
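In that case, create a narrower archive and insert only the columns you care about in the trigger; a sketch, with hypothetical columns:

```sql
-- Slim archive: only the columns worth keeping (hypothetical names).
CREATE TABLE customers_archive_slim (
    customer_id BIGINT NOT NULL,
    email       TEXT,
    archived_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- The corresponding trigger body then becomes:
-- INSERT INTO customers_archive_slim (customer_id, email)
-- VALUES (NEW.id, NEW.email);
```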
If you don't have triggers, then in your code procedures you again use transactions: first update the main record, then record the new data in the archive. If the archive query fails, the update is rolled back; if either query fails, you have an error condition to handle.
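In plain SQL that looks something like this, carrying over the illustrative table names from the sketches above:

```sql
BEGIN;

UPDATE customers
SET    email = 'new@example.com'
WHERE  id = 42;

-- Snapshot the freshly updated row into the archive.
INSERT INTO customers_archive
SELECT * FROM customers WHERE id = 42;

COMMIT;
-- On any error, issue ROLLBACK instead of COMMIT and write the
-- failure to your error log from your application code.
```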
You will notice that I archive after the update, since I record current state in my archives; some would archive before the update, capturing the current state just as it is replaced and becomes history. It is very much up to you which you choose.
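If you prefer the archive-before-update approach, the trigger sketch above only needs to fire before the update and capture OLD instead of NEW:

```sql
-- Archive-before variant: record the state being replaced.
CREATE OR REPLACE FUNCTION archive_customer_row_old() RETURNS trigger AS $$
BEGIN
    INSERT INTO customers_archive SELECT OLD.*;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER customers_before_update
BEFORE UPDATE ON customers
FOR EACH ROW EXECUTE FUNCTION archive_customer_row_old();
```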
Either strategy allows you to roll back to a previous state, whether the user wants to or you do.
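Restoring is then a matter of copying the relevant archived values back into the live row; a sketch, with hypothetical column names:

```sql
-- Roll a record back to its most recent archived state (illustrative;
-- the name/email columns are hypothetical).
UPDATE customers c
SET    name  = a.name,
       email = a.email
FROM   customers_archive a
WHERE  c.id = 42
  AND  a.id = 42
  AND  a.archived_at = (SELECT max(archived_at)
                        FROM customers_archive
                        WHERE id = 42);
```

Note that this UPDATE will itself fire the archive trigger, which is usually what you want: the roll-back becomes part of the history too.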
I do it this way because the archive can be located in a different database on a different disk or server. It always contains the latest state, which makes recovery a lot easier if the main database gets corrupted. Keeping the main database small also makes it faster in a production environment where users look up data more than they edit or add to it.