Ooooh. That's a tuffy.
I'd suggest picking up a good SQL book like "SQL for Smarties" by Celko. It's a book every beginning db developer should read.
Basically good database design is a compromise between complexity, speed, and ease of maintenance.
A completely normalized database is usually the easiest to maintain, but often the slowest as well. Most people settle on second or third normal form (2NF or 3NF) and stop there so they don't have two dozen tables joined in every query.
Also, if you are going to design something with two dozen tables that are variously joined in strange and interesting ways, then benchmark your database doing exactly that. You may find a very real performance limit in some dbs as the table count in a joined query rises.
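For example (the table names here are made up, and EXPLAIN syntax varies by database), load realistic row counts and look at the plan for the kind of query you'll actually be running:

    -- A registration-style query touching five hypothetical tables; check what
    -- plan your database picks before betting the design on it.
    EXPLAIN
    SELECT s.name, m.acronym, c.title, i.name
      FROM students     s
      JOIN majors       m ON m.major_id      = s.major_id
      JOIN enrollments  e ON e.student_id    = s.student_id
      JOIN courses      c ON c.course_id     = e.course_id
      JOIN instructors  i ON i.instructor_id = c.instructor_id;

If the plan falls apart at five tables, it's not going to get prettier at twenty-four.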
The very basic theory of normalization is that if you have the exact same piece of data repeated in a table, you should move that data to another table and set up a many-to-one relationship.
For instance, if you have a field in the student table that lists major, you may have BSEE in that field 300 or more times. You could instead make a table called majors with an indexed autoincrementing key and a major, and store that key in the student table instead of the major.
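Just as a sketch (MySQL-ish syntax, made-up column names), the two layouts might look like this:

    -- Flat layout: the major acronym is repeated in every student row.
    CREATE TABLE students_flat (
        student_id INTEGER PRIMARY KEY,
        name       VARCHAR(100),
        major      VARCHAR(10)        -- 'BSEE' shows up 300+ times
    );

    -- Normalized layout: majors get their own table, students store the key.
    CREATE TABLE majors (
        major_id   INTEGER PRIMARY KEY AUTO_INCREMENT,
        acronym    VARCHAR(10) NOT NULL
    );

    CREATE TABLE students (
        student_id INTEGER PRIMARY KEY,
        name       VARCHAR(100),
        major_id   INTEGER,
        FOREIGN KEY (major_id) REFERENCES majors (major_id)
    );

The 300 copies of 'BSEE' collapse into one row in majors, and anything that needs the acronym back joins through major_id.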
But this gains you little. How often does the acronym for a major change? What if half the BSEEs get together and create a new major, BSET or something? Then you probably have just as much work to do with the normalized database as with the non-normalized one.
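For the rare case where an acronym really does change wholesale, it's one statement either way (using the made-up tables from above), just a one-row update versus a mass update:

    -- Normalized layout: touch one row in majors.
    UPDATE majors SET acronym = 'BSET' WHERE acronym = 'BSEE';

    -- Flat layout: touch every affected student row.
    UPDATE students_flat SET major = 'BSET' WHERE major = 'BSEE';

A split is different: whichever layout you pick, you still have to decide student by student who moves to the new major, so the lookup table buys you nothing there.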
So, normalize things that need to be swapped around easily (and often are), but that would be repetitive and error-prone if you put them in the table directly.