It really depends on the uniqueness of the other fields.
When your DB is getting towards the size you expect it to be (or just 'big'), look at the 'uniqueness' of the data versus the size it takes up, and base the decision to split fields out into other tables (or not) on that.
I'll give you an extreme example, just to illustrate:
Let's say we have a table:
id (int primary key)
name (varchar 32)
company (varchar 250)
and you then enter every employee of Microsoft, AMD, Intel and, um, Walmart.
You end up with a few million entries: the id will be unique, the name will have perhaps a million unique entries, and the company will have... 4 unique entries.
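You can check that kind of cardinality directly; a rough sketch against the example table (using the 'employees' name we'll give it below):

    -- How many rows total, and how many distinct values per column?
    SELECT COUNT(*)                AS total_rows,
           COUNT(DISTINCT name)    AS unique_names,
           COUNT(DISTINCT company) AS unique_companies
    FROM employees;

If unique_companies is tiny compared to total_rows, that column is a good candidate for its own table.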
This means you should make a 'companies' table with 4 entries and an int primary key (it could probably even be a smallint, depending on how many companies you plan to add), and then change the company column in the employees table to be:
company (int)
and link it up to the new table, something like the sketch below.
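Here's a rough sketch of what that looks like in MySQL (assuming InnoDB so the foreign key is actually enforced; the column sizes are just the ones from the example):

    -- Lookup table: one row per company
    CREATE TABLE companies (
        id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- a smallint would also do
        name VARCHAR(250) NOT NULL
    ) ENGINE=InnoDB;

    -- Employees now store a company id instead of repeating the company name
    CREATE TABLE employees (
        id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        name    VARCHAR(32)  NOT NULL,
        company INT UNSIGNED NOT NULL,
        FOREIGN KEY (company) REFERENCES companies (id)
    ) ENGINE=InnoDB;

    -- Getting the original 'flat' view of the data back is just a join
    SELECT e.name, c.name AS company
    FROM employees e
    JOIN companies c ON c.id = e.company;

As a bonus, a company name only ever has to change in one place.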
Of course, if you're only ever going to have a few hundred or a few thousand entries in your table, and all the fields are fairly small, then there isn't really going to be much gain from splitting it out (or much loss if you split it out unnecessarily), so it's not something to worry about too much.
And if performance is an issue and the data is large, then do the profiling suggested by Bjom regardless of what anyone advises, and probably also look at the MySQL setup on the server (how much memory is allocated to which storage engine, and so on); you can get even more performance out of that.
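As a very rough starting point (the query is just the example join from above; your own slow queries and settings will obviously differ):

    -- See how MySQL actually executes a query: which indexes it uses, how many rows it scans
    EXPLAIN SELECT e.name, c.name AS company
    FROM employees e
    JOIN companies c ON c.id = e.company
    WHERE c.name = 'intel';

    -- Check how much memory is given to the InnoDB buffer pool
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';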
(As a side note, we're currently moving our site to a new server cluster with 2 servers purely running MySQL: one as a master, the second as a read-only replica, synced together and optimized for InnoDB. They're used by 5 other servers: 3 load-balancing the main site, 1 running HTTPS, and 1 for development. To be honest, that alone gives me a headache 😛)