First, I wouldn't worry so much about table size if your tables are only storing numeric data. Numbers take little space and are fast to index, compare, and manipulate. That is why it is a good idea to have a "Users" table where each user is assigned a numeric User ID; you can then use that User ID in your other tables instead of querying on a full text name. Many of my tables (especially lookup tables) contain nothing but numeric fields. Text data, such as user info, is generally queried just once at login and then loaded into a session variable.
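A minimal sketch of that idea, using Python's built-in `sqlite3` (the table and column names here are hypothetical, not from any particular schema): the "users" table hands out a small integer ID, and the transactions table stores only that integer.

```python
import sqlite3

# Hypothetical schema: "users" assigns each user a numeric ID, and other
# tables store that compact integer instead of repeating the text name.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (
        user_id   INTEGER PRIMARY KEY,
        full_name TEXT NOT NULL
    );
    CREATE TABLE transactions (
        trans_id INTEGER PRIMARY KEY,
        user_id  INTEGER NOT NULL REFERENCES users(user_id),
        amount   REAL NOT NULL
    );
""")
conn.execute("INSERT INTO users (full_name) VALUES ('Alice Example')")
conn.execute("INSERT INTO transactions (user_id, amount) VALUES (1, 19.99)")

# Day-to-day queries filter on the small numeric user_id...
rows = conn.execute(
    "SELECT amount FROM transactions WHERE user_id = ?", (1,)
).fetchall()

# ...while the text name is looked up once (e.g. at login) and cached
# in something like a session variable.
name = conn.execute(
    "SELECT full_name FROM users WHERE user_id = ?", (1,)
).fetchone()[0]
```

The integer join key also means the text name lives in exactly one place, so renaming a user never touches the transaction rows.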
Second, if you get into a situation where your website performance starts to suffer, a good solution is to archive or split the data into a second table. For example, if a user queries a transaction older than a certain date, your code should know to check the trans_archive table rather than the trans_current table. Or you might split tables based on transaction type: a credit card transaction query would go to the cc_trans table, whereas a "terms" transaction query would go to the terms_trans table. But I wouldn't worry about table performance up front unless you already know it will eventually become a problem.
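The date-based routing could look something like this sketch (again with `sqlite3` and made-up table names, and an arbitrary cutoff date chosen purely for illustration):

```python
import sqlite3
from datetime import date

# Assumed split point: rows older than this live in trans_archive.
ARCHIVE_CUTOFF = date(2020, 1, 1)

def table_for(trans_date: date) -> str:
    # Route old transactions to the archive table, recent ones to current.
    return "trans_archive" if trans_date < ARCHIVE_CUTOFF else "trans_current"

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE trans_current (trans_id INTEGER PRIMARY KEY,
                                trans_date TEXT, amount REAL);
    CREATE TABLE trans_archive (trans_id INTEGER PRIMARY KEY,
                                trans_date TEXT, amount REAL);
""")
conn.execute("INSERT INTO trans_archive VALUES (1, '2019-06-01', 50.0)")
conn.execute("INSERT INTO trans_current VALUES (2, '2024-03-15', 75.0)")

def fetch_amount(trans_id: int, trans_date: date) -> float:
    # The table name comes from our own two-item whitelist above, so
    # interpolating it is safe; user-supplied values still use placeholders.
    table = table_for(trans_date)
    row = conn.execute(
        f"SELECT amount FROM {table} WHERE trans_id = ?", (trans_id,)
    ).fetchone()
    return row[0]
```

Splitting by type (cc_trans vs. terms_trans) works the same way: the routing function just switches on transaction type instead of date.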
If the database becomes large enough down the road to justify it, I would look into sharding, i.e. distributing the data across several servers. Google uses this strategy to give us those ultra-fast searches we are all accustomed to, though I think the actual number of Google database servers is strictly confidential.
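At its simplest, sharding just means a deterministic rule that maps a record's key to one server. A toy sketch (the server names are invented for illustration):

```python
import hashlib

# Hypothetical pool of database servers holding one shard each.
SHARDS = ["db-server-0", "db-server-1", "db-server-2", "db-server-3"]

def shard_for(user_id: int) -> str:
    # Use a stable hash (not Python's process-randomized hash()) so every
    # application server maps the same user to the same database server.
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

Note that plain modulo sharding like this reshuffles most keys when you add a server; real deployments often use consistent hashing or a lookup table to keep rebalancing cheap.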