Not sure where to start here.
All computers have resource limits (CPU, memory, hard-disk space), and exhausting any of them will result in a crash (or at least an error condition). That could be what happened in your case.
You didn't say which RDBMS you are using, but they all have limits too. The number of databases, tables, indexes, and rows (among other things) are all bounded. Again, depending on the database system, hitting one of those limits can cause a crash or an error condition.
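As a quick sanity check (I'm assuming MySQL here purely for illustration, since you didn't name your RDBMS; PostgreSQL and others expose similar system catalogs), you can count how many tables you actually have per database:

    -- Count tables per database via MySQL's information_schema.
    -- Other systems have equivalents (e.g. pg_catalog in PostgreSQL).
    SELECT table_schema, COUNT(*) AS table_count
    FROM information_schema.tables
    GROUP BY table_schema;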
If your database server is not severely memory-constrained, then 10,000 rows in a table is not an issue. Even 10,000,000 rows can probably be handled on modern hardware without much sweating (RAM being the primary resource required). But it really depends on the complexity of the data manipulations you are performing on those tables. Simple (or even fairly complex) lookups on a single table will be very fast, even with many rows. JOINs and sub-queries will slow things down, sometimes drastically.
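To make that concrete (table and column names here are invented for the example), compare an indexed single-table lookup against a multi-table join:

    -- Fast: a single-table lookup that can use an index on user_id.
    SELECT * FROM orders WHERE user_id = 42;

    -- Slower: every additional JOIN (and the sub-query) multiplies the
    -- work the query planner must do, especially without good indexes.
    SELECT u.name, o.total, i.sku
    FROM users u
    JOIN orders o      ON o.user_id  = u.id
    JOIN order_items i ON i.order_id = o.id
    WHERE o.total > (SELECT AVG(total) FROM orders);

Running EXPLAIN on each query (supported by most RDBMSs) will show you how much more work the second one asks of the planner.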
I have a web server being fed data from a dual Athlon 2600+ with 2 GB of RAM which routinely returns results from a six-table join in about 5-15 seconds. The tables have between 10,000 and 100,000 rows each. I can't even measure the time it takes to return a single-table query (it is under one second), even from my largest table.
Personally, I would rather manage rows than tables. Without knowing more about the data you are manipulating, I can't be much more help. My guess is that having so many tables is less efficient than having more rows in fewer tables, but you would have to ask the RDBMS designers to answer that definitively. A sketch of the kind of restructuring I mean follows.
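Here is the rows-over-tables idea as a schema sketch (the schema is entirely hypothetical, since I don't know your data): instead of creating one table per user or category, fold the distinguishing value into an indexed column of a single table.

    -- Many-tables approach: one table per user. Hard to manage, and it
    -- hits per-database table limits much sooner.
    --   CREATE TABLE readings_user1 (taken_at DATETIME, value FLOAT);
    --   CREATE TABLE readings_user2 (taken_at DATETIME, value FLOAT);
    --   ...

    -- Rows approach: one table, with the user as an indexed column.
    CREATE TABLE readings (
        user_id  INT      NOT NULL,
        taken_at DATETIME NOT NULL,
        value    FLOAT    NOT NULL,
        INDEX (user_id, taken_at)  -- MySQL syntax; adjust for your RDBMS
    );

    -- Queries stay simple, and the index keeps them fast.
    SELECT taken_at, value FROM readings WHERE user_id = 42;

With the composite index in place, pulling one user's rows out of millions is still a cheap indexed lookup, which is exactly the single-table case I described as very fast above.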