Hello,
I'd like to know which approach is fastest in terms of processing:
I have a table with ID_author, Book...
Assume the table has, say, 100,000 authors with 10 books each: 1,000,000 rows.
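To make this concrete, the table I have in mind looks roughly like this (just a sketch, the column types are made up):

    CREATE TABLE books (
        ID_author INT NOT NULL,
        Book      VARCHAR(255) NOT NULL
    );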
If an author connects to see which of their books are registered in the database, which is faster:
having one big table in which MySQL searches for the results (10 rows out of 1,000,000), or
having this table split into small ones, one per author: 100,000 tables of 10 rows each?
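In other words, the two options I am comparing would look something like this (again just a sketch; the index and table names are made up, and for the first option I am assuming an index on ID_author so MySQL can find one author's rows without scanning everything):

    -- Option 1: one big table, with an index on the author column
    CREATE INDEX idx_author ON books (ID_author);
    SELECT Book FROM books WHERE ID_author = 42;

    -- Option 2: one small table per author, e.g. books_42 for author number 42
    CREATE TABLE books_42 (Book VARCHAR(255) NOT NULL);
    SELECT Book FROM books_42;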
I mean, in the first case, we can imagine that if one row weighs about 1 KB, we would need at least 1 GB of RAM to hold the whole table in memory. I may be wrong, because I have no idea how memory is managed by database engines.
In the other case, when an author connects, MySQL has to load that author's table into memory, since we assume the table is not already cached. That means a disk access, which is slow, but the system needs less RAM to run.
My conclusion is: if the same 1,000 authors connect every day, the second option is far better (maybe their tables stay in memory), but if many different authors connect, the first one may be better. No?
Bonus track: if a user has the right to browse all books, we need a big table anyway (as in the first example), so when a book is added we have to update TWO tables (the big one and the author's).
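So adding a book would mean two statements instead of one, something like this (same made-up names as above):

    -- keep the big table and the author's own table in sync
    INSERT INTO books    (ID_author, Book) VALUES (42, 'New book title');
    INSERT INTO books_42 (Book)            VALUES ('New book title');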
In that case, which is exactly my situation, isn't it simpler to have only one BIG table???