So we're talking 5 megs of data then? Maybe a little more? You could use a database, but you'd be better served with htdig and flat files.
My postgresql documentation is 7836k, and this command:
egrep -ri index *
takes 0.695 seconds on my PI300. In case you aren't familiar with egrep, it lets you look through a bunch of files for a string. That command up there says to grep -r(ecursively) and -i(gnoring case) for the word "index" in all the files in this directory.
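If you want to reproduce that timing on your own box, the shell's time builtin will tell you (expect the first run to be slower while the files come off disk; repeat runs hit the filesystem cache):

time egrep -ri index *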
So, how do you want to search your data? Do you want phrase searching? Single-word search, near matching based on soundex or metaphone, prefixes, suffixes, alternate spellings, etc.?
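To give you an idea, plain egrep handles single words and simple regexes fine, but even a two-word phrase gets fiddly. Something like this matches "create index" with any run of whitespace between the words, but never across a line break:

egrep -ri 'create[[:space:]]+index' *

Soundex matching, stemming, and the rest you would be writing yourself on top of that.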
You can write all that, or you can just put the files into a flat(looking) structure and run htdig across it.
It took htdig 47 seconds to index the 7.5 Meg postgres documentation directory on my box. After that, I can handle 34 such searches every second. It took me more time to write this message than it did to install htdig (honestly, it is that easy) and run it.
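For the curious, the whole setup is roughly this (a sketch, not gospel: the paths are just examples and will differ on your install, but database_dir, start_url, and local_urls are the actual attribute names in htdig's conf file):

# htdig.conf
database_dir:  /var/lib/htdig/db
start_url:     http://localhost/pgdocs/
local_urls:    http://localhost/pgdocs/=/usr/doc/postgresql/

Then run rundig to build the index; it calls htdig and htmerge for you, and htsearch (a CGI) answers the actual queries.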
I'm not knocking mysql, but this kind of text searching is a lot harder to program than most people think.