Are you sure you can't just optimize your existing database to get the speed increase you need?
If you were to build your own database from flat files and directories, it could definitely be very fast. I did exactly what you're describing once, using a tree of directories and flat files as a database, and the speed was incredible. But there are limitations.
For example, I had a database of zip codes, and for each zip code there was other info like state, latitude, area code, time zone, and a bunch of other stuff. The way I needed to use this data, I always knew the zip code, so the data for zip code 54321 would be located at
zipcode_db/5/4/3/2/1.db
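Here's roughly what that looked like, as a minimal Python sketch. The record format and function names are made up for illustration; the point is that one digit maps to one directory level:

```python
import os

DB_ROOT = "zipcode_db"

def record_path(zipcode):
    # one directory level per digit: "54321" -> zipcode_db/5/4/3/2/1.db
    parts = list(zipcode)
    parts[-1] += ".db"
    return os.path.join(DB_ROOT, *parts)

def write_record(zipcode, data):
    path = record_path(zipcode)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(data)

def read_record(zipcode):
    # the lookup is a single open(): no query planner, no index, no scan
    with open(record_path(zipcode)) as f:
        return f.read()

write_record("54321", "state=XX|lat=44.1|tz=CST")  # made-up record format
print(read_record("54321"))
```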
It was extremely fast, but I could never find out which zip codes were in a certain time zone, or all the zip codes for a specific state. Querying my db that way would take an obscene amount of overhead, because anything that isn't a lookup by zip code turns into a walk over every file in the tree (see the sketch below). I could only efficiently retrieve data given a specific zip code. That was fine for my situation, so I traded flexibility for speed.
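To make that concrete, here's what a time-zone query would have to do against that layout (same made-up record format as above, every file opened and read):

```python
import os

DB_ROOT = "zipcode_db"  # same layout as the sketch above

def find_zips_by_timezone(tz):
    # a secondary-attribute query degrades to a full scan:
    # walk the entire tree and read every single file
    matches = []
    for dirpath, _, filenames in os.walk(DB_ROOT):
        for name in filenames:
            with open(os.path.join(dirpath, name)) as f:
                if "tz=" + tz in f.read():
                    # rebuild the zip code from the directory digits + filename
                    digits = dirpath.split(os.sep)[1:] + [name[:-3]]
                    matches.append("".join(digits))  # e.g. "54321"
    return matches
```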
I think you might be better off improving your current database structure and refining the queries you make. Ultimately it won't be as fast as the flat-file approach, but it could easily be more than fast enough.
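For example, indexing the columns you actually filter by usually buys back most of the speed. A minimal sqlite sketch, since I don't know what engine you're on and the schema here is made up:

```python
import sqlite3

conn = sqlite3.connect("zipcodes.sqlite")  # hypothetical file/table names
conn.execute("""CREATE TABLE IF NOT EXISTS zipcodes
                (zip TEXT PRIMARY KEY, state TEXT, timezone TEXT)""")
# indexes on the filtered columns let the engine skip the full scan
# that the flat-file layout forces
conn.execute("CREATE INDEX IF NOT EXISTS idx_timezone ON zipcodes (timezone)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_state ON zipcodes (state)")

# now both the key lookup and the secondary-attribute query are indexed
rows = conn.execute(
    "SELECT zip FROM zipcodes WHERE timezone = ?", ("CST",)
).fetchall()
```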