Here's the problem with that. One of the key benefits of using a database is that you can use transactions to ensure that either every change is made, or none of them is and an error gets thrown in the right place to let you or the user know.
Generally, related information should reside in the same database. If you decide to run several databases, the only way to keep the system portable via ODBC or an abstraction layer is to restrict yourself to the lowest common denominator: features that work on most databases. Transactions that span two databases are not one of those features.
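To make that concrete, here's a rough sketch (the accounts table and transaction IDs are made up for illustration): plain single-database transactions use nearly identical syntax everywhere, while the distributed-transaction machinery you'd need to span two databases differs from vendor to vendor.

-- Portable: a single-database transaction, same idiom on most any DBMS.
begin;
update accounts set balance = balance - 100 where id = 1;
commit;

-- Not portable: atomically spanning two databases needs vendor-specific
-- two-phase commit (syntax varies, check your manual):
--   MySQL:      XA START 'txn1'; ... XA PREPARE 'txn1'; XA COMMIT 'txn1';
--   PostgreSQL: PREPARE TRANSACTION 'txn1'; ... COMMIT PREPARED 'txn1';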
Say you connect to the messages DB and the users DB, insert the message into the messages DB, then update the user info. The user info update fails.
How do you recover from this? The message is posted, but the user profile, maybe with links to the user's messages, didn't get updated. There's no transaction spanning both databases, so you, the programmer, have to go write compensation code that undoes the first insert into messages by hand.
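As a rough sketch of what that cleanup looks like (table and column names here are invented for illustration), you end up doing the database's job yourself:

-- On the messages database: this succeeds and commits.
insert into messages (user_id, body) values (42, 'first post');

-- On the users database: this fails (constraint violation, crash, whatever).
update users set message_count = message_count + 1 where user_id = 42;

-- Now your application has to notice the failure and compensate by hand,
-- back on the messages database. If the app dies before this runs, the
-- two databases are left inconsistent.
delete from messages where user_id = 42 and body = 'first post';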
Put them into the same database:
begin;
insert into table1 (col1) values (1);
delete from table2 where id = 1;
update table3 set col1 = 2 where id = 1;
update table4 set col1 = 3 where id = 1;
commit;
If any one of them fails, you roll back and none of the changes stick. You just throw an error saying which statement failed.
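Here's the failure path as a sketch, using the same made-up tables. The exact error behavior inside an open transaction varies by DBMS, but the rollback idiom is the same everywhere:

begin;
insert into table1 (col1) values (1);
update table3 set col1 = 2 where id = 1;  -- suppose this one fails
rollback;  -- the insert into table1 is undone too; nothing persists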
The idea of spreading your data across all these different databases may seem like it's helping, but in fact it can make your system crawl if you design around a multiple-database scheme.
Hard drives are VERY cheap; a 1-terabyte RAID array is $23,000 nowadays. Putting all the data in one database isn't a problem for any real DBMS, including MySQL. If you think a 23,000,000-row table is slow, just wait till you run an outer join between it and another 23,000,000-row table sitting in a different database. Ugh.
I'd much rather have the two large tables in the same database, where a join can use indexes and behaves predictably from one product to the next.
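Here's roughly what that looks like once both tables live in one database (schema invented for illustration): the query planner can walk an index on the join key instead of your application stitching two result sets together by hand.

-- One indexed outer join inside a single database.
create index idx_messages_user_id on messages (user_id);

select u.username, m.body
from users u
left outer join messages m on m.user_id = u.id;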
Your boss wants you to build a brittle system using proprietary DBMS calls where none would be needed if not for his scheme. Sometimes you gotta just tell them the truth: their idea is not the best way to do it, and here I don't think it is.