Okay, let me step through this...
carlos25 wrote:The original post didn't clarify whether or not the data had to be dynamic or if it could be static. I understand that the original posting stated that what was desired was to host the SQLite DB on the DB server.
Let's look at the original post:
[indent]
rykos wrote:Would it be possible to also allow the two web servers to hit an SQLite database stored on that server? If so, how?
[/indent]
That question right there tells me that they want to use [man]sqlite_open[/man] just like the [man]mysql_connect[/man] function: use SQLite instead of MySQL, but keep the DB on the shared server behind the webservers so that both webservers (which I'm guessing are load-balanced) can work with the same data at any given point in time.
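Roughly what I mean, as a sketch (the hostnames, paths, and credentials here are made up, and it assumes the sqlite extension is available):

[code]
<?php
// MySQL: the client talks to a server process over the network.
$link = mysql_connect('db.internal.example', 'webuser', 'secret');
mysql_select_db('appdata', $link);

// SQLite: there is no server process, so the only way both webservers can
// share one database is to point sqlite_open() at a file sitting on a
// network share exported by the DB box (an NFS/SMB mount, for example).
$error = '';
$db = sqlite_open('/mnt/dbserver/shared/app.sqlite', 0666, $error);
if (!$db) {
    die('Could not open the shared SQLite file: ' . $error);
}
?>
[/code]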
carlos25 wrote:What would be the point of using SQLite to handle data (on the main DB server) when your web servers could just connect to the MySQL DB and handle the data from that connection itself? And if load stress was a concern (they are still using bandwidth on the main DB server) you would be needing another server to handle those requests.
Well, if they wanted to use SQLite, they have their reasons: portability, speed, functionality, who knows. But there are reasons to use SQLite over other database systems. Now, since they already have a system set up (2 webservers and a DB server) and they wanted to add SQLite, the only way to really do that and use it from both webservers was to put SQLite on the DB server. And you don't get charged for "in-house" bandwidth; you can get as much throughput as you want between your own servers in the same data center. The requests don't go anywhere, since all three machines sit behind the same switch and the traffic never goes any further than that switch. So there's literally no cost to adding SQLite in this way (except performance).
carlos25 wrote:User requests data on web server > web server processes that request and retrieves a data set from the main DB server and creates an SQLite DB for that data set locally > the user can work with that data set > when finished, the web server (application) can update the data set to the MySQL DB (main DB server)
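If I'm reading that right, the flow would look something like this (a rough sketch only; the host, table, and column names are invented for illustration):

[code]
<?php
// 1. Pull a data set from the main MySQL server.
$link   = mysql_connect('db.internal.example', 'webuser', 'secret');
mysql_select_db('appdata', $link);
$result = mysql_query("SELECT id, title, body FROM articles WHERE author_id = 42", $link);

// 2. Dump it into a local SQLite "working copy" on the webserver.
$error = '';
$local = sqlite_open('/tmp/workingcopy_42.sqlite', 0666, $error);
sqlite_exec($local, "CREATE TABLE articles (id INTEGER, title TEXT, body TEXT)");
while ($row = mysql_fetch_assoc($result)) {
    sqlite_exec($local, sprintf(
        "INSERT INTO articles VALUES (%d, '%s', '%s')",
        $row['id'],
        sqlite_escape_string($row['title']),
        sqlite_escape_string($row['body'])
    ));
}

// 3. The user edits the local copy ...

// 4. ... and when they're done, everything gets pushed back to MySQL --
//    with no idea whether someone else changed those rows in the meantime.
$edited = sqlite_query($local, "SELECT id, title, body FROM articles");
while ($row = sqlite_fetch_array($edited, SQLITE_ASSOC)) {
    mysql_query(sprintf(
        "UPDATE articles SET title = '%s', body = '%s' WHERE id = %d",
        mysql_real_escape_string($row['title'], $link),
        mysql_real_escape_string($row['body'], $link),
        $row['id']
    ), $link);
}
?>
[/code]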
What you're talking about right there is a transaction in the simplest of terms, and MySQL and PostgreSQL do this already. But let's poke a hole in that. You have two users working on data. They both grab a "snapshot" of the database (1 request) and create an SQLite database. They do what they want, and then it gets synced back to the MySQL database (? requests). Now, what if both users edit the same data to say two different things? How can you ensure data integrity? How will you be able to tell the user, "Uh oh!! Someone changed something while you were editing. Please look the changes over before saving."? You can't do that using SQLite the way you describe; however, with an RDBMS you can do it easily (if you set it up right). So using a regular MySQL connection you'd fire off maybe 2 or 3 queries to update that one row, while with syncing and such you're never sure whether data that was previously edited (or has since been edited) will be overwritten by another user's modification.
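What I mean by "set it up right" is something along the lines of optimistic locking, for example a version column that gets checked on every update (again a sketch; the table and column names are assumptions):

[code]
<?php
$link = mysql_connect('db.internal.example', 'webuser', 'secret');
mysql_select_db('appdata', $link);

// The UPDATE only succeeds if nobody changed the row since we loaded it
// ($versionWeLoaded is the version value we read along with the row).
$sql = sprintf(
    "UPDATE articles
        SET title = '%s', body = '%s', version = version + 1
      WHERE id = %d AND version = %d",
    mysql_real_escape_string($title, $link),
    mysql_real_escape_string($body, $link),
    $id,
    $versionWeLoaded
);
mysql_query($sql, $link);

if (mysql_affected_rows($link) == 0) {
    // Someone else got there first: ask the user to review the newer data
    // instead of silently clobbering it.
    echo "Uh oh!! Someone changed something while you were editing.";
}
?>
[/code]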
The essence of the question was: can you put SQLite on a separate server and use it in much the same way as other database systems (MySQL, PostgreSQL, Oracle, Firebird, MSSQL, etc.)? The answer is yes and no. Yes, you could do it; however, you lose performance because you're working against a shared drive/folder, and you run the risk of not being able to open the file at all if the share isn't set up properly at startup. So if you're going to use a database server, you're better off using MySQL or PostgreSQL. They support transactions (what you pointed out above) and handle them in a much more robust fashion, since you're not relying on what is essentially a flat file.
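For comparison, getting an all-or-nothing update out of MySQL is just a matter of wrapping the queries in a transaction (this assumes a transactional engine such as InnoDB; the accounts table is made up for the example):

[code]
<?php
$link = mysql_connect('db.internal.example', 'webuser', 'secret');
mysql_select_db('appdata', $link);

mysql_query("START TRANSACTION", $link);

$ok = mysql_query("UPDATE accounts SET balance = balance - 50 WHERE id = 1", $link)
   && mysql_query("UPDATE accounts SET balance = balance + 50 WHERE id = 2", $link);

// Either both updates stick or neither does -- no half-synced state,
// and no file locks being held across a network share.
mysql_query($ok ? "COMMIT" : "ROLLBACK", $link);
?>
[/code]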
While it's okay for a developer working locally to use SQLite to kind of "sync" a local database with the remote one, it's not an efficient or useful solution for a production environment. It just isn't what SQLite was intended for.