I have a quick question; hopefully it has a quick/easy answer. 🙂

What would be the best way to utilize SQLite across multiple servers? That is, let's say you have 3 servers -- one primary database server and two basic web servers.

Right now our DB server is running MySQL, and we can connect to it using the DB server's internal IP address. Would it be possible to also allow the two web servers to hit an SQLite database stored on that server? If so, how?

Thanks in advance. 🙂

    I'm inclined to say "no", with a caveat...

    If you set up a share on your DB server that both web servers mount, then it's possible to create an SQLite database inside that "shared drive". The catch is: what happens when the disk fails to mount, or when something goes wrong with the stream while data is being relayed?

    Typically, SQLite databases are meant to live on the same server as the application. It can be done, but it would require a little work to set up properly. I haven't tested anything, but it seems like it would work, as long as the proper permissions are granted.

      That's what I was afraid of. I'd prefer not to set up a shared drive for performance reasons.

      I'll just use MySQL for this particular application.

      Thanks. 🙂

        12 days later

        This is not entirely correct. The beauty of SQLite is that you can use it to hold, manage, or process data outside of a main database. Depending on your needs, you can create a routine that exports data (or a subset of data) from your MySQL database, inserts it into an SQLite DB, and uses it locally on your web servers. As long as your data doesn't need to be dynamic (on-demand/realtime), this should be a workable solution. However, SQLite is not easy to work with for dynamic web pages, and you would have to develop the web interface for your application yourself. But once it's done, SQLite is a great way of storing and managing data outside of a main database.

        Another feature of SQLite that helps when manipulating data is portability: you can take a DB created on a *nix box, move it to your Windows desktop, work with it there, and then move it back to the *nix box without any conversion. The only difference is the SQLite executable that resides on each respective machine. I wouldn't discount SQLite without looking at the entire picture with regard to your needs/requirements.
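        The export-to-local-SQLite routine described above can be sketched as follows (in Python's stdlib `sqlite3` for self-containment, even though the thread concerns PHP; the `users` table and its rows are invented for illustration, and a real routine would pull the rows from a MySQL query):

```python
import sqlite3

# Hypothetical rows exported from the main MySQL database; in practice
# these would come from a SELECT against the DB server.
rows = [(1, "alice"), (2, "bob"), (3, "carol")]

# Build a local SQLite copy on the web server (":memory:" here so the
# sketch is self-contained; a real web server would use a file path).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (id, name) VALUES (?, ?)", rows)
conn.commit()

# The web server can now serve reads without touching the DB server.
names = [r[0] for r in conn.execute("SELECT name FROM users ORDER BY id")]
print(names)  # ['alice', 'bob', 'carol']
```

        Run on a schedule, a routine like this keeps each web server's local copy fresh without any cross-server SQLite access at request time.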

          carlos25: The question wasn't about syncing databases every once in a while. The question was about hosting an SQLite database on server B while server A is the web server. Many setups put the RDBMS on a separate server so that its CPU can focus on one thing (databases). In MySQL it's as easy as calling [man]mysql_connect[/man] to connect to any machine anywhere in the world; you can use an IP or a domain name.

          With SQLite, you have to have write access to a file to do anything with it. So while, yes, you can sync it every day, for the purpose of comparing it to MySQL (where you can connect to any server anywhere from a different server), you MUST share a folder to do that with SQLite. That's just the way it works, unfortunately.
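          To make the contrast concrete, here's a minimal sketch (Python's stdlib `sqlite3`, though the thread is about PHP) showing that an SQLite "connection" is nothing more than opening a file path; the mount-point path in the comment is hypothetical:

```python
import sqlite3

# Unlike mysql_connect(host, ...), sqlite3.connect() takes only a file
# path -- there is no host or port. Server B can reach a database on
# server A only if A's filesystem is mounted on B (NFS, SMB, etc.).
# ":memory:" keeps this sketch self-contained; on a real share the path
# might look like "/mnt/dbserver/app.db" (hypothetical mount point).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (42)")
value = conn.execute("SELECT x FROM t").fetchone()[0]
print(value)  # 42
conn.close()
```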

            To add to bpat1434's point, the SQLite page that lists some Appropriate Uses For SQLite also states that:

            If you have many client programs accessing a common database over a network, you should consider using a client/server database engine instead of SQLite. SQLite will work over a network filesystem, but because of the latency associated with most network filesystems, performance will not be great. Also, the file locking logic of many network filesystem implementations contains bugs (on both Unix and Windows). If file locking does not work like it should, it might be possible for two or more client programs to modify the same part of the same database at the same time, resulting in database corruption. Because this problem results from bugs in the underlying filesystem implementation, there is nothing SQLite can do to prevent it.

            A good rule of thumb is that you should avoid using SQLite in situations where the same database will be accessed simultaneously from many computers over a network filesystem.

            Oh, and hi M.S., I still remember rykos and temporal sands.

              bpat1434:

              Yes, I understand what you're saying. However, you can still retrieve a data set from a MySQL server DB, insert it into an SQLite DB, and use that data locally however you need to, without putting stress on your main DB server. I do this a lot when my remote servers are in locations that don't have persistent internet connections or have special security concerns. The original post didn't clarify whether the data had to be dynamic or could be static. I understand that the original post stated that the desire was to host the SQLite DB on the DB server. Your response was correct in that regard, but it didn't expand on a possible solution or another way of looking at the issue.

              It is true, however, that the web servers couldn't directly connect to the DB server using SQLite the way a traditional MySQL connection works, unless shares are established. But let's expand the debate: what would be the point of using SQLite to handle data (on the main DB server) when your web servers could just connect to the MySQL DB and handle the data from that connection itself? And if load stress was a concern (they are still using bandwidth on the main DB server), you would need another server to handle those requests. I think there is more to the question than what has been conveyed in the posting. I just wanted to point out that there are more ways to look at this than just a direct connection (share) between servers.

              SQLite can be useful if used in a scenario like this:

              User requests data on the web server > web server processes that request, retrieves a data set from the main DB server, and creates an SQLite DB for that data set locally > the user works with that data set > when finished, the web server (application) updates the data set back to the MySQL DB (main DB server)

              This does many things: it takes load off of the main DB server, reduces the risk of data corruption and enhances data integrity, reduces the amount of unwanted/unneeded data the user has access to, and it's disposable (the SQLite DB can be discarded after use).
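              That request > local snapshot > write-back cycle might look something like this (a Python `sqlite3` sketch with a plain dict standing in for the MySQL server; all table, column, and data names are invented):

```python
import sqlite3

# The main DB is simulated with a plain dict; in reality this would be
# the MySQL server. Table and column names are made up for the sketch.
main_db = {1: "draft", 2: "draft"}

# 1. Pull a snapshot from the main DB into a throwaway local SQLite DB.
local = sqlite3.connect(":memory:")
local.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, status TEXT)")
local.executemany("INSERT INTO docs VALUES (?, ?)", main_db.items())

# 2. The user works against the local copy, off the main DB server.
local.execute("UPDATE docs SET status = 'published' WHERE id = 1")

# 3. When finished, push the data set back and discard the SQLite DB.
for doc_id, status in local.execute("SELECT id, status FROM docs"):
    main_db[doc_id] = status
local.close()

print(main_db)  # {1: 'published', 2: 'draft'}
```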

              But I digress... More information is needed to fully know what is needed.

                What would be the point of using SQLite to handle data (on the main DB server) when your web servers could just connect to the MySQL DB and handle the data from that connection itself?

                That assumes that there is a MySQL (or PostgreSQL, etc) database to begin with, but this would not be the case if the current MySQL database was replaced by the SQLite database, or if the SQLite database was intentionally separate. After all, if the intention was to use the MySQL database as it is currently used, then there would be no question in the first place, since the status quo would be maintained.

                  Okay, let me step through this...

                  carlos25 wrote:

                  The original post didn't clarify whether or not the data had to be dynamic or if it could be static. I understand that the original posting stated that what was desired was to host the SQLite DB on the DB server.

                  Let's look at the original post:
                  [indent]

                  rykos wrote:

                  Would it be possible to also allow the two web servers to hit an SQLite database stored on that server? If so, how?

                  [/indent]
                  That question right there tells me that they want to use [man]sqlite_open[/man] just like the [man]mysql_connect[/man] function: they want to utilize SQLite instead of MySQL but keep the DB on the shared server behind the web servers, so that both web servers (which I'm guessing are load-balanced) can utilize the same data at any given point in time.

                  carlos25 wrote:

                  What would be the point of using SQLite to handle data (on the main DB server) when your web servers could just connect to the MySQL DB and handle the data from that connection itself? And if load stress was a concern (they are still using bandwidth on the main DB server) you would be needing another server to handle those requests.

                  Well, if they wanted to use SQLite, they have their reasons: portability, speed, functionality, who knows. But there are reasons to use SQLite over other database systems. Now, since they already have a system set up (2 web servers and a DB server) and they wanted to add SQLite, the only way to really do that and utilize it on both web servers was to put SQLite on the DB server. And you don't get charged for "in-house" bandwidth; you can get as much throughput as you want between your own servers in the same data center. The requests don't go anywhere, since all 3 machines are behind the same switch and a request never travels any further than that switch. So there's literally no cost to adding SQLite in this way (except performance).

                  User requests data on the web server > web server processes that request, retrieves a data set from the main DB server, and creates an SQLite DB for that data set locally > the user works with that data set > when finished, the web server (application) updates the data set back to the MySQL DB (main DB server)

                  What you're describing right there is, in the simplest of terms, a transaction, and MySQL and PostgreSQL do this already. But let's poke a hole in it. You have two users working on data. They both grab a "snapshot" of the database (1 request) and create an SQLite database. They do what they want, and then it gets synced back with the MySQL database (? requests). Now, what if both users edit the same data to say two different things? How can you ensure data integrity? How will you be able to tell the user, "Uh oh! Someone changed something while you were editing. Please look the changes over."? You can't do that syncing SQLite the way you describe; with an RDBMS, however, you can easily do it (if you set it up right). So using a regular MySQL connection you'd fire off maybe 2 or 3 requests to update that one row, while with syncing you're never sure whether data that was previously edited (or has since been edited) will be overwritten by another user's modification.
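                  The "someone changed something while you were editing" check that an RDBMS enables is commonly done with a version column (optimistic locking). A minimal sketch, in Python's `sqlite3` purely for self-containment (the mechanism is plain SQL); the table and column names are hypothetical:

```python
import sqlite3

# Optimistic-locking sketch: a version column lets the server detect
# that someone else changed a row after the client read it.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE rows (id INTEGER PRIMARY KEY, val TEXT, version INTEGER)")
db.execute("INSERT INTO rows VALUES (1, 'original', 1)")

def save(conn, row_id, new_val, read_version):
    """Write only if the row still has the version the user read."""
    cur = conn.execute(
        "UPDATE rows SET val = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_val, row_id, read_version),
    )
    return cur.rowcount == 1  # False means someone else got there first

# Two users both read the row at version 1, then both try to save.
first = save(db, 1, "edit by user A", 1)   # succeeds; version is now 2
second = save(db, 1, "edit by user B", 1)  # fails; row moved to version 2
print(first, second)  # True False
```

                  The second writer's UPDATE matches no rows, so the application can prompt that user to review the conflict instead of silently overwriting the first edit.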

                  The essence of the question was: can you put SQLite on a separate server and utilize it in much the same way as other database systems (MySQL, PostgreSQL, Oracle, Firebird, MSSQL, etc.)? The answer is yes and no. Yes, you could do it; however, you lose performance because you're working on a shared drive/folder. You also run the risk of not being able to open the database if the share isn't mounted properly at startup. So if you're going to use a database server, you're better off using MySQL or PostgreSQL. They support transactions (what you pointed out above) and can do it in a much more open fashion, since you're not relying on what is essentially a flat-file system.

                  While it's okay for a developer working locally to use SQLite to kind of "sync" a local database with the remote one, it's not an efficient or useful solution for a production environment. That just isn't how SQLite was intended to be used.
