I think I've heard that the maximum database size for MySQL running on Linux is 2 or 4 GB. Can anyone confirm this?

What about postgresql?

Anyone know about MS SQL Server? Can't it store like 1 terabyte?

Does PHP on linux work well when talking to a windows server running ms sql server?

Thanks in advance for any information or insight.

    Postgresql can handle essentially unlimited amounts of data; for the most part you're limited only by your hardware. There are multi-terabyte databases running on it.

    The maximum size for a text field is 1 GB, the maximum number of rows is unlimited, and the maximum number of columns in a row is 1600.

    More important than just being able to store a terabyte is being able to do so reliably and quickly. No database server is going to toss around 150 gigs at a time quickly, but a poorly designed database will have many more problems than a well-designed one.

    Are you talking about lots of small rows in disparate tables, or a few VERY large rows in a couple of tables?

    Do you need lots of foreign key constraints, and are you going to do lots of fancy data extraction on it? Or a simple data store? Do you need Full Text Searching? Or are you gonna calculate the location on the globe of all the major cities?

    Knowing exactly what you're going to do with your database makes a big difference in what pointers we can give you.

      Oh, and yes, a Linux box running PHP can talk to database servers on Windows. Generally I use ODBC to do it, and it works fine. You'll need an ODBC package installed on the Linux/PHP box and to use the --with-odbc switch when configuring PHP.
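      Roughly, the PHP side can look something like this (the DSN name, login, table, and column names below are just made-up placeholders; your DSN would point at the SQL Server box):

      <?php
      // Hypothetical ODBC data source, user, and password - substitute your own
      // DSN that points at the MS SQL Server machine.
      $conn = odbc_connect('mssql_dsn', 'dbuser', 'dbpass');
      if (!$conn) {
          die('ODBC connection failed');
      }

      // Run a simple query against a hypothetical table and print the rows.
      $result = odbc_exec($conn, 'SELECT id, name FROM customers');
      while (odbc_fetch_row($result)) {
          echo odbc_result($result, 'id') . ' ' . odbc_result($result, 'name') . "\n";
      }

      odbc_close($conn);
      ?>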

        For the most part there are a couple dozen tables that will each have tens if not hundreds of thousands of rows at some point. Estimates show we can hit several gigs of data pretty quickly.

        The only "complex" thing would be some BLOB fields we have to store and retrieve information from.

        The complex work really happens in the PHP layer, which pulls the data where needed; mysql is really just serving up the data. No major calculations or anything, mainly reading and writing data.

        If it helps any, all this data can be quickly narrowed down based on an ID. We're talking from a million rows to maybe a couple thousand. If we narrow that chunk right off the bat, wouldn't that help a great deal?
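        To show what I mean, roughly (the connection details, table, and column names here are made up): we'd put an index on that ID and always filter on it first, so the server never has to touch the other million rows.

        <?php
        // Hypothetical connection details, table, and column names.
        $conn = mysql_connect('localhost', 'dbuser', 'dbpass');
        mysql_select_db('ourdb', $conn);

        // One-time: an index on the ID column lets the server narrow a million
        // rows down to the couple thousand we care about without a full scan.
        mysql_query('CREATE INDEX idx_account ON widget_data (account_id)', $conn);

        // Every query then filters on that ID first.
        $result = mysql_query('SELECT * FROM widget_data WHERE account_id = 42', $conn);
        while ($row = mysql_fetch_assoc($result)) {
            // work with the narrowed-down rows in PHP
        }
        ?>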

          I thought I read about people having trouble with postgres around 5GB or so. And wouldn't the Linux limitation of 2-4GB have the same effect on postgres as it does on mysql? That's where I'm confused.

          It seems mysql doesn't have a limit itself, but it does when running under Linux. Something about a 2GB file size limit or something.

          It's clear as mud at this point 😕

            I think you have some old information about postgresql AND linux. The 2 gig file size limit has gone away in kernels after 2.2. And postgresql itself has NO limit on the size of the database - the size is usually constrained by other factors.

              Originally posted by ultraslacker
              I think you have some old information about postgresql AND linux. The 2 gig file size limit has gone away in kernels after 2.2. And postgresql itself has NO limit on the size of the database - the size is usually constrained by other factors.

              Here's where I read about mysql:
              http://www.mysql.com/doc/en/Table_size.html


              MySQL Version 3.22 has a 4G limit on table size. With the new MyISAM table type in MySQL Version 3.23, the maximum table size is pushed up to 8 million terabytes (2 ^ 63 bytes).

              Note, however, that operating systems have their own file-size limits. Here are some examples:

              Operating System        File-Size Limit
              Linux-Intel 32-bit      2G, 4G or more, depends on Linux version

              So wouldn't Linux have the same trouble with large postgresql data tables?

              We're running Redhat 7.2

                The 2GB limit is a problem with some old versions of Linux, but all current versions handle much larger file sizes. The constraint is on a table, not on the whole database: the database stores each table as a file, so the table size is restricted to the maximum file size.

                There are lots of ways to work around this, like using InnoDB and splitting its tablespace across several files, but it is no longer an issue on Linux.
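                For example, something along these lines in my.cnf keeps every InnoDB data file under the old 2GB ceiling while the tablespace as a whole keeps growing (the paths, file names, and sizes are just an illustration):

                [mysqld]
                innodb_data_home_dir = /var/lib/mysql/
                # Four fixed 1900M files, so no single file on disk ever has to
                # pass the 2GB mark; add more files as the data grows.
                innodb_data_file_path = ibdata1:1900M;ibdata2:1900M;ibdata3:1900M;ibdata4:1900M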

                Do a quick search on google and I'm sure you will find out what the max file size is for your flavour of linux. It is certainly in the Tb realm but I'm not sure how large.

                Hope this helps 😉

                  Yep, there's your problem - the mysql man pages don't contain up-to-date information, especially about postgresql.

                    Outstanding... that had me pretty worried.

                    Ok, one more newbie question: I see that with Red Hat/Linux ext2/ext3 the maximum file size is in the terabytes, but how can I find out whether our server is using ext2 or ext3?

                    I really appreciate the help guys, this will be a huge weight off my shoulders if this isn't a concern. 😉

                      Wait, I think I found it; the command is df -T

                      It says we're running ext2. Does this mean I can go above the 2GB limit?

                        It depends on the kernel, but ext2 can support up to 8TB I think.

                          Originally posted by nnichols
                          It depends on the kernel, but ext2 can support up to 8TB I think.

                          Even on Red Hat's site and newsgroups I'm having trouble finding a clear answer. As long as I can confirm it's in the terabyte range, I'll be fine.

                            The Linux 2.4.x kernel allows file sizes of "about" 4TB

                              Originally posted by nnichols
                              The Linux 2.4.x kernel allows file sizes of "about" 4TB

                              Ugh, it appears it's 2.2.19-6.2.1

                              at least that's what 'uname -a' showed me

                                Depending on what flavour of Linux you are running, there are RPMs you can download to patch the 2.2 kernel. You'll have to try a support site for your specific flavour.

                                Good luck

                                  It depends on your OS, but generally they can all support more than you want to store. In practice, of course, you'll hit limits lower than the theoretical max. For example, MySQL stores each table in a single OS-level file. Some versions of Linux (the older 2.2 kernels) could not create files larger than 2GB, so your tables were limited to 2GB on those OSes. The limit has been removed in 2.4 and later kernels.

                                    Yeah we're still running RH 7.2 (with 2.2 kernel) so we probably need to upgrade.

                                      Just to clarify:

                                      Postgresql splits ALL of its files at about 2 gigs, so a table might appear like this in the base/oid directory it lives in:

                                      1023234
                                      1023234.1
                                      1023234.2

                                      Where each of those files would be ~2 gigs.

                                      Postgresql can AND DOES handle terabytes. Look through the performance mailing list for folks doing just that.

                                      I'd suggest building a test box with both mysql and postgresql on it and seeing how they handle some arbitrary test data set you create for it.
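                                      Even a throwaway script like this is enough to get a feel for load and query times (connection details, table layout, and row counts are just placeholders); the same idea works against postgresql through the pg_* functions:

                                      <?php
                                      // Hypothetical test-data generator: fills a scratch table with a couple
                                      // hundred thousand rows so you can time loads and queries.
                                      $conn = mysql_connect('localhost', 'testuser', 'testpass');
                                      mysql_select_db('testdb', $conn);

                                      mysql_query('CREATE TABLE test_data (
                                                       id INT NOT NULL,
                                                       account_id INT NOT NULL,
                                                       payload BLOB
                                                   )', $conn);

                                      for ($i = 0; $i < 200000; $i++) {
                                          $account = rand(1, 500);
                                          $payload = addslashes(str_repeat('x', 1024)); // ~1K dummy blob
                                          mysql_query("INSERT INTO test_data (id, account_id, payload)
                                                       VALUES ($i, $account, '$payload')", $conn);
                                      }
                                      ?>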

                                        Originally posted by Sxooter
                                        Just to clarify:

                                        Postgresql splits ALL of its files at about 2 gigs

                                        I'd suggest building a test box with both mysql and postgresql on it and seeing how they handle some arbitrary test data set you create for it.

                                        Very cool. Yeah, I plan on doing that.