I am working with an MS SQL 2005 database and have a page where users can see the results of their search. Using T-SQL, I have figured out how to return only a page of data at a time, how to request the next page, and so on. My table presentation includes clickable column headers that re-query the table with a new ORDER BY to get the new sort order.

It all works fairly well, but it is certainly database-heavy. The problem comes when a user runs a complicated (long-running) query that only returns a dozen rows or so. When they click a column header to re-sort the results, that whole long query runs against the database again. The delay is frustrating for the user, who reasonably assumes I only needed to sort a dozen rows.

Is there a way to cache a result set in PHP 5 that would let me sort and re-sort (and change the number of rows per page, etc.) much more quickly than I am doing now?

If so, is there an online article or similar resource that would help me gain familiarity with this area of PHP?

Also, how do I handle the case where the query returns thousands of rows? In my current setup the database only returns n rows at a time (default = 15). If I use some kind of result-set cache, will I hurt myself waiting for the initial load of the whole set?

Another approach might be to create a temporary table with the query results and then use my current method of asking for only a page of rows at a time. Might that be the best way?

Or is there a better way?

    I don't know if it's true for MS SQL 2005, but MySQL has a query cache. The queries you are running are probably different each time, so caching them as-is might not help. If you were to remove the paging- and sorting-related bits of your query and just grab every record in the data set, though, the query text would not change as the user paged or sorted, and PHP would repeatedly send the database the same exact query -- meaning, in MySQL at least, that its results would be served from the query cache as long as the result set did not exceed the cache size (which is configurable in the MySQL settings). PHP would then handle the sorting and paging itself, so you would need to write your own routines; there are a variety of array-sorting functions like asort(), usort(), etc. Something like the sketch below.
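    Here's a minimal sketch of that PHP-side sorting and paging, assuming $rows is an array of associative arrays (e.g. built by looping over mssql_fetch_assoc()); the field names are hypothetical:

    <?php
    // Sort the whole cached result set on one field, then slice out a page.
    $sortField = 'last_name'; // set these from the clicked column header
    $sortAsc   = false;

    function cmpRows($a, $b)
    {
        global $sortField, $sortAsc; // closures need PHP 5.3+, so use globals
        $cmp = strcmp((string)$a[$sortField], (string)$b[$sortField]);
        return $sortAsc ? $cmp : -$cmp;
    }

    usort($rows, 'cmpRows');

    $perPage    = 15;
    $page       = 2; // requested page number
    $pageOfRows = array_slice($rows, ($page - 1) * $perPage, $perPage);
    ?>

    (strcmp() gives a string sort; swap in a numeric comparison for numeric columns.)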

    Another possibility is similar to the one I just described, except that your script would cache the query results itself, either in the session or in a file. Personally, I don't know whether this would be any faster than just relying on the database.
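    For the session flavour, something like this would work -- getCachedResults() and fetch_all_rows() are hypothetical names; the latter stands in for your existing mssql_query()/mssql_fetch_assoc() loop:

    <?php
    session_start();

    // Cache the full result set in the session, keyed by a hash of the
    // query text, so re-sorts and re-pages never touch the database.
    function getCachedResults($sql)
    {
        $key = 'resultcache_' . md5($sql);
        if (!isset($_SESSION[$key])) {
            $_SESSION[$key] = fetch_all_rows($sql); // database hit happens once
        }
        return $_SESSION[$key];
    }

    $rows = getCachedResults($searchSql);
    ?>

    Just keep an eye on session size if the query can return thousands of rows.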

    A temporary table sounds like a pretty good way to go, but only if these queries are genuinely slow. Database systems are pretty smart about caching what they need and handling data that must be sorted; if you write something yourself, you could easily end up with something that does a worse job. It all depends on the queries you are running.
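    One caveat with the temporary-table route: a local #temp table in SQL Server lives only as long as the connection, and PHP normally reconnects on every page load, so in practice you'd want an ordinary "scratch" table keyed by the session ID. A rough sketch (the table and column names are hypothetical):

    <?php
    session_start();
    $sid = session_id();

    // Run the expensive query once, materializing the matching rows:
    mssql_query("
        INSERT INTO search_cache (session_id, col1, col2, col3)
        SELECT '$sid', col1, col2, col3
        FROM ...  -- the original expensive query goes here
    ");

    // Later page/sort requests hit only the small cached set, using the
    // same ROW_NUMBER() paging you already have:
    $result = mssql_query("
        SELECT col1, col2, col3
        FROM (
            SELECT col1, col2, col3,
                   ROW_NUMBER() OVER (ORDER BY col2 DESC) AS rn
            FROM search_cache
            WHERE session_id = '$sid'
        ) AS t
        WHERE rn BETWEEN 16 AND 30  -- page 2 at 15 rows per page
    ");
    ?>

    You'd also need a cleanup step, e.g. deleting the session's rows when a new search runs or the session ends.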

      I'm presuming all the relevant columns are indexed, and that you're using prepared statements so that SQL Server can cache the execution plans (SQL Server caches those on a per-application basis)? See its documentation for details about prepared statements and query tuning.
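      In case it helps, here's a sketch of a parameterized query through PDO (bundled with PHP 5.1+) -- the ODBC DSN and the table/column names are assumptions, and PDO_DBLIB is another driver option:

      <?php
      $pdo = new PDO('odbc:Driver={SQL Server};Server=myserver;Database=mydb',
                     'user', 'pass');

      // Prepare once; SQL Server can then reuse the execution plan.
      $stmt = $pdo->prepare('SELECT col1, col2 FROM mytable WHERE customer_id = ?');
      $stmt->execute(array($customerId)); // only the parameter changes
      $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
      ?>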

        Thank you for your help. If I put my results into an array, I figure it would have to be an array of arrays to handle the 6 fields of each row. I wasn't sure how easy it would be to then sort and page it all. How would I sort it, and are there any special functions that would help with pagination?

          Weedpacket;10902378 wrote:

          I'm presuming all the relevant columns are indexed...

          It's the query that uses XPath to search the XML field that slows things down, and I'm not sure I can index an XML field. I know I still need to do all the database tuning for this application.

          My main thought is to take stuff like sorting and pagination out of the database end of things, my thinking being that this would scale better.

            tcarnes wrote:

            It's the query that is using XPATH to search the xml field that slows things up, and I'm not sure I can index an xml field.

            Add an additional column to the table that contains the XML; when inserting a new record, populate it with the result of the XPath query. Then search/sort on that new column. I forget exactly what SQL Server does with large fields - whether it keeps them in the table or moves them out of band. In the latter case the search/sort would never even need to load the XML itself.
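            A rough sketch of that, assuming $db is a PDO connection and using the xml type's value() method -- the names and the '/doc/title' path are hypothetical stand-ins, and the statements run separately because SQL Server can't reference a column in the same batch that adds it:

            <?php
            // Add a plain, indexable column alongside the XML.
            $db->exec("ALTER TABLE mytable ADD title_text NVARCHAR(200)");

            // Backfill existing rows from the XML.
            $db->exec("UPDATE mytable
                       SET title_text = xml_field.value('(/doc/title)[1]', 'NVARCHAR(200)')");

            $db->exec("CREATE INDEX ix_title ON mytable (title_text)");
            ?>

            New inserts would then populate title_text directly (or via a trigger), and searches/sorts hit the indexed column instead of the XML.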

            My main thought is to take stuff like sorting and pagination out of the database end of things, my thinking being that this would scale better.

            Read the entire result set into PHP and sort/limit it there? I suspect that wouldn't be so good. Consider that you'd still need to do that XPath query.
