Dealing with very large datasets
phpGrid can handle VERY LARGE database tables and still perform well. Just make sure to disable the page count by calling enable_pagecount(false). See the demo link below for more information.
Demo: Loading from a Large Database Table with 3 Million Records
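For reference, here is a minimal sketch of that setting in action. It follows the usual boilerplate from the phpGrid demos; the orders table and its orderNumber primary key are stand-ins for your own schema.

<?php
require_once("phpGrid/conf.php");

// Grid over a very large table; keep the SELECT simple
$dg = new C_DataGrid("SELECT * FROM orders", "orderNumber", "orders");

// Skip the SELECT COUNT(*) used to compute the total page count
$dg->enable_pagecount(false);

$dg->display();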
In general, to improve the overall performance of very large database tables, it is advisable to:
- Use an INT or BIGINT data type for the key rather than, say, a GUID
- Partition the table; partitioning can reduce the size of indexes, effectively reducing the table size
- Add more memory; even an SSD drive can work wonders!
- Remove any unnecessary indexes on the table
- Avoid SELECT COUNT(*) and always include a query filter (a WHERE clause), as in the sketch below
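To illustrate the last point, a small sketch of a filtered grid; the orders table, order_date column, and cutoff date are hypothetical:

<?php
require_once("phpGrid/conf.php");

// Bake the filter into the SELECT so the grid never scans the whole table
$dg = new C_DataGrid(
    "SELECT * FROM orders WHERE order_date >= '2023-01-01'",
    "orderNumber",
    "orders"
);
$dg->enable_pagecount(false); // still skip the page count
$dg->display();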
But wait! I have BILLIONS of records!
First of all, congratulations on becoming another Google or Amazon! You must be doing very well!
For very, very, very (yeah, 3x Very) large database tables, besides calling enable_pagecount(false) and following the optimization tips above, we also recommend adding a sequence number column that tracks each row's position, so that a page can be selected by its beginning and ending row numbers.
e.g.

SELECT *
FROM my_table
WHERE seq > 100 AND seq < 200;
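In application code, the windowed query might be driven like this. A minimal sketch, not part of phpGrid: it assumes PDO against MySQL, a my_table whose indexed seq column is contiguous (no gaps), and hypothetical connection credentials.

<?php
// Fetch one page by seq range instead of LIMIT ... OFFSET,
// so MySQL can seek the index straight to the start of the page.
function fetch_page(PDO $pdo, int $page, int $page_size = 100): array
{
    $first = ($page - 1) * $page_size; // seq of the row just before the page
    $stmt = $pdo->prepare(
        "SELECT * FROM my_table WHERE seq > :first AND seq <= :last ORDER BY seq"
    );
    $stmt->execute([":first" => $first, ":last" => $first + $page_size]);
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}

$pdo = new PDO("mysql:host=localhost;dbname=mydb", "user", "password");
$rows = fetch_page($pdo, 2); // rows with seq 101..200

The half-open range (seq > :first AND seq <= :last) returns exactly one full page per request as long as seq stays gap-free.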
Some excellent posts/articles on this issue:
Using sequence:
http://stackoverflow.com/questions/1243952/how-can-i-speed-up-a-mysql-query-with-a-large-offset-in-the-limit-clause
Using the InnoDB engine:
http://stackoverflow.com/questions/5483489/how-to-increase-performance-of-limit-1-when-the-is-a-huge-number
Using “late row lookup”:
http://explainextended.com/2009/10/23/mysql-order-by-limit-performance-late-row-lookups/
Happy gridding!