Thread: using shared_buffers during seq_scan
Hi All!
How can I optimize seq_scan on big tables?
Thanks!
Parallel sequential scan is coming in 9.6 -- http://rhaas.blogspot.com/2015/11/parallel-sequential-scan-is-committed.html
And there is the GPU extension - https://wiki.postgresql.org/wiki/PGStrom
If those aren't options, you'll want as much of your table in memory as possible so your scan doesn't have to go to disk.
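One way to warm the cache ahead of time is the pg_prewarm contrib module (available since PostgreSQL 9.4). A minimal sketch, assuming the table is called big_table (a placeholder name):

```sql
-- pg_prewarm ships as a contrib module; enable it once per database
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- read the whole table into shared_buffers ('big_table' is a placeholder)
SELECT pg_prewarm('big_table');
```

The function returns the number of blocks it read, so you can compare that against the table's size in 8 KB pages.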
On Thu, Mar 17, 2016 at 5:57 AM, Artem Tomyuk <admin@leboutique.com> wrote:
> Hi All!
> Is Postgres use shared_buffers during seq_scan?
> In what way i can optimize seq_scan on big tables?
> Thanks!
Artem Tomyuk wrote:
> Is Postgres use shared_buffers during seq_scan?
> In what way i can optimize seq_scan on big tables?

If the estimated table size is less than a quarter of shared_buffers, the whole table will be read into the shared buffers during a sequential scan. If the table is larger than that, it is scanned using a ring buffer of 256 KB inside the shared buffers, so only 256 KB of the table ends up in cache.

You can speed up all scans after the first one by having lots of RAM. Even if you cannot set shared_buffers to four times the table size, you can profit from having a large operating system cache.

Yours,
Laurenz Albe
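The quarter-of-shared_buffers rule above is easy to check against your own table. A quick sketch, assuming a table named big_table (a placeholder):

```sql
-- size of the table itself (heap, excluding indexes)
SELECT pg_size_pretty(pg_table_size('big_table'));

-- the configured buffer pool, for comparison
SHOW shared_buffers;
```

If pg_table_size reports less than a quarter of shared_buffers, a sequential scan will populate the shared buffers with the whole table; otherwise it will cycle through the small ring buffer and rely on the OS cache instead.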