Thread: scan_recycle_buffers
Patch to implement buffer cache recycling for scans, as being discussed on pgsql-hackers. Applies cleanly to cvstip; passes make installcheck when used by default for all SeqScans. Tested with scan_recycle_buffers = 1, 4, 8, 16.

Should be regarded as WIP. Presumably there are some failure conditions that require the buffer to be reset; these have not yet been considered. No docs.

SET scan_recycle_buffers = N
default = 0
8 <= N <= 64 would yield benefits, according to earlier results.

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
Attachment
Simon Riggs wrote:
> Patch to implement buffer cache recycling for scans, as being discussed
> on pgsql-hackers.

A few questions come to mind:

How does it behave with Jeff's synchronized seq scans patch?

I wonder if calling RelationGetNumberOfBlocks on every seq scan becomes a performance issue for tiny tables with, for example, just one page. It performs an lseek, which isn't free.

What happens if multiple backends choose overlapping sets of buffers to recycle?

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Fri, 2007-03-09 at 20:08 +0000, Heikki Linnakangas wrote:
> Simon Riggs wrote:
> > Patch to implement buffer cache recycling for scans, as being discussed
> > on pgsql-hackers.
>
> A few questions come to mind:

Good questions. I don't expect this will go through easily, so we need to examine these thoughts thoroughly.

> How does it behave with Jeff's synchronized seq scans patch?

I've offered Jeff lots of support throughout that patch's development and it's a feature I'd like to see. The current synch scan patch relies upon the cache spoiling effect to gain its benefit. I think that can be tightened up, so that we can make both work. Currently synch scans help DSS apps but not OLTP. This patch reduces the negative effects of VACUUM on OLTP workloads, as well as helping DSS.

> I wonder if calling RelationGetNumberOfBlocks on every seq scan becomes
> a performance issue for tiny tables with for example just 1 page. It
> performs an lseek, which isn't free.

Jeff's patch does this also, for similar reasons.

> What happens if multiple backends choose overlapping sets of buffers to
> recycle?

They won't. If a buffer is pinned, it will fall out of the list of buffers being recycled and not be reused. So they will each tend towards a unique list of buffers.

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
Heikki Linnakangas <heikki@enterprisedb.com> writes:
> I wonder if calling RelationGetNumberOfBlocks on every seq scan becomes
> a performance issue for tiny tables with for example just 1 page. It
> performs an lseek, which isn't free.

We do that anyway; but certainly Simon's patch ought not be injecting an additional one.

			regards, tom lane
On Fri, 2007-03-09 at 16:45 -0500, Tom Lane wrote:
> Heikki Linnakangas <heikki@enterprisedb.com> writes:
> > I wonder if calling RelationGetNumberOfBlocks on every seq scan becomes
> > a performance issue for tiny tables with for example just 1 page. It
> > performs an lseek, which isn't free.
>
> We do that anyway; but certainly Simon's patch ought not be injecting
> an additional one.

It should be possible to pass that down from the planner to the executor, in certain cases. Or at least pass down the possibility that such a check might be worthwhile.

Another approach might be to make the call after the first ~10 I/Os on a SeqScan, after which an lseek will be just noise. That way an all-in-cache scan would never need it at all. That's easy to arrange because the hint is invoked from the exec nodes themselves.

We probably need to get some measurements for the main benefit of the patch before we look further into those thoughts.

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
"Simon Riggs" <simon@2ndquadrant.com> writes:
> On Fri, 2007-03-09 at 16:45 -0500, Tom Lane wrote:
>> We do that anyway; but certainly Simon's patch ought not be injecting
>> an additional one.

> It should be possible to pass that down from the planner to the
> executor, in certain cases.

Huh? See HeapScanDesc->rs_nblocks.

			regards, tom lane
On Fri, 2007-03-09 at 20:37 +0000, Simon Riggs wrote:
> > I wonder if calling RelationGetNumberOfBlocks on every seq scan becomes
> > a performance issue for tiny tables with for example just 1 page. It
> > performs an lseek, which isn't free.
>
> Jeff's patch does this also, for similar reasons.

As Tom pointed out, the value is already in memory by the time it gets to my code. My code just reads that value from memory.

Regards,
	Jeff Davis
On Fri, 2007-03-09 at 20:08 +0000, Heikki Linnakangas wrote:
> Simon Riggs wrote:
> > Patch to implement buffer cache recycling for scans, as being discussed
> > on pgsql-hackers.
>
> A few questions come to mind:
>
> How does it behave with Jeff's synchronized seq scans patch?

I will test it and post my results. I would expect that the CPU usage will increase, but it might not make a big difference in the overall cache hit rate if you count OS buffer cache hits.

Regards,
	Jeff Davis
On Fri, 2007-03-09 at 18:05 -0500, Tom Lane wrote:
> "Simon Riggs" <simon@2ndquadrant.com> writes:
> > On Fri, 2007-03-09 at 16:45 -0500, Tom Lane wrote:
> >> We do that anyway; but certainly Simon's patch ought not be injecting
> >> an additional one.
>
> > It should be possible to pass that down from the planner to the
> > executor, in certain cases.
>
> Huh? See HeapScanDesc->rs_nblocks.

Many thanks.

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
On Sat, 2007-03-10 at 07:59 +0000, Simon Riggs wrote:
> On Fri, 2007-03-09 at 18:05 -0500, Tom Lane wrote:
> > "Simon Riggs" <simon@2ndquadrant.com> writes:
> > > On Fri, 2007-03-09 at 16:45 -0500, Tom Lane wrote:
> > >> We do that anyway; but certainly Simon's patch ought not be injecting
> > >> an additional one.
> >
> > > It should be possible to pass that down from the planner to the
> > > executor, in certain cases.
> >
> > Huh? See HeapScanDesc->rs_nblocks.
>
> Many thanks.

New patch enclosed, implementation as you've requested.

Not ready to apply yet, but good for testing.

The COPY command now also uses this hint, to allow test results and discussion. Other commands could use it too, perhaps with different values.

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
Attachment
Simon Riggs wrote:
> New patch enclosed, implementation as you've requested.
>
> Not ready to apply yet, but good for testing.

A quick test using the setup for the "Buffer cache is not scan resistant" thread:

Firstly, vanilla 8.3 from 20070310:

Shared Buffers  Elapsed  vmstat IO rate
--------------  -------  --------------
400MB           101 s    122 MB/s
128KB            79 s    155 MB/s [1]

Now apply cycle scan v2:

Shared Buffers  scan_recycle_buffers  Elapsed  vmstat IO rate
--------------  --------------------  -------  --------------
400MB             0                   101 s    122 MB/s
400MB             8                    78 s    155 MB/s
400MB            16                    77 s    155 MB/s
400MB            32                    78 s    155 MB/s
400MB            64                    82 s    148 MB/s
400MB           128                    93 s    128 MB/s

Certainly seems to have the desired effect!

Cheers

Mark

[1] I'm not seeing 166 MB/s like the previous 8.2.3 data; however, the 8.3 PGDATA is located further toward the end of the disk array, which I suspect is limiting the IO rate a little.
On Sat, 2007-03-10 at 23:26 +1300, Mark Kirkwood wrote:
> Simon Riggs wrote:
> > New patch enclosed, implementation as you've requested.
> >
> > Not ready to apply yet, but good for testing.
>
> A quick test using the setup for the "Buffer cache is not scan resistant"
> thread:
>
> Firstly, vanilla 8.3 from 20070310:
>
> Shared Buffers  Elapsed  vmstat IO rate
> --------------  -------  --------------
> 400MB           101 s    122 MB/s
> 128KB            79 s    155 MB/s [1]
>
> Now apply cycle scan v2:
>
> Shared Buffers  scan_recycle_buffers  Elapsed  vmstat IO rate
> --------------  --------------------  -------  --------------
> 400MB             0                   101 s    122 MB/s
> 400MB             8                    78 s    155 MB/s
> 400MB            16                    77 s    155 MB/s
> 400MB            32                    78 s    155 MB/s
> 400MB            64                    82 s    148 MB/s
> 400MB           128                    93 s    128 MB/s
>
> Certainly seems to have the desired effect!
>
> [1] I'm not seeing 166 MB/s like the previous 8.2.3 data; however, the
> 8.3 PGDATA is located further toward the end of the disk array, which I
> suspect is limiting the IO rate a little.

That's good news; thanks very much for testing that.

Before we can claim success, we need a few more tests on VACUUM, COPY, and a null test case, to show the patch doesn't affect typical workloads except to improve vacuuming. I'll see if we can arrange those at EDB on a reasonable size system.

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
Where is the final version of this patch? What patches are stuck in the patch moderator queue?

---------------------------------------------------------------------------

Simon Riggs wrote:
> On Sat, 2007-03-10 at 07:59 +0000, Simon Riggs wrote:
> > On Fri, 2007-03-09 at 18:05 -0500, Tom Lane wrote:
> > > "Simon Riggs" <simon@2ndquadrant.com> writes:
> > > > On Fri, 2007-03-09 at 16:45 -0500, Tom Lane wrote:
> > > >> We do that anyway; but certainly Simon's patch ought not be injecting
> > > >> an additional one.
> > >
> > > > It should be possible to pass that down from the planner to the
> > > > executor, in certain cases.
> > >
> > > Huh? See HeapScanDesc->rs_nblocks.
> >
> > Many thanks.
>
> New patch enclosed, implementation as you've requested.
>
> Not ready to apply yet, but good for testing.
>
> COPY command now also uses this hint, to allow test results and
> discussion. Others could also, perhaps needing different values.
>
> --
> Simon Riggs
> EnterpriseDB http://www.enterprisedb.com

[ Attachment, skipping... ]

> ---------------------------(end of broadcast)---------------------------
> TIP 6: explain analyze is your friend

--
Bruce Momjian  <bruce@momjian.us>  http://momjian.us
EnterpriseDB    http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +
On Mon, 2007-04-02 at 19:10 -0400, Bruce Momjian wrote:
> Where is the final version of this patch? What patches are stuck in the
> patch moderator queue?

We already discussed the dependency that exists with this patch, and you accepted that.

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
Simon Riggs wrote:
> On Mon, 2007-04-02 at 19:10 -0400, Bruce Momjian wrote:
> > Where is the final version of this patch? What patches are stuck in the
> > patch moderator queue?
>
> We already discussed the dependency that exists with this patch and you
> accepted that.

Oh, that was the patch. I forgot. I am getting confused over which patches are finished by the authors, and which are on hold because of merge issues or open community discussion issues.

Rather than ask if patches are "completed", I think "finished" is a better word, meaning the author has finished working on it, and it is now up to the community on how to proceed.

--
Bruce Momjian  <bruce@momjian.us>  http://momjian.us
EnterpriseDB    http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +