Re: profiling connection overhead - Mailing list pgsql-hackers

From: Andres Freund
Subject: Re: profiling connection overhead
Date: 2010-11-24 22:06
Msg-id: 201011242206.12964.andres@anarazel.de
In response to: Re: profiling connection overhead  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On Wednesday 24 November 2010 21:54:53 Robert Haas wrote:
> On Wed, Nov 24, 2010 at 3:53 PM, Andres Freund <andres@anarazel.de> wrote:
> > On Wednesday 24 November 2010 21:47:32 Robert Haas wrote:
> >> On Wed, Nov 24, 2010 at 3:14 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> >> > Robert Haas <robertmhaas@gmail.com> writes:
> >> >> Full results, and call graph, attached.  The first obvious fact is
> >> >> that most of the memset overhead appears to be coming from
> >> >> InitCatCache.
> >> > 
> >> > AFAICT that must be the palloc0 calls that are zeroing out (mostly)
> >> > the hash bucket headers.  I don't see any real way to make that
> >> > cheaper other than to cut the initial sizes of the hash tables (and
> >> > add support for expanding them later, which is lacking in catcache
> >> > ATM).  Not convinced that that creates any net savings --- it might
> >> > just save some cycles at startup in exchange for more cycles later,
> >> > in typical backend usage.
> >> > 
> >> > Making those hashtables expansible wouldn't be a bad thing in itself,
> >> > mind you.
> >> 
> >> The idea I had was to go the other way and say, hey, if these hash
> >> tables can't be expanded anyway, let's put them on the BSS instead of
> >> heap-allocating them.  Any new pages we request from the OS will
> >> already be zeroed, but with palloc we have to re-zero the allocated
> >> block anyway, because palloc can return memory that's been used,
> >> freed, and reused.  However, for anything that only needs to be
> >> allocated once and never freed, and whose size can be known at compile
> >> time, that's not an issue.
> >> 
> >> In fact, it wouldn't be that hard to relax the "known at compile time"
> >> constraint either.  We could just declare:
> >> 
> >> char lotsa_zero_bytes[NUM_ZERO_BYTES_WE_NEED];
> >> 
> >> ...and then peel off chunks.
> > 
> > Won't this just cause loads of additional page faults after fork(), when
> > those pages are used for the first time and then a second time when
> > first written to (to copy them)?
> 
> Aren't we incurring those page faults anyway, for whatever memory
> palloc is handing out?  The heap is no different from the BSS; we just
> move the pointer with sbrk().
Yes, but only once.  Also, scrubbing a page is faster than copying it... (and 
there were patches floating around to do that zeroing in advance; I'm not sure 
whether they ever got integrated into mainline Linux.)
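
For what it's worth, a minimal sketch of the quoted "peel off chunks" idea
could look like the following (the arena size and the names zero_arena /
zero_arena_alloc are made up for illustration; a real patch would presumably
use MAXALIGN and size the arena from the catcache's actual needs):

#include <stddef.h>

/* Made-up budget for startup-lifetime allocations. */
#define ZERO_ARENA_SIZE (64 * 1024)

/* Lives in BSS, so the kernel hands it to us already zeroed. */
static char zero_arena[ZERO_ARENA_SIZE];
static size_t zero_arena_used = 0;

/*
 * Peel a pre-zeroed, pointer-aligned chunk off the arena.  Chunks are
 * never freed, so this only suits allocations that live as long as the
 * backend does.  Returns NULL when the arena is exhausted; the caller
 * would fall back to palloc0() in that case.
 */
static void *
zero_arena_alloc(size_t size)
{
    size_t aligned = (zero_arena_used + sizeof(void *) - 1)
                     & ~(sizeof(void *) - 1);

    if (aligned > ZERO_ARENA_SIZE || size > ZERO_ARENA_SIZE - aligned)
        return NULL;

    zero_arena_used = aligned + size;
    return zero_arena + aligned;
}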
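
To put a rough number on the double-fault concern, a standalone harness along
these lines (not PostgreSQL code; the fault accounting described is Linux
behavior) counts the minor faults a child takes when it first reads and then
writes untouched BSS pages after fork():

#include <stddef.h>
#include <stdio.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

#define ARENA_SIZE (16 * 1024 * 1024)
#define PAGE 4096

/* In BSS; the parent (standing in for the postmaster) never touches it. */
static char arena[ARENA_SIZE];

static long
minor_faults(void)
{
    struct rusage ru;

    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int
main(void)
{
    if (fork() == 0)
    {
        volatile char sink = 0;
        long    base = minor_faults();
        long    after_read, after_write;
        size_t  i;

        /* Fault #1 per page: the read maps the shared zero page. */
        for (i = 0; i < ARENA_SIZE; i += PAGE)
            sink += arena[i];
        after_read = minor_faults();

        /* Fault #2 per page: the first write breaks COW on the zero
         * page and hands us a private copy. */
        for (i = 0; i < ARENA_SIZE; i += PAGE)
            arena[i] = 1;
        after_write = minor_faults();

        printf("faults on first read: %ld, extra faults on first write: %ld\n",
               after_read - base, after_write - after_read);
        _exit(0);
    }
    wait(NULL);
    return 0;
}

Pages that are written first (or whose backing pages palloc recycles) fault
only once; the read-then-write pattern pays twice.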

Andres

