Re: postgresql and process titles - Mailing list pgsql-hackers

From Jim C. Nasby
Subject Re: postgresql and process titles
Date
Msg-id 20060614202154.GK34196@pervasive.com
In response to Re: postgresql and process titles  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: postgresql and process titles  (Martijn van Oosterhout <kleptog@svana.org>)
Re: postgresql and process titles  (Greg Stark <gsstark@mit.edu>)
List pgsql-hackers
On Wed, Jun 14, 2006 at 03:51:28PM -0400, Tom Lane wrote:
> Greg Stark <gsstark@mit.edu> writes:
> > Tom Lane <tgl@sss.pgh.pa.us> writes:
> >> This sounds good until you think about locking.  It'd be quite
> >> impractical to implement anything as fine-grained as EXPLAIN ANALYZE
> >> this way, because of the overhead involved in taking and releasing
> >> spinlocks.
> 
> > I'm not entirely convinced. The only other process that would be looking at
> > the information would be the statistics accumulator which would only be waking
> > up every 100ms or so. There would be no contention with other backends
> > reporting their info.
> 
> The numbers I've been looking at lately say that heavy lock traffic is
> expensive, particularly on SMP machines, even with zero contention.
> Seems the cache coherency protocol costs a lot even when it's not doing
> anything...

Are there any ways we could avoid the locking?

One idea would be to keep something akin to a FIFO, where the backend
would append new records instead of updating/overwriting them in
place, and the reader process would only read records that were no
longer at risk of still being written. That would mean the reader
would need to stay at least one record behind the backend, but that's
probably manageable.
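
Very roughly, I'm picturing something like the sketch below
(completely untested, all names and sizes invented, and ignoring the
memory barriers a real implementation would probably need): one
backend writing, one reader consuming, and the reader refusing to
touch the newest record.

#include <stdbool.h>
#include <stdint.h>

#define N_SLOTS 64              /* size of the ring; made-up number */

typedef struct
{
    char    info[128];          /* whatever the backend wants to report */
} ReportRec;

typedef struct
{
    volatile uint32_t write_pos;    /* next slot the backend will fill */
    volatile uint32_t read_pos;     /* next slot the reader will consume */
    ReportRec         slots[N_SLOTS];
} ReportQueue;

/*
 * Reader side: only consume records that are strictly older than the
 * newest one, i.e. always stay at least one record behind the backend,
 * so we never look at a slot that might still be half-written.
 */
static bool
queue_read(ReportQueue *q, ReportRec *out)
{
    uint32_t wpos = q->write_pos;   /* read the shared counter once */

    if (wpos - q->read_pos <= 1)
        return false;               /* nothing that's safely complete */

    *out = q->slots[q->read_pos % N_SLOTS];
    q->read_pos++;
    return true;
}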

The downside is more memory usage, but I think we could limit that by
also keeping track of what record the reader had last read. The backend
would check that, and if the reader was more than X records behind, the
backend would update the most recent record it had written, instead of
writing a new one. That would place an effective limit on how much
memory was consumed.
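
The writer side of the same sketch would then be something like this
(again just made-up C, with MAX_LAG playing the role of X above):

/*
 * Writer side: if the reader has fallen more than MAX_LAG records
 * behind, refresh the newest record in place instead of appending,
 * so the amount of queued data stays bounded.  The reader never
 * touches that slot, since it stays one record behind write_pos.
 * MAX_LAG has to stay comfortably below N_SLOTS.
 */
#define MAX_LAG 16              /* the "X records behind" limit; made up */

static void
queue_write(ReportQueue *q, const ReportRec *rec)
{
    uint32_t rpos = q->read_pos;    /* read the reader's counter once */

    if (q->write_pos - rpos > MAX_LAG)
    {
        /* reader is lagging: overwrite the most recent record */
        q->slots[(q->write_pos - 1) % N_SLOTS] = *rec;
    }
    else
    {
        q->slots[q->write_pos % N_SLOTS] = *rec;
        q->write_pos++;             /* "publish" only after the slot is filled */
    }
}

Presumably you'd still need memory barriers around the counter updates
on weakly-ordered SMP machines, but that should be a lot cheaper than
taking a spinlock for every update.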

But... I have no idea exactly how shared memory works, so maybe this
plan is fundamentally broken. Hopefully there's some way to handle the
locking problems, though, because a separate reader process does sound
like an interesting possibility.
-- 
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

