Re: Reducing relation locking overhead - Mailing list pgsql-hackers

From Jim C. Nasby
Subject Re: Reducing relation locking overhead
Msg-id 20051208060459.GJ16053@nasby.net
In response to Re: Reducing relation locking overhead  (Greg Stark <gsstark@mit.edu>)
List pgsql-hackers
On Fri, Dec 02, 2005 at 03:25:58PM -0500, Greg Stark wrote:
> Postgres would have no trouble building an index of the existing data using
> only shared locks. The problem is that any newly inserted (or updated) records
> could be missing from such an index.
> 
> To do it you would then have to gather up all those newly inserted records.
> And of course while you're doing that new records could be inserted. And so
> on. There's no guarantee it would ever finish, though I suppose you could
> detect the situation if the size of the new batch wasn't converging to 0 and
> throw an error.

Why throw an error? Just grab a lock that would prevent any new inserts
from occurring. Or at least make that an option.
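
To make that concrete, here's a toy standalone C sketch of the catch-up
loop with that escalation bolted on. gather_new_tuples() and the numbers
are made up for illustration; none of this is the actual backend API,
only the control flow is the point:

    /*
     * Sketch of the catch-up loop: index the existing rows, then
     * repeatedly gather rows inserted while the previous pass ran.
     * If the batches stop shrinking, block inserts and finish.
     */
    #include <stdio.h>

    static long
    gather_new_tuples(long prev_batch)
    {
        /* Pretend each pass runs in a tenth of the time of the
         * previous one, so it sees a tenth as many new inserts. */
        return prev_batch / 10;
    }

    int
    main(void)
    {
        long batch = 1000000;   /* rows indexed by the initial scan */
        int  pass = 0;
        const int max_passes = 10;

        while (batch > 0)
        {
            batch = gather_new_tuples(batch);
            printf("pass %d: %ld newly inserted rows to index\n",
                   ++pass, batch);

            if (pass >= max_passes)
            {
                /*
                 * Batch size isn't converging to zero.  Instead of
                 * throwing an error, take a lock that blocks further
                 * inserts and make one final, bounded pass.
                 */
                fprintf(stderr, "not converging; blocking inserts "
                        "for a final pass\n");
                batch = 0;      /* final pass under the stronger lock */
            }
        }
        return 0;
    }

In the real thing the final pass could just take ShareLock on the
relation: that conflicts with the RowExclusiveLock inserts acquire, so
the last batch is bounded and the loop is guaranteed to terminate.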
-- 
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

