Re: WIP: generalized index constraints - Mailing list pgsql-hackers

From Greg Stark
Subject Re: WIP: generalized index constraints
Date
Msg-id 407d949e0907060428l47d4e4a3r805159e2443ff178@mail.gmail.com
In response to Re: WIP: generalized index constraints  (Simon Riggs <simon@2ndQuadrant.com>)
List pgsql-hackers
On Mon, Jul 6, 2009 at 11:56 AM, Simon Riggs<simon@2ndquadrant.com> wrote:
> How will you cope with a large COPY? Surely there can be more than one
> concurrent insert from any backend?

He only needs to handle inserts for the period during which they're
actively being inserted into the index. Once a tuple is in the index
he'll find it using the index scan. In other words, this is all a
proxy for the way btree locks index pages while it checks for a
unique-key violation.
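To make the protocol being discussed concrete, here is a minimal sketch (all names are invented for illustration; the real proposal is a shared-memory structure inside the PostgreSQL backend, not Python): each backend advertises the TID of a tuple while it is in flight, checks both the index and the array for conflicts, and leaves the entry in place until its transaction ends, so later inserters find the tuple via the index instead.

```python
# Hypothetical model of the proposed in-memory conflict array.
# Not PostgreSQL code; a sketch of the protocol under discussion.
import threading

class ConflictArray:
    """Stands in for the proposed shared-memory array of in-flight TIDs."""
    def __init__(self):
        self._lock = threading.Lock()
        self._entries = {}          # tid -> key being inserted

    def register(self, tid, key):
        with self._lock:
            self._entries[tid] = key

    def deregister(self, tid):      # called when the transaction ends
        with self._lock:
            self._entries.pop(tid, None)

    def conflicts(self, my_tid, key, overlap):
        with self._lock:
            return [t for t, k in self._entries.items()
                    if t != my_tid and overlap(k, key)]

array = ConflictArray()

def insert_with_constraint(tid, key, index_scan, overlap):
    # 1. advertise the in-flight insertion before touching the index
    array.register(tid, key)
    # 2. check already-indexed tuples via the index scan...
    if any(overlap(k, key) for k in index_scan()):
        array.deregister(tid)
        raise ValueError("constraint violation (found via index)")
    # 3. ...and concurrent in-flight insertions via the array
    if array.conflicts(tid, key, overlap):
        array.deregister(tid)
        raise ValueError("constraint violation (in-flight)")
    # The entry stays until commit/abort; once the tuple is indexed,
    # later inserters see it through the index scan instead.
```

A large COPY fits this model: each row's entry lives in the array only between registration and the completion of its index insertion, which is why per-backend concurrency stays bounded.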

I'm a bit concerned about the use of TIDs: you might have to look at a
lot of heap pages to check for conflicts, though I suppose they're
almost certainly all in shared memory. Also, it sounds like you're
anticipating the possibility of dead entries in the array; if so, you
also need to store the xmin, to protect against a tuple that has been
vacuumed and had its line pointer reused since. But I don't see the
necessity for that anyway, since you can just clean up the entry on
abort.
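The xmin concern can be illustrated with a toy model (names invented for the sketch): a bare TID can go stale if VACUUM removes the tuple and the line pointer is reused for a new one, so a dead-entry-tolerant design would have to store the inserting tuple's xmin and compare it against whatever currently sits at that TID.

```python
# Hypothetical illustration of why a bare TID is not enough if dead
# entries can linger: pair it with xmin and re-check on fetch.

def entry_is_current(entry, heap_fetch):
    """entry = (tid, xmin); heap_fetch(tid) -> current xmin or None."""
    tid, xmin = entry
    current = heap_fetch(tid)
    # None: line pointer freed; mismatch: slot reused by another tuple
    return current is not None and current == xmin

heap = {(1, 1): 100}            # tid -> xmin of the tuple in that slot
entry = ((1, 1), 100)
assert entry_is_current(entry, heap.get)

heap[(1, 1)] = 250              # VACUUM + reuse: new tuple, new xmin
assert not entry_is_current(entry, heap.get)
```

Greg's point is that eagerly removing the entry on abort makes this whole check unnecessary, since no entry ever outlives its tuple.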


-- 
greg
http://mit.edu/~gsstark/resume.pdf

