Re: Optimising Foreign Key checks - Mailing list pgsql-hackers

From Hannu Krosing
Subject Re: Optimising Foreign Key checks
Msg-id 51AF0941.5000704@2ndQuadrant.com
In response to Re: Optimising Foreign Key checks  (Greg Stark <stark@mit.edu>)
List pgsql-hackers
On 06/05/2013 11:37 AM, Greg Stark wrote:
> On Sat, Jun 1, 2013 at 9:41 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
>> COMMIT;
>> The inserts into order_line repeatedly execute checks against the same
>> ordid. Deferring and then de-duplicating the checks would optimise the
>> transaction.
>>
>> Proposal: De-duplicate multiple checks against same value. This would
>> be implemented by keeping a hash of rows that we had already either
>> inserted and/or locked as the transaction progresses, so we can use
>> the hash to avoid queuing up after triggers.
>
> Fwiw the reason we don't do that now is that the rows might be later
> deleted within the same transaction (or even the same statement I
> think). If they are then the trigger needs to be skipped for that row
> but still needs to happen for other rows. So you need to do some kind
> of book-keeping to keep track of that. The easiest way was just to do
> the check independently for each row. I think there's a comment about
> this in the code.
A simple counter on each value should also solve this: increment it
for each inserted row, decrement it for each deleted row, and at check
time run the test only on values whose counter is still > 0.
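For illustration, a minimal standalone C sketch of that counter scheme
(not PostgreSQL code; the names, and the linear scan standing in for a
real hash table, are invented for brevity). Each distinct value is
checked exactly once, and a value whose rows were all deleted again
within the same transaction is skipped, which covers the deleted-row
case Greg raises:

#include <stdio.h>

#define MAX_VALUES 1024     /* no overflow handling, for brevity */

typedef struct
{
    int value;              /* referenced key value, e.g. ordid */
    int count;              /* live rows referencing it */
} pending_check;

static pending_check pending[MAX_VALUES];
static int npending = 0;

static pending_check *lookup(int value)
{
    for (int i = 0; i < npending; i++)
        if (pending[i].value == value)
            return &pending[i];
    return NULL;
}

/* Called when a row referencing `value` is inserted. */
static void note_insert(int value)
{
    pending_check *pc = lookup(value);

    if (pc)
        pc->count++;
    else
        pending[npending++] = (pending_check) { value, 1 };
}

/* Called when such a row is deleted again in the same transaction. */
static void note_delete(int value)
{
    pending_check *pc = lookup(value);

    if (pc)
        pc->count--;
}

/* At constraint-check time, fire one check per distinct value that
 * still has live rows; values with count == 0 are skipped entirely. */
static void run_checks(void)
{
    for (int i = 0; i < npending; i++)
        if (pending[i].count > 0)
            printf("check FK value %d once\n", pending[i].value);
}

int main(void)
{
    note_insert(42);        /* many order_line rows, same ordid */
    note_insert(42);
    note_insert(42);
    note_insert(7);
    note_delete(7);         /* all rows for 7 deleted: skip it */
    run_checks();           /* checks 42 once, skips 7 */
    return 0;
}

In a real implementation the counter would presumably live in the
per-transaction hash of inserted/locked rows that Simon describes
above, keyed by the referenced value.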
> I think you're right that this should be optimized because in the vast
> majority of cases you don't end up deleting rows and we're currently
> doing lots of redundant checks. But you need to make sure you don't
> break the unusual case entirely.
>


-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ