Re: Transaction Exception Question - Mailing list pgsql-general

From Andrew Sullivan
Subject Re: Transaction Exception Question
Date
Msg-id 20020814141205.R15973@mail.libertyrms.com
In response to Re: Transaction Exception Question  (Jon Swinth <jswinth@atomicpc.com>)
Responses Re: Transaction Exception Question
List pgsql-general
On Wed, Aug 14, 2002 at 08:50:32AM -0700, Jon Swinth wrote:
>
> In the example I gave, the record is already there but the second client
> cannot see it yet (not committed), so it attempts an insert too.  If the
> first client succeeds and commits, then the second client will get an SQL
> error on insert for a duplicate key.  In Postgres this currently requires
> that the second client roll back everything in the transaction, when it
> would be a simple matter to catch the duplicate key error, select back the
> record, and update it.
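The pattern Jon describes (attempt the insert, catch the duplicate-key error, then fall back to an update) can be sketched as below. SQLite stands in for PostgreSQL purely so the snippet is self-contained, and the table and column names are hypothetical; the point of his complaint is that in PostgreSQL of this era the failed insert aborts the whole transaction, so the `except` branch is not reachable without a full rollback.

```python
import sqlite3

# Hypothetical schema: one row per product with a running quantity.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (sku TEXT PRIMARY KEY, qty INTEGER)")

def add_quantity(conn, sku, qty):
    """Insert a row; on a duplicate-key failure, update the existing row."""
    try:
        conn.execute("INSERT INTO product (sku, qty) VALUES (?, ?)",
                     (sku, qty))
    except sqlite3.IntegrityError:
        # Another client inserted the row first: select/update instead.
        conn.execute("UPDATE product SET qty = qty + ? WHERE sku = ?",
                     (qty, sku))
    conn.commit()

add_quantity(conn, "widget", 5)
add_quantity(conn, "widget", 3)   # duplicate key -> update path
print(conn.execute(
    "SELECT qty FROM product WHERE sku = 'widget'").fetchone()[0])  # 8
```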

Could you cache the statements submitted earlier in the transaction,
and then resubmit them as part of a new transaction?  I know that's
not terribly efficient, but if you _really_ need transactions running
that long, it may be the only way until savepoints are added.
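A minimal sketch of that cache-and-resubmit idea, again with SQLite standing in so it runs anywhere; the statement log and helper names are invented for illustration. The client records every statement it sends, and when one fails it rolls back and replays the survivors in a fresh transaction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
conn.commit()

# Cache each statement so the whole transaction can be replayed
# from scratch if one statement fails.
log = []

def submit(conn, sql, params=()):
    log.append((sql, params))
    conn.execute(sql, params)

def replay(conn):
    """Start over: roll back, then resubmit the cached statements."""
    conn.rollback()
    for sql, params in log:
        conn.execute(sql, params)

submit(conn, "INSERT INTO orders (id, item) VALUES (?, ?)", (1, "bolt"))
try:
    submit(conn, "INSERT INTO orders (id, item) VALUES (?, ?)", (1, "nut"))
except sqlite3.IntegrityError:
    log.pop()          # drop the failing statement
    replay(conn)       # rerun the rest in a fresh transaction
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 1
```

The replay is O(n) in the length of the transaction, which is why it is "not terribly efficient" for long-running transactions.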

I wonder, however, if this isn't one of those cases where proper
theory-approved normalisation is the wrong way to go.  Maybe you need
an order-submission queue table to keep contention low on the
(products?  I think that was your example) table.
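One way the queue-table idea might look, as a sketch under the assumption that a single consolidation job drains the queue (SQLite again stands in, and all names are hypothetical). Because the queue has no unique key, concurrent order submissions never collide, and only one writer ever contends for the hot rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# No unique key on the queue table, so concurrent submissions never collide.
conn.execute("CREATE TABLE order_queue (sku TEXT, qty INTEGER)")
conn.execute("CREATE TABLE product (sku TEXT PRIMARY KEY, qty INTEGER)")

# Clients only ever append to the queue; they never touch the hot table.
conn.execute("INSERT INTO order_queue VALUES ('widget', 5)")
conn.execute("INSERT INTO order_queue VALUES ('widget', 3)")
conn.commit()

# A single consolidation job drains the queue, so only one writer
# ever updates the contended product rows.
totals = conn.execute(
    "SELECT sku, SUM(qty) FROM order_queue GROUP BY sku").fetchall()
for sku, total in totals:
    cur = conn.execute(
        "UPDATE product SET qty = qty + ? WHERE sku = ?", (total, sku))
    if cur.rowcount == 0:  # row did not exist yet
        conn.execute("INSERT INTO product VALUES (?, ?)", (sku, total))
conn.execute("DELETE FROM order_queue")
conn.commit()
```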

A

--
----
Andrew Sullivan                               87 Mowat Avenue
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M6K 3E3
                                         +1 416 646 3304 x110

