Re: On duplicate ignore - Mailing list pgsql-general

From: Lincoln Yeoh
Subject: Re: On duplicate ignore
Date:
Msg-id: 20120119191545.0C5D843CEB9@mail.postgresql.org
In response to: Re: On duplicate ignore (Florian Weimer <fweimer@bfk.de>)
List: pgsql-general
At 10:54 PM 1/19/2012, Florian Weimer wrote:
>* Gnanakumar:
>
> >> Just create a unique index on EMAIL column and handle error if it comes
> >
> > Thanks for your suggestion.  Of course, I do understand that this could be
> > enforced/imposed at the database-level at any time.  But I'm trying to find
> > out whether this could be solved at the application layer itself.  Any
> > thoughts/ideas?
>
>If you use serializable transactions in PostgreSQL 9.1, you can
>implement such constraints in the application without additional
>locking.  However, with concurrent writes and without an index, the rate
>of detected serialization violations and resulting transaction aborts
>will be high.

Would writing application-side code to handle those transaction
aborts in 9.1 be much easier than writing code to handle the
transaction aborts/DB exceptions caused by unique constraint
violations? Those serialization aborts have to be handled differently
(e.g. retried for X seconds/Y tries) from other sorts of transaction
aborts (which are not retried).

Otherwise I don't see the benefit of this feature for this scenario,
unless of course you get significantly better performance by not
having a unique constraint.
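
For what it's worth, here is a rough client-side sketch of the two
paths, assuming psycopg2 and a hypothetical users table with a unique
index on email (none of which comes from this thread): serialization
failures (SQLSTATE 40001) are retried, while unique-constraint
violations (SQLSTATE 23505) just mean the row is already there.

import time
import psycopg2

# Hypothetical sketch: insert an email, distinguishing the two abort types.
# Assumes: CREATE TABLE users (email text UNIQUE);
# Serialization failures (40001) only arise if the connection runs its
# transactions at SERIALIZABLE (or REPEATABLE READ) isolation.
def insert_email(conn, email, max_tries=5):
    for attempt in range(max_tries):
        try:
            with conn:                      # commits on success, rolls back on error
                with conn.cursor() as cur:
                    cur.execute("INSERT INTO users (email) VALUES (%s)", (email,))
            return True                     # inserted
        except psycopg2.Error as e:
            if e.pgcode == "23505":         # unique_violation: duplicate, don't retry
                return False
            if e.pgcode == "40001":         # serialization_failure: retry the transaction
                time.sleep(0.1 * (attempt + 1))
                continue
            raise                           # anything else is a real problem
    raise RuntimeError("gave up after %d serialization retries" % max_tries)

The only real difference between the two approaches, in this sketch,
is the retry loop around the serializable path.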

If insert performance is not an issue and code simplicity is
preferred, one could lock the table (with an exclusive lock mode) and
then do the selects and inserts; that way your code can assume that
any transaction aborts are due to actual problems rather than
concurrency, which often means less code to write :).
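
A minimal sketch of that lock-first approach, under the same
assumptions as above (psycopg2 and a hypothetical users table; the
table name and lock mode are illustrative):

import psycopg2

# Hypothetical sketch: serialize writers with a table lock so the
# SELECT-then-INSERT pair cannot race with another session.
def insert_email_locked(conn, email):
    with conn:                              # one transaction; commit or rollback on exit
        with conn.cursor() as cur:
            # EXCLUSIVE blocks other writers (and other EXCLUSIVE lockers)
            # but still lets plain SELECTs through.
            cur.execute("LOCK TABLE users IN EXCLUSIVE MODE")
            cur.execute("SELECT 1 FROM users WHERE email = %s", (email,))
            if cur.fetchone() is None:
                cur.execute("INSERT INTO users (email) VALUES (%s)", (email,))
                return True
            return False                    # already present

The obvious cost is that all writers to the table queue up behind the
lock, which is the insert-performance tradeoff mentioned above.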

Regards,
Link.




