There are no client-side poolers (unless pgtcl has one I don't know about), so
this is unlikely.
The trigger is an interesting idea to try if this happens again. I can't keep
it in place for long, though, as the table backs a massive cache (used to
deflect calls to a web service) and the trigger would bloat the logs pretty
quickly.
I have to ask myself: is it more likely that I have discovered some PG
anomaly in 9.0 that no one has ever noticed, or that the client has
accidentally launched the process twice without knowing it?
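For reference, the suggested diagnostic trigger might look something like the
sketch below. The table name cache_entries is just a placeholder for the real
cache table; the 9.0-era EXECUTE PROCEDURE syntax is used. Whether the
warnings actually reach the log depends on log_min_messages.

```sql
-- Sketch only; cache_entries is a hypothetical table name.
-- Statement-level BEFORE trigger that logs the backend pid on
-- every INSERT attempt, to see whether one or several backends
-- are writing to the table.
CREATE OR REPLACE FUNCTION log_insert_backend() RETURNS trigger AS $$
BEGIN
    RAISE WARNING 'INSERT on cache_entries from backend pid %',
        pg_backend_pid();
    RETURN NULL;  -- return value is ignored for statement-level triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_log_insert_backend
    BEFORE INSERT ON cache_entries
    FOR EACH STATEMENT
    EXECUTE PROCEDURE log_insert_backend();
```

Dropping the trigger afterward (DROP TRIGGER trg_log_insert_backend ON
cache_entries) would stop the logging once the test is done.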
-----Original Message-----
From: Merlin Moncure [mailto:mmoncure@gmail.com]
Sent: May 1, 2013 11:37 AM
To: Carlo Stonebanks
Cc: Steven Schlansker; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Simple SQL INSERT to avoid duplication failed: why?
On Wed, May 1, 2013 at 7:16 AM, Carlo Stonebanks
<stonec.register@sympatico.ca> wrote:
> Very good to know, Steve. We're on 9.0 right now but I will
> investigate as all the work is for unattended automatic processes
> which are continuously streaming data from multiple resources and need
> to resolve these collisions by themselves.
If it were me, I'd put a BEFORE statement-level trigger on the table to raise
a warning into the log with the backend pid, assuming I could handle the
volume. There are lots of ways the client could turn out to be wrong, for
example client-side connection poolers (which I tend to hate). Only when
it's 100% proven this is a single-backend case (which none of us, including
you, really believes it is) is further research justified.
merlin