Carlo: Please see note at the bottom...
On 02/05/13 04:36, Carlo Stonebanks wrote:
> There are no client poolers (unless pgtcl has one I don't know about) so
> this is unlikely.
>
> The trigger is an interesting idea to try if it happens again - I can't keep
> it for long as it is for a massive cache (used to deflect calls to a web
> service) and will bloat the logs pretty quickly.
>
> I have to ask myself, is it more likely that I have discovered some PG
> anomaly in 9.0 that no one has ever noticed, or that the client has
> accidentally launched the process twice and doesn't know it?
>
>
> -----Original Message-----
> From: Merlin Moncure [mailto:mmoncure@gmail.com]
> Sent: May 1, 2013 11:37 AM
> To: Carlo Stonebanks
> Cc: Steven Schlansker; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Simple SQL INSERT to avoid duplication failed: why?
>
> On Wed, May 1, 2013 at 7:16 AM, Carlo Stonebanks
> <stonec.register@sympatico.ca> wrote:
>> Very good to know, Steve. We're on 9.0 right now but I will
>> investigate as all the work is for unattended automatic processes
>> which are continuously streaming data from multiple resources and need
>> to resolve these collisions by themselves.
> If it were me, I'd put a 'before' statement-level trigger on the table
> to raise a warning into the log with the backend pid, assuming I could
> handle the volume. There are lots of ways the client could turn out to be
> wrong, for example client-side connection poolers (which I tend to hate).
> Only when it's 100% proven that this is a single-backend case (which none
> of us, including you, really believes) is further research justified.
>
> merlin
>
Please do not top-post; posting replies at the bottom, or interspersed
with previous comments, is the norm on these lists - and generally more
useful, as we can read first what you are replying to!
Cheers,
Gavin