From: Jim Nasby
Subject: Re: duplicate key errors in log file
Msg-id: 56500868.2090900@BlueTreble.com
In response to: Re: duplicate key errors in log file (Jeff Janes <jeff.janes@gmail.com>)
List: pgsql-general
On 11/18/15 2:42 PM, Jeff Janes wrote:
> But he already knows it has race conditions.  That is why he included
> retry logic, to deal with those conditions.

From the sounds of it there's no retry loop, which means there's still a
race condition: the row can be deleted after the insert fails but before
the update runs.
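
Concretely, with a hypothetical table t (k int primary key, v text), the
interleaving without a loop looks like this:

-- Session A
INSERT INTO t VALUES (1, 'new');      -- fails: duplicate key on k = 1
-- Session B, in between
DELETE FROM t WHERE k = 1;            -- the conflicting row disappears
-- Session A
UPDATE t SET v = 'new' WHERE k = 1;   -- matches zero rows; the write is
                                      -- silently lost, with no error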

>> Even if you got rid of
>> those, the overhead of the back-and-forth with the database is huge compared
>> to doing this in the database.
>>
>> So really you should create a plpgsql function ala example 40-2 at
>> http://www.postgresql.org/docs/9.4/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
> It is pretty heavy handed to have to learn yet another language just
> to deal with some rare condition which can be handled fine with one of
> the languages you already know.

Meh. Not like there's that much to learning plpgsql if you already know
SQL.
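
For reference, the whole function from that docs example is about a dozen
lines; a sketch, assuming a table db (a int primary key, b text) as in the
docs:

CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- first try to update the key
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- not there, so try to insert the key;
        -- if someone else inserts the same key concurrently,
        -- we could get a unique-key failure
        BEGIN
            INSERT INTO db(a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, and loop to try the UPDATE again
        END;
    END LOOP;
END;
$$
LANGUAGE plpgsql;

You call it as SELECT merge_db(1, 'david'); and the loop is what closes
the delete-before-update race above.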

> I would certainly welcome an optional way to turn off server logging
> of errors of a racy nature, while still having things like syntax
> errors logged.

Really, we need much more granular logging control, period. The lack of
it makes RAISE DEBUG completely useless in anything but the most trivial
environments.
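
For context, the controls that do exist today are coarse; a sketch of the
relevant postgresql.conf settings (server-wide defaults; some can also be
set per role or database):

log_min_messages = warning        # minimum severity written to the server log
log_min_error_statement = error   # also log the statement that raised the error
client_min_messages = notice      # minimum severity sent back to the client

None of these can tell an expected unique_violation inside a retry loop
apart from a genuine bug, which is exactly the granularity that's missing.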
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com

