Thread: duplicate key errors in log file

duplicate key errors in log file

From
anj patnaik
Date:
The pg log files apparently log error lines every time a user inserts a duplicate. I implemented a composite primary key, and when I see the exception in my client app I update the row with the recent data.

However, I don't want the log file to fill up with these error messages, since they're handled by the client.

Is there a way to stop logging certain messages?

Also, do any of you use any options to keep log files from filling up the disk over time?

Thanks,
ap

Re: duplicate key errors in log file

From
Adrian Klaver
Date:
On 11/17/2015 03:33 PM, anj patnaik wrote:
> The pg log files apparently log error lines every time a user inserts a
> duplicate. I implemented a composite primary key, and when I see the
> exception in my client app I update the row with the recent data.
>
> However, I don't want the log file to fill up with these error messages,
> since they're handled by the client.
>
> Is there a way to stop logging certain messages?

http://www.postgresql.org/docs/9.4/interactive/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT

log_min_error_statement (enum)

     Controls which SQL statements that cause an error condition are
recorded in the server log. The current SQL statement is included in the
log entry for any message of the specified severity or higher. Valid
values are DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE,
WARNING, ERROR, LOG, FATAL, and PANIC. The default is ERROR, which means
statements causing errors, log messages, fatal errors, or panics will be
logged. To effectively turn off logging of failing statements, set this
parameter to PANIC. Only superusers can change this setting.
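For example, a minimal postgresql.conf sketch (the rotation settings address the disk-usage question from the original post; the values are illustrative, not recommendations):

```
# Stop logging the SQL text of failing statements.
log_min_error_statement = panic

# Keep logs from filling the disk: cycle through one file
# per weekday, truncating each file when its turn comes around.
log_truncate_on_rotation = on
log_filename = 'postgresql-%a.log'
log_rotation_age = 1d
log_rotation_size = 10MB
```

Note that log_min_error_statement only suppresses the statement text; the error message itself is still governed by log_min_messages.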


>
> Also, do any of you use any options to keep log files from filling up
> the disk over time?
>
> Thanks,
> ap


--
Adrian Klaver
adrian.klaver@aklaver.com


Re: duplicate key errors in log file

From
Jim Nasby
Date:
On 11/17/15 5:33 PM, anj patnaik wrote:
> The pg log files apparently log error lines every time a user inserts a
> duplicate. I implemented a composite primary key, and when I see the
> exception in my client app I update the row with the recent data.
>
> However, I don't want the log file to fill up with these error messages,
> since they're handled by the client.
>
> Is there a way to stop logging certain messages?
>
> Also, do any of you use any options to keep log files from filling up
> the disk over time?

Not really. You could do something like SET log_min_messages = PANIC for
that statement, but then you won't get a log for any other errors.
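For example (a sketch only; log_min_messages can only be changed by superusers, and the table and column names here are hypothetical):

```sql
BEGIN;
-- SET LOCAL lasts only until COMMIT/ROLLBACK,
-- so logging is suppressed just for this transaction.
SET LOCAL log_min_messages = panic;
INSERT INTO mytable (k1, k2, data) VALUES (1, 2, 'newest');
COMMIT;
```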

In any case, the real issue is that you shouldn't do this in the client.
I'll bet $1 that your code has race conditions. Even if you got rid of
those, the overhead of the back-and-forth with the database is huge
compared to doing this in the database.

So really you should create a plpgsql function ala example 40-2 at
http://www.postgresql.org/docs/9.4/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
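That example (merge_db in the docs) is a retry loop, roughly along these lines:

```sql
CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS $$
BEGIN
    LOOP
        -- First try to update an existing row.
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- Not there, so try to insert it. If another session
        -- inserts the same key concurrently, we get a
        -- unique-key failure and loop back to the UPDATE.
        BEGIN
            INSERT INTO db (a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Do nothing; retry the UPDATE.
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

Because the unique_violation is trapped inside the function, it never reaches the server log. (On 9.5 and later, INSERT ... ON CONFLICT DO UPDATE does the same job without a loop.)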
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com


Re: duplicate key errors in log file

From
Jeff Janes
Date:
On Wed, Nov 18, 2015 at 9:35 AM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
> On 11/17/15 5:33 PM, anj patnaik wrote:
>>
>> The pg log files apparently log error lines every time a user inserts a
>> duplicate. I implemented a composite primary key, and when I see the
>> exception in my client app I update the row with the recent data.
>>
>> However, I don't want the log file to fill up with these error messages,
>> since they're handled by the client.
>>
>> Is there a way to stop logging certain messages?
>>
>> Also, do any of you use any options to keep log files from filling up
>> the disk over time?
>
>
> Not really. You could do something like SET log_min_messages = PANIC for
> that statement, but then you won't get a log for any other errors.
>
> In any case, the real issue is that you shouldn't do this in the client.
> I'll bet $1 that your code has race conditions.

But he already knows it has race conditions.  That is why he included
retry logic, to deal with those conditions.


> Even if you got rid of
> those, the overhead of the back-and-forth with the database is huge compared
> to doing this in the database.
>
> So really you should create a plpgsql function ala example 40-2 at
> http://www.postgresql.org/docs/9.4/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

It is pretty heavy-handed to have to learn yet another language just
to deal with a rare condition that can be handled fine in one of the
languages you already know.

I would certainly welcome an optional way to turn off server logging
of errors of a racy nature, while still having things like syntax
errors logged.

Cheers,

Jeff


Re: duplicate key errors in log file

From
Jim Nasby
Date:
On 11/18/15 2:42 PM, Jeff Janes wrote:
> But he already knows it has race conditions.  That is why he included
> retry logic, to deal with those conditions.

From the sounds of it there's no retry loop, which means there's still
a race condition (the row could be deleted after the insert fails but
before the update).

>> >Even if you got rid of
>> >those, the overhead of the back-and-forth with the database is huge compared
>> >to doing this in the database.
>> >
>> >So really you should create a plpgsql function ala example 40-2 at
>> >http://www.postgresql.org/docs/9.4/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
> It is pretty heavy-handed to have to learn yet another language just
> to deal with a rare condition that can be handled fine in one of the
> languages you already know.

Meh. Not like there's that much to learning plpgsql if you already know
SQL.

> I would certainly welcome an optional way to turn off server logging
> of errors of a racy nature, while still having things like syntax
> errors logged.

Really we need much more granular logging control, period. The lack of
it makes RAISE DEBUG completely useless in anything but the most
trivial environments.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com