Re: duplicate key errors in log file - Mailing list pgsql-general

From Jim Nasby
Subject Re: duplicate key errors in log file
Date
Msg-id 564CB6F3.2090105@BlueTreble.com
In response to duplicate key errors in log file  (anj patnaik <patna73@gmail.com>)
Responses Re: duplicate key errors in log file
List pgsql-general
On 11/17/15 5:33 PM, anj patnaik wrote:
> The pg log files apparently log error lines every time a user inserts a
> duplicate. I implemented a composite primary key and then when I see the
> exception in my client app I update the row with the recent data.
>
> However, I don't want the log file to fill up with these error messages
> since it's handled by the client.
>
> Is there a way to stop logging certain messages?
>
> Also do any of you use any options to cause log files not to fill up the
> disk over time?

Not really. You could do something like SET log_min_messages = PANIC for
that statement, but then you won't get a log for any other errors.
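For the record, that would look something like the following. Note that lowering log_min_messages requires superuser privileges, and while it's in effect it suppresses logging of *every* error, not just the duplicate-key ones. The table and column names here are placeholders, not anything from the original post:

```sql
BEGIN;
-- SET LOCAL confines the change to this transaction (superuser only).
-- Any error raised here -- not just unique violations -- goes unlogged.
SET LOCAL log_min_messages = PANIC;
INSERT INTO mytable (id, val) VALUES (1, 'x');
COMMIT;
```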

In any case, the real issue is that you shouldn't do this in the client.
I'll bet $1 that your code has race conditions. Even if you got rid of
those, the overhead of the back-and-forth with the database is huge
compared to doing this in the database.

So really you should create a plpgsql function à la Example 40-2 at
http://www.postgresql.org/docs/9.4/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
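That example (merge_db) traps the unique_violation inside the function and retries, so the error never reaches the log or the client. A sketch along those lines, with placeholder table and column names you'd swap for your own composite key:

```sql
CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

CREATE OR REPLACE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- First try to update an existing row.
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- Row not there, so try to insert it. If someone else inserts
        -- the same key concurrently, we get a unique-key failure...
        BEGIN
            INSERT INTO db (a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- ...which we swallow here, then loop back to the UPDATE.
        END;
    END LOOP;
END;
$$
LANGUAGE plpgsql;
```

One round trip (`SELECT merge_db(1, 'foo')`), no race, nothing in the log.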
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com

