Hi all,
I've a question regarding unique constraints, which I've tried to describe in
general terms, to keep things simple.
I'm working on an application that, amongst other things, may add a row to a
table. This table has a primary key defined over two (of the three) fields,
which forces the combined value to be unique.
An end-user of this system can cause the application to add a row to a table,
based on data supplied by that end-user (a create-like command). It can
happen that end-users repeat themselves: rerunning the same activity with the
same data. Logically, repeating the activity doesn't make sense; the
application should fail the second (and all subsequent) attempts with a
meaningful error message.
When writing the application, there was a deliberate design decision: rather
than a read-modify-write cycle, with the corresponding overhead of locking and
the resulting serialisation, the software simply lets the INSERT fail (due to
the primary-key uniqueness constraint). If this happens, the transaction is
rolled back and an error message returned.
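To illustrate the pattern (this is just a sketch, using SQLite as a stand-in for PostgreSQL, with made-up table and column names): the INSERT is attempted without any prior read, and the duplicate is detected by catching the constraint violation and rolling back.

```python
import sqlite3

# Illustrative sketch of the "just let the INSERT fail" approach, using
# SQLite as a stand-in for PostgreSQL; table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (user_id INTEGER, name TEXT, payload TEXT, "
             "PRIMARY KEY (user_id, name))")

def create_item(user_id, name, payload):
    try:
        with conn:  # commits on success, rolls back on exception
            conn.execute("INSERT INTO items VALUES (?, ?, ?)",
                         (user_id, name, payload))
        return "created"
    except sqlite3.IntegrityError:
        # The composite primary key rejected the duplicate; report a
        # meaningful error instead of the raw database message.
        return "error: this item already exists"

print(create_item(1, "report", "v1"))  # created
print(create_item(1, "report", "v2"))  # error: this item already exists
```

No locking or read-modify-write cycle is needed; the uniqueness check is left entirely to the database.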
The software can identify whether the problem is due to an end-user repeating
an earlier action by looking at the class code of the SQLSTATE ("23" =
integrity constraint violation). This allows us to return the correct error.
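The class check itself is trivial: the first two characters of the five-character SQLSTATE identify the error class. Most drivers expose the SQLSTATE (psycopg2, for example, as the exception's pgcode attribute); a sketch of the check:

```python
# Classify an SQLSTATE by its two-character class code.
# Class "23" covers integrity constraint violations; the specific code
# for a duplicate key is 23505 (unique_violation).
def is_constraint_violation(sqlstate: str) -> bool:
    return bool(sqlstate) and sqlstate[:2] == "23"

print(is_constraint_violation("23505"))  # True  (unique_violation)
print(is_constraint_violation("42601"))  # False (syntax_error)
```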
This works fine: the correct error message is reported and the system behaves
as it should. There's one problem: if a user repeats their activity then
PostgreSQL logs the corresponding constraint violation:
ERROR: duplicate key value violates unique constraint [..]
The log files may contain many such messages, depending on the usage-pattern
of the end-user. Including all these messages in the log file is distracting.
The question is: can we suppress the logging of these messages .. but allow
other error messages to be logged normally?
Cheers,
Paul.