Hello,
Client is getting the following error when attempting
to do an insert on a table:
ERROR: duplicate key violates unique constraint
event_tbl_evt_id_key
Client is using PostgreSQL 7.4.2 on SPARC v9 running
Solaris.
We have the following tables:
EVENT_TBL
evt_id bigserial, unique
d1 numeric(13)
obj_id numeric(6)
d2 numeric(13)
val varchar(22)
correction numeric(1)
delta numeric(13)
CONTROL_TBL
obj_id numeric(6), unique
name varchar(22), unique
dtype numeric(2)
dfreq numeric(2)
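
For clarity, the table listings above would look roughly like the
following DDL (a sketch reconstructed from the listing; NOT NULL
constraints and other details are guesses):

```sql
CREATE TABLE event_tbl (
    evt_id      bigserial UNIQUE,  -- creates sequence event_tbl_evt_id_seq
    d1          numeric(13),
    obj_id      numeric(6),
    d2          numeric(13),
    val         varchar(22),
    correction  numeric(1),
    delta       numeric(13)
);

CREATE TABLE control_tbl (
    obj_id  numeric(6) UNIQUE,
    name    varchar(22) UNIQUE,
    dtype   numeric(2),
    dfreq   numeric(2)
);
```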
Indexes:
EVENT_TBL.d1 (non-clustered)
EVENT_TBL.obj_id (non-clustered)
CONTROL_TBL.obj_id (non-clustered)
CONTROL_TBL.name (clustered)
Update processes run continually throughout the day in
which rows are inserted. Rows are purged by a
different process on a daily basis. Rows are
added/purged in a queue-like manner, with older rows
(i.e. lower 'evt_id' value) purged first.
The EVENT_TBL is potentially large (on the order of
millions of rows), but certainly not large enough that
the 'bigserial' value would ever wrap back to 0.
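
One diagnostic we could run when the error appears (a sketch;
'event_tbl_evt_id_seq' is the default sequence name PostgreSQL
generates for the bigserial column, assumed here) is to compare the
sequence's position against the highest id actually in the table:

```sql
-- What the sequence will hand out next vs. what is already stored.
-- If last_value is at or below max(evt_id), the next default insert
-- can collide with an existing row and raise the duplicate key error.
SELECT last_value FROM event_tbl_evt_id_seq;
SELECT max(evt_id) FROM event_tbl;
```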
We also drop all the indexes listed above and recreate
them on a daily basis. However, we do not do anything
to the sequence or index created for the 'evt_id'
column.
In fact, we never do anything directly to the
'evt_id' column. Even on inserts we do not specify a
value for 'evt_id'; we let the system do its magic.
We never attempt to manipulate the sequence/index for
the column in any way.
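
For reference, our inserts look roughly like this (a sketch with
made-up values; the point is only that 'evt_id' is omitted, so the
column's default nextval() supplies it):

```sql
-- evt_id is omitted; PostgreSQL fills it from the bigserial sequence.
INSERT INTO event_tbl (d1, obj_id, d2, val, correction, delta)
VALUES (1093455600000, 42, 1093455600000, 'sample reading', 0, 0);
```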
Short of the client mucking around with the column in
an unauthorized manner, can anybody think of a reason
why things might get confused such that an
already-used 'evt_id' value is issued again?
Regards,
Bill