I'm new to Postgres, so sorry if this is easy, but I did search the docs
and could find nothing to answer this...
I've got a MySQL DB that I've dumped out and am trying to insert into
a Pg DB as we transition. There were a few changes I had to make to
the data, but they were easy--except for the auto_increment columns. :-/
After I created the DB, I inserted the data (thousands of inserts) via
psql. All went well. Then I started testing the changed code (Perl),
and when I went to insert, I got a "dup key" error.
It took me a while to figure out what was going on, but I can recreate
the problem with:
create table test (s serial, i int);
-- explicit values for s bypass the sequence, so it never advances
insert into test values (1,1);
insert into test values (2,2);
insert into test values (3,3);
-- no s given, so Pg calls nextval ... which returns 1
insert into test (i) values (4);
ERROR: Cannot insert a duplicate key into unique index test_s_key
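You can see the mismatch by querying the sequence directly (the name
test_s_seq assumes the default table_column_seq naming that serial
uses):

select last_value from test_s_seq;
-- still 1; the explicit inserts above never called nextval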
I was expecting the system to realize new "keys" had been inserted,
so that when the "nextval" that implicitly happens on a serial field
is run, it would "know" it was too small and return max(s)+1. [FWIW,
my expectations in this area were set by my experience with Informix
and MySQL, both of which do this; not sure whether other RDBMSs do.]
I realize I can fix this by writing a Perl/DBI script to read the list
of sequences, and do [conceptually]:
get the list of sequences
from each sequence name, work out the table and column it feeds
n = select max(id) from table;
select setval('seq_name', n);
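For the test table above, that boils down to a one-liner (again
assuming the default serial sequence name):

select setval('test_s_seq', (select max(s) from test));
-- nextval('test_s_seq') now returns max(s)+1

and the list of sequences can come straight from the catalog:

select relname from pg_class where relkind = 'S';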
and things will be fine from then on, but surely this is a common
enough problem after a bulk load that there is something already built
in to handle it and I just don't have it configured correctly (or is
this a bug?).
Oh, this is on a RH 7.2 system with Pg 7.1.3.
TIA for any help in understanding this better!
Kevin