It seems that our current way of enforcing uniqueness knows nothing
about transactions ;(
when you
create table t( i int4 primary key );
and then run the following transaction
begin; delete from t where i=1; insert into t(i) values(1);
end;
in a loop from two parallel processes, then one of them will
almost instantaneously err out with
ERROR: Cannot insert a duplicate key into unique index t_pkey
I guess this can be classified as a bug, but I'm not sure how easy
it would be to fix.
-------------
Hannu
I tested it with the following Python script:
#!/usr/bin/python
import _pg

sql_reinsert_item = """\
begin; delete from t where i=1; insert into t(i) values(1);
end;
"""

def main():
    con = _pg.connect('test')
    for i in range(500):
        print '%d. update' % (i+1)
        con.query(sql_reinsert_item)

if __name__ == '__main__':
    main()
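
To actually hit the error you need two copies of that loop running at
once, e.g. started from two shells. A minimal launcher sketch (mine,
not part of the original test; it assumes the script above is saved
as reinsert.py, a made-up name) would be:

#!/usr/bin/python
import os, sys

# fork two children, each exec'ing one copy of the reinsert loop;
# 'reinsert.py' is just the assumed filename of the script above
for n in range(2):
    if os.fork() == 0:
        os.execvp(sys.executable, [sys.executable, 'reinsert.py'])

# wait for both children; one of them should die with the
# duplicate key error almost immediately
os.wait()
os.wait()

Simply running the script by hand in two terminals works just as well;
the fork is only there to make the race easy to trigger repeatedly.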