Hi,
A somewhat theoretical question (sorry for the spelling, and maybe off-topic).
...
> > It seems that our current way of enforcing uniqueness knows nothing
> > about transactions ;(
...
> > create table t(i int4 primary key);
...
> > begin;
> > delete from t where i=1;
> > insert into t(i) values(1);
> > end;
> >
> > in a loop from two parallel processes, then one of them will
> > almost instantly err out with
> >
> > ERROR: Cannot insert a duplicate key into unique index t_pkey
*I think* this is correct behaviour, i.e. all that one transaction does should
be visible to other transactions.
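For comparison, the delete-then-insert pair is perfectly fine when run serially in a single session, since a transaction sees its own uncommitted changes (a sketch, reusing the table from the quoted example):

```
create table t(i int4 primary key);
insert into t(i) values(1);

begin;
delete from t where i=1;
insert into t(i) values(1); // succeeds: the delete is visible inside the same transaction
end;
```

The error in the quoted example only shows up when the delete and the insert race across two concurrent transactions.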
But then a question: How is this handled by PostgreSQL? (two parallel
threads, a row where i=1 already exists):
begin; // << Thread 1
delete from t where i=1;
// Now thread 1 does a lot of other stuff...
// and while it's working another thread starts doing its stuff
begin; // << Thread 2
insert into t(i) values(1);
commit; // << Thread 2 is done, and all should be swell
// What happens here ????????????
rollback; // << Thread 1 regrets its delete???????????
// Jarmo