Allan Berger wrote:
> Hi all,
>
> I have a genuine novice question. What's the best "postgres way" to
> lock tables in the following work flow circumstances:
>
> A)
> 1) Begin work;
> 2) select max(Id) from table;
> 3) insert into table record with Id=(max+1);
> 4) commit;
>
> I want to be absolutely certain no other user can run this identical
> query concurrently (read the same max(Id)) causing two identical
> records to be built with the same Id=(max+1) between steps 2 and 4.
> This would require locking the entire table with a "Lock table"
> statement between steps 1 and 2, yes? Best syntax?
The usual way to do this kind of thing is with sequences.
You don't need to lock anything that way - just:
create sequence myseq;
... and then:
select nextval('myseq');
and use the output in your insert statement - it is guaranteed that
no other connection will get the same number from nextval().
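For example, assuming a table called mytable with an integer id column
and a text data column (names made up just for illustration), your
workflow A becomes:
begin work;
-- nextval() can be called right inside the insert, no separate select needed
insert into mytable (id, data) values (nextval('myseq'), 'some data');
commit;
No table lock is needed; concurrent sessions simply get different numbers
from the sequence.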
Or, if you do not care what value you insert and just need it to be
unique, you can make it even simpler by using the serial type:
create table mytable
(
id serial primary key,
data text
);
then an insert statement like
insert into mytable (data) values ('dima');
will automatically generate a unique id and put it into the new row for you.
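If you also need to know which id was just generated (say, to insert
child rows), you can ask the sequence that backs the serial column -
postgres names it mytable_id_seq for the table above - from the same
session:
insert into mytable (data) values ('dima');
-- currval() returns the value the insert above just consumed;
-- it is per-session, so other connections cannot interfere
select currval('mytable_id_seq');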
>
>
> B)
> 1) Begin work;
> 2) Select User from table where Id=n;
> 3) If User is null then:
> Update row Id=n to User="me"
> 4) commit;
You don't need to lock the whole table to do this either...
select * from table where id=1 FOR UPDATE;
... is what you are looking for - this will put a lock on the rows that
actually get selected, so that another transaction will not be able to
modify them until you commit or rollback.
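Put together, and assuming the column is called user_name (plain "user"
is a reserved word in SQL, so your real column name may differ), scenario
B could look like this:
begin work;
-- lock only the one row we care about; other rows stay untouched
select user_name from mytable where id = 1 for update;
-- if the select showed user_name is null, claim the row;
-- the extra "user_name is null" guard keeps it safe in any case
update mytable set user_name = 'me' where id = 1 and user_name is null;
commit;
Another transaction running the same statements will block on the
select ... for update until this one commits, and will then see the row
already claimed.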
I hope it helps...
Dima