Re: Concurrency issue under very heavy loads - Mailing list pgsql-general

From Bill Moran
Subject Re: Concurrency issue under very heavy loads
Msg-id 20090716071149.3376e5aa.wmoran@potentialtech.com
In response to Concurrency issue under very heavy loads  ("Raji Sridar (raji)" <raji@cisco.com>)
List pgsql-general
"Raji Sridar (raji)" <raji@cisco.com> wrote:
>
> We use a typical counter within a transaction to generate the order sequence number and update the next sequence number.
> This is a simple next counter - nothing fancy about it.  When multiple clients are concurrently accessing this table and
> updating it, under extremely heavy loads in the system (stress testing), we find that the same order number is being
> generated for multiple clients. Could this be a bug? Is there a workaround? Please let me know.

As others have said: using a sequence/serial is best, as long as you can
deal with gaps in the generated numbers.  (note that in actual practice,
the number of gaps is usually very small.)
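For reference, a minimal sketch of the sequence approach (the table and column names here are made up for illustration):

```sql
-- nextval() is safe under concurrency: it never hands the same
-- value to two sessions, with no table locking required.
CREATE SEQUENCE order_seq;

CREATE TABLE orders (
    order_num integer PRIMARY KEY DEFAULT nextval('order_seq'),
    customer  text
);

-- Each INSERT gets its own number; a rolled-back transaction
-- simply leaves a gap, which is where the gaps come from.
INSERT INTO orders (customer) VALUES ('acme');
```

Declaring the column as `serial` does the same thing with less typing.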

Without seeing the code, here's my guess as to what's wrong:
You take out a write lock on the table, then acquire the next number, then
release the lock, _then_ insert the new row.  Doing this allows a race
condition between number generation and insertion which could allow
duplicates.

Am I right?  Did I guess it?

If so, you need to take out the lock on the table and hold that lock until
you've inserted the new row.
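If you must keep the counter table (say, because gaps are unacceptable), something along these lines works; the `order_counter` and `orders` names are hypothetical:

```sql
BEGIN;
-- FOR UPDATE row-locks the counter row; any other client running
-- the same SELECT blocks here until we COMMIT or ROLLBACK.
SELECT next_val FROM order_counter FOR UPDATE;
UPDATE order_counter SET next_val = next_val + 1;
-- The INSERT happens while the counter row is still locked, so no
-- other client can read the old value in between.  (42 stands in
-- for the value the application read from the SELECT above.)
INSERT INTO orders (order_num, customer) VALUES (42, 'acme');
COMMIT;
```

The key point is that the lock acquired by `SELECT ... FOR UPDATE` lasts until the transaction ends, so the read, the increment, and the insert are all covered by it.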

If none of these answers help, you're going to have to show us your code,
or at least a pared down version that exhibits the problem.

[I'm stripping off the performance list, as this doesn't seem like a
performance question.]

--
Bill Moran
http://www.potentialtech.com
