BUG #4401: concurrent updates to a table block one update indefinitely

From: "vinayak"
Date:
The following bug has been logged online:

Bug reference:      4401
Logged by:          vinayak
Email address:      vinayak@adconion.com
PostgreSQL version: 8.3.1
Operating system:   CentOS release 5
Description:        concurrent updates to a table block one update indefinitely
Details:

There are two tables:
test_source(id int, name text)
test_destination(id int, name text)
Both are populated with data, and an UPDATE query replaces the names in the
destination with new values from the source.

A single run of this update works as expected. Concurrent runs cause one to
succeed and the other to be blocked indefinitely.

While the second update is blocked, PostgreSQL appears to consume
considerable CPU resources.
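
One way to check whether the second backend is actually lock-blocked (a
sketch; the column names below are from the 8.3-era pg_stat_activity view
and were renamed in later releases):

-- Run from a third session while the two updates race. If "waiting" is
-- false for the stalled backend, it is executing rather than waiting on
-- a lock. (procpid/waiting/current_query are the 8.3 column names;
-- newer releases use pid, wait_event, and query instead.)
SELECT procpid, waiting, current_query
FROM pg_stat_activity
WHERE current_query LIKE 'update test_destination%';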

The following script demonstrates the problem encountered.
Test script:
psql -q -U postgres -c "create table test_destination(id int, name text)";
psql -q -U postgres -c "create table test_source(id int, name text)";
numEntries=10000;
psql -U postgres -c "insert into test_source(id, name) select
generate_series(1,$numEntries), 'new entry #
'||generate_series(1,$numEntries)";
psql -U postgres -c "insert into test_destination(id, name) select
generate_series(1,$numEntries), 'old entry #
'||generate_series(1,$numEntries)";
echo "Test initialization done"

pids=""
psql -U postgres -c "update test_destination d set name=s.name from
test_source s where d.id=s.id" &
pids="$pids $!"
psql -U postgres -c "update test_destination d set name=s.name from
test_source s where d.id=s.id" &
pids="$pids $!"
wait $pids
echo "Updates completed"

psql -q -U postgres -c "drop table test_source";
psql -q -U postgres -c "drop table test_destination";

Re: BUG #4401: concurrent updates to a table block one update indefinitely

From: Tom Lane
Date:
"vinayak" <vinayak@adconion.com> writes:
> A single run of this update works as expected. Concurrent runs cause one to
> succeed and the other to be blocked indefinitely.

It's not blocked, it's just doing EvalPlanQual over and over, and that's
quite inefficient in this example.  (It looks like it's using a mergejoin,
so the "s" relation has to be sorted over again for each updated "d"
tuple :-(.)  I don't think anyone's ever tried to optimize EvalPlanQual,
because concurrent updates of the same tuple are not common.
But there's definitely room for improvement there.  The code comments
talk about trying to avoid a full restart of the sub-plan, but I wonder
whether it would be worth generating a completely different plan using
the knowledge that we have exactly one row coming from the target
table...
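
To see the plan shape on a given installation (a sketch; the actual plan
depends on statistics and settings):

-- Plain EXPLAIN only shows the chosen plan; it does not run the update.
-- A Merge Join node here means the sort of "s" gets redone by
-- EvalPlanQual for every concurrently-updated "d" tuple.
EXPLAIN UPDATE test_destination d SET name = s.name
FROM test_source s WHERE d.id = s.id;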

Anyway, don't hold your breath waiting for a performance improvement
here.  You'll need to revise your application to avoid having quite so
many concurrent updates of the same tuples.  Maybe you could use
table-level locks to serialize your full-table update operations?
It's not too clear what the real-world application underlying this
example might have been.
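
One way to act on the table-lock suggestion (a sketch, not a tested fix;
SHARE ROW EXCLUSIVE is an assumption here -- any self-conflicting table
lock mode would serialize the two full-table updates):

BEGIN;
-- The lock mode conflicts with itself, so the second session queues in
-- an ordinary lock wait instead of burning CPU in EvalPlanQual; plain
-- SELECTs are still allowed through.
LOCK TABLE test_destination IN SHARE ROW EXCLUSIVE MODE;
UPDATE test_destination d SET name = s.name
FROM test_source s WHERE d.id = s.id;
COMMIT;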

            regards, tom lane