Re: Exponential behaviour with UPDATE collisions? - Mailing list pgsql-admin

From Tom Lane
Subject Re: Exponential behaviour with UPDATE collisions?
Msg-id 6097.1085205852@sss.pgh.pa.us
In response to Exponential behaviour with UPDATE collisions?  ("David F. Skoll" <dfs@roaringpenguin.com>)
List pgsql-admin
"David F. Skoll" <dfs@roaringpenguin.com> writes:
> I would expect that if N clients are all trying to update the same row,
> that the N updates would be serialized and run in O(N) time.

Possibly so; but your test case isn't a pure illustration of an O(N)
situation.  You are measuring the time for the mainline to run 50
updates of a row (as fast as it can) while N other processes are
updating the same row as fast as they can.  The run will presumably
generate somewhere around N*50 dead rows with the same key.  Since
you're not doing any vacuums meanwhile, later updates will have to scan
through lots of dead rows looking for the one live row.  In other words,
there's an O(N^2) behavior here that would occur without any parallelism
at all, just from the number of dead rows you're leaving behind.
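The quadratic growth can be sketched with a back-of-the-envelope model (a toy illustration of the dead-row arithmetic, not PostgreSQL internals; the counts 550 and 275 are hypothetical totals, roughly N=10 writers plus the mainline doing 50 updates each):

```python
# Toy model: with MVCC and no VACUUM, each UPDATE leaves the old row
# version behind as a dead tuple, and a later UPDATE must step over
# every dead version with the same key before finding the live row.

def total_tuples_scanned(total_updates: int) -> int:
    """Sum of row versions examined across all updates, with no VACUUM."""
    # The k-th update (0-based) sees k dead versions plus 1 live row.
    return sum(k + 1 for k in range(total_updates))

work_full = total_tuples_scanned(550)  # all updates against one row
work_half = total_tuples_scanned(275)  # half as many updates

# Quadratic, not linear: doubling the update count roughly
# quadruples the total scanning work.
print(work_full, work_half, work_full / work_half)
```

Running it shows the full run does about 4x the tuple-scanning work of the half-size run, which is the O(N^2) behavior described above; vacuuming during the run would reclaim the dead versions and flatten the curve.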

> Is this the expected behaviour?  Is it considered a problem?

It's the price we pay for MVCC semantics.  If you have an application in
which the bottleneck is the number of updates that can be done on a
single row, you might be better off with a non-MVCC database.

            regards, tom lane
