Thread: concurrent updates problem

concurrent updates problem

From: mwaples@waples.net
I have, on a web application:
update threads set views = views + 1 where forum_id = 1 and thread_id = 1
just to record views for that page - stupid I know, but people want it.
The problem comes when I use Apache's ab to test it.

It just doesn't produce the correct result with 10 or so concurrent
connections. The views count ends up being completely out (no failed queries).
I read up on transaction isolation levels, using SELECT FOR UPDATE etc.,
but can't get it to work correctly - I thought SERIALIZABLE might do the
trick, but only a few queries were rolled back.
Can anyone point me in the right direction to solve this?

Re: concurrent updates problem

From: Gavin Sherry
You should not be experiencing this if you are using transactions.

Gavin


On Mon, 19 Mar 2001 mwaples@waples.net wrote:

> I have, on a web application:
> update threads set views = views + 1 where forum_id = 1 and thread_id = 1
> just to record views for that page - stupid I know, but people want it.
> The problem comes when I use Apache's ab to test it.
>
> It just doesn't produce the correct result with 10 or so concurrent
> connections. The views count ends up being completely out (no failed queries).
> I read up on transaction isolation levels, using SELECT FOR UPDATE etc.,
> but can't get it to work correctly - I thought SERIALIZABLE might do the
> trick, but only a few queries were rolled back.
> Can anyone point me in the right direction to solve this?


Re: concurrent updates problem

From: "Richard Huxton"
From: <mwaples@waples.net>

> I have, on a web application:
> update threads set views = views + 1 where forum_id = 1 and thread_id = 1
> just to record views for that page - stupid I know, but people want it.
> The problem comes when I use Apache's ab to test it.
>
> It just doesn't produce the correct result with 10 or so concurrent
> connections. The views count ends up being completely out (no failed queries).

So the views total is less than you'd expect.

> I read up on transaction isolation levels, using SELECT FOR UPDATE etc.,
> but can't get it to work correctly - I thought SERIALIZABLE might do the
> trick, but only a few queries were rolled back.
> Can anyone point me in the right direction to solve this?

Can't think what difference it should make how many connections you have.
The update you show will take place in its own transaction so should always
be consistent. The query should block if it needs to wait on another process
updating.

I can only think that you might not be checking for success/failure at some
point in the code. If you can boil it down to a small Perl/PHP script I can
try and duplicate it here - if not, post the relevant code and we'll see if
more eyeballs can't spot something.

- Richard Huxton


Re: concurrent updates problem

From: mwaples@waples.net
Richard Huxton wrote:
>
> From: <mwaples@waples.net>
>
> > I have, on a web application:
> > update threads set views = views + 1 where forum_id = 1 and thread_id = 1
> > just to record views for that page - stupid I know, but people want it.
> > The problem comes when I use Apache's ab to test it.
> >
> > It just doesn't produce the correct result with 10 or so concurrent
> > connections. The views count ends up being completely out (no failed queries).
>
> So the views total is less than you'd expect.

No, the views are actually more - for 20 accesses it can be 30-40 views.
It varies all the time; sometimes views is less.



> > I read up on transaction isolation levels, using SELECT FOR UPDATE etc.,
> > but can't get it to work correctly - I thought SERIALIZABLE might do the
> > trick, but only a few queries were rolled back.
> > Can anyone point me in the right direction to solve this?
>
> Can't think what difference it should make how many connections you have.
> The update you show will take place in its own transaction so should always
> be consistent. The query should block if it needs to wait on another process
> updating.
>
> I can only think that you might not be checking for success/failure at some
> point in the code.

I haven't been checking for success or failure - I thought I'd have to
test for failure - but the views were greater than they should be.

> If you can boil it down to a small Perl/PHP script I can
> try and duplicate it here - if not, post the relevant code and we'll see if
> more eyeballs can't spot something.
>
> - Richard Huxton

I'm using AOLserver - but I'll be writing it in PHP later too.
The code is below:

set sql "BEGIN WORK"
ns_db exec $db $sql

set sql "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE"
ns_db exec $db $sql

set sql "UPDATE threads set views = views + 1 WHERE forum_id = $forum_id
AND thread_id = $thread_id"
ns_db dml $db $sql

set sql "COMMIT WORK"
ns_db exec $db $sql


I've also tried just

set sql "UPDATE threads set views = views + 1 WHERE forum_id = $forum_id
AND thread_id = $thread_id"
ns_db dml $db $sql


They both sometimes produce correct results, sometimes under what the
result should be, and sometimes over it.
But 5 or 6 concurrent requests always produce the correct results; any
more than that and things go wrong.

Re: concurrent updates problem

From: Richard Huxton
mwaples@waples.net wrote:
>
> Richard Huxton wrote:
> >
> > From: <mwaples@waples.net>
> >
> > > I have, on a web application:
> > > update threads set views = views + 1 where forum_id = 1 and thread_id = 1

> I'm using AOLserver - but I'll be writing it in PHP later too.
> The code is below:

[snipped code that looked simple enough to a non tcl-er]
>
> They both sometimes produce correct results, sometimes under what the
> result should be, and sometimes over it.
> But 5 or 6 concurrent requests always produce the correct results; any
> more than that and things go wrong.

Your code looks fine (it's only sql="..." and exec $sql, so there can't
be anything wrong with it).

The only things I can think of are:
1. A bug with pg+AOLserver (unlikely - I think OpenACS uses that combo). The
only peculiarity I know of with AOLserver is that it's multi-threaded, and I
don't know if that could cause this.

2. You aren't actually fetching the number of pages you think you are.
It could be that calls are failing and being retried without ab telling
you. I'm not sure how to tell this without counting packets on the
wire or turning logging up to maximum on the webserver.

- Richard Huxton

Re: concurrent updates problem

From: Tom Lane
>> I have on a web application
>> update threads set views = views + 1 where forum_id = 1 and thread_id = 1

It should work to do

begin;
select * from threads where forum_id = 1 and thread_id = 1 FOR UPDATE;
update threads set views = views + 1 where forum_id = 1 and thread_id = 1;
end;

Note the FOR UPDATE to lock the row and the transaction wrapping to
define the scope of the lock.  Without this I'd expect you to lose
some counter increments as a result of two processes doing the UPDATE
at about the same time (both will read the old value of "views" and
increment it by one).

            regards, tom lane

Re: concurrent updates problem

From: Tom Lane
mwaples@waples.net writes:
> I'm using AOLserver - but I'll be writing it in PHP later too.
> The code is below:

> set sql "BEGIN WORK"
> ns_db exec $db $sql

> set sql "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE"
> ns_db exec $db $sql

> set sql "UPDATE threads set views = views + 1 WHERE forum_id = $forum_id
> AND thread_id = $thread_id"
> ns_db dml $db $sql

> set sql "COMMIT WORK"
> ns_db exec $db $sql

Are you sure that ns_db exec isn't "helpfully" doing a commit after each
command?
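
One way to check (just a suggestion, reusing the table from your example):
run these through the same ns_db calls, one statement per call, and see
whether the ROLLBACK actually undoes the increment.

BEGIN WORK;
UPDATE threads SET views = views + 1 WHERE forum_id = 1 AND thread_id = 1;
ROLLBACK;
-- if the driver is committing each call on its own, views will have gone
-- up anyway despite the ROLLBACK
SELECT views FROM threads WHERE forum_id = 1 AND thread_id = 1;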

> But 5 or 6 concurrent requests always produce the correct results, any
> more than that things go wrong.

Sounds like you might be running into some sort of number-of-concurrent-
connections limit in AOLserver.  I'm not sure what happens when there
are too many threads for the available number of database connections in
AOLserver, but this example suggests that it's not good.  Anyone know?

            regards, tom lane

Re: concurrent updates problem

From: mwaples@waples.net
[snip junk]
> [snipped code that looked simple enough to a non tcl-er]
> >
> > They both sometimes produce correct results, sometimes under what the
> > result should be, and sometimes over it.
> > But 5 or 6 concurrent requests always produce the correct results; any
> > more than that and things go wrong.
>
> Your code looks fine (it's only sql="..." and exec $sql, so there can't
> be anything wrong with it).
>
> The only things I can think of are:
> 1. A bug with pg+AOLserver (unlikely - I think OpenACS uses that combo). The
> only peculiarity I know of with AOLserver is that it's multi-threaded, and I
> don't know if that could cause this.
>
> 2. You aren't actually fetching the number of pages you think you are.
> It could be that calls are failing and being retried without ab telling
> you. I'm not sure how to tell this without counting packets on the
> wire or turning logging up to maximum on the webserver.
>
> - Richard Huxton

I'm guessing 2.
I just tried using httperf with --num-conn 50 --rate 50.
My counter went up by 50 every time - and httperf reported around 25
concurrent requests.

Is there a good way to test out concurrent queries, and see if they are
working as expected?
ab doesn't seem to be reporting right - although httperf gives the
results I expect.
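
For reference, the check I've been doing is just to compare the counter
before and after each run, roughly:

SELECT views FROM threads WHERE forum_id = 1 AND thread_id = 1;
-- ... run ab or httperf against the page ...
SELECT views FROM threads WHERE forum_id = 1 AND thread_id = 1;
-- the difference should equal the number of completed requests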

Re: concurrent updates problem

From: Jan Ploski
On Mon, Mar 19, 2001 at 11:12:01AM -0500, Tom Lane wrote:
> >> I have on a web application
> >> update threads set views = views + 1 where forum_id = 1 and thread_id = 1
>
> It should work to do
>
> begin;
> select * from threads where forum_id = 1 and thread_id = 1 FOR UPDATE;
> update threads set views = views + 1 where forum_id = 1 and thread_id = 1;
> end;
>
> Note the FOR UPDATE to lock the row and the transaction wrapping to
> define the scope of the lock.  Without this I'd expect you to lose
> some counter increments as a result of two processes doing the UPDATE
> at about the same time (both will read the old value of "views" and
> increment it by one).

But the one-line version without select for update is equivalent, right?
That is, a single UPDATE statement is atomic? According to my test
(with 3 Java threads making a total of 15000 updates that way), it is:
not a single update was lost in the process.
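
(Concretely, each thread just ran the one-line statement from this thread
in a loop, with no explicit transaction around it - roughly:

-- 3 threads, 5000 iterations each, each iteration a single statement:
UPDATE threads SET views = views + 1 WHERE forum_id = 1 AND thread_id = 1;
-- afterwards the counter had gone up by exactly 15000.)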

-JPL

Re: concurrent updates problem

From: Tom Lane
Jan Ploski <jpljpl@gmx.de> writes:
>> Note the FOR UPDATE to lock the row and the transaction wrapping to
>> define the scope of the lock.  Without this I'd expect you to lose
>> some counter increments as a result of two processes doing the UPDATE
>> at about the same time (both will read the old value of "views" and
>> increment it by one).

> But the one-line version without select for update is equivalent, right?
> That is, a single UPDATE statement is atomic?

No, not really.

What will happen in the case of a conflict is that the second process to
try to update the tuple will wait for the first to commit, then throw
away its computed update tuple and try to re-evaluate the query plan for
the tuple at issue.  I consider the code that does this (EvalPlanQual
in execMain.c) to be a completely untrustworthy hack.  It probably works
in really simple query plans like the given example, but heaven help you
if you've got joins, subplans, aggregates, nextval() calls, etc.  To say
nothing of rules or triggers, which will see none of this.

If you want something you can actually have some confidence in, I'd
recommend the FOR UPDATE approach.

            regards, tom lane

Re: concurrent updates problem

From: Jan Ploski
On Mon, Mar 19, 2001 at 02:00:03PM -0500, Tom Lane wrote:
> > But the one-line version without select for update is equivalent, right?
> > That is, a single UPDATE statement is atomic?
>
> No, not really.
>
> What will happen in the case of a conflict is that the second process to
> try to update the tuple will wait for the first to commit, then throw
> away its computed update tuple and try to re-evaluate the query plan for
> the tuple at issue.

Which leads to correct results (updates becoming serialized), doesn't it?

> I consider the code that does this (EvalPlanQual
> in execMain.c) to be a completely untrustworthy hack.

Oh no, don't tell us the grave truth ;)

> It probably works in really simple query plans like the given example, but
> heaven help you if you've got joins, subplans, aggregates, nextval() calls,
> etc.  To say nothing of rules or triggers, which will see none of this.

I have to admit not understanding this devil's work (not intended to offend
any of the developers!) in detail, but I have the feeling it should be
mentioned in some prominent place in the documentation. I recall from reading
the MySQL manual that they recommended the UPDATE SET x=x+1 type of query
as a simple way to avoid transactions in some contexts. Now it looks as though
with PostgreSQL the matter is more complicated, perhaps due to a different
policy of row-level locking? Which makes me wonder how many other types
of queries that one would think of as atomic at first have such hidden
gotchas. Can you recommend a rule that would guard against this kind of
error? Obviously, my naive "each SQL statement = atomic, multiple
statements = atomic if grouped into a transaction" does not suffice!

Thanks in advance for shedding some more light on it (I read the chapter
on multi-version concurrency, but it rather lacks examples that would
make the real-world dangers of losing consistency clear).

-JPL

RE: concurrent updates problem

From: "Hiroshi Inoue"
> -----Original Message-----
> From: Tom Lane
>
> >> I have on a web application
> >> update threads set views = views + 1 where forum_id = 1 and thread_id = 1
>
> It should work to do
>
> begin;
> select * from threads where forum_id = 1 and thread_id = 1 FOR UPDATE;
> update threads set views = views + 1 where forum_id = 1 and thread_id = 1;
> end;
>
> Note the FOR UPDATE to lock the row and the transaction wrapping to
> define the scope of the lock.

ISTM this is not a case where 'FOR UPDATE' is required.

> Without this I'd expect you to lose
> some counter increments as a result of two processes doing the UPDATE
> at about the same time (both will read the old value of "views" and
> increment it by one).
>

If this is true, it's really a serious bug and must be fixed.

regards,
Hiroshi Inoue


Re: concurrent updates problem

From: Richard Huxton
Tom Lane wrote:
>
> begin;
> select * from threads where forum_id = 1 and thread_id = 1 FOR UPDATE;
> update threads set views = views + 1 where forum_id = 1 and thread_id = 1;
> end;
>
> Note the FOR UPDATE to lock the row and the transaction wrapping to
> define the scope of the lock.  Without this I'd expect you to lose
> some counter increments as a result of two processes doing the UPDATE
> at about the same time (both will read the old value of "views" and
> increment it by one).

Tom - Surely the update runs in its own transaction and will only ever
update a previously consistent value? If there is another transaction
concurrently updating it should block until the results are available?

I can see what you're getting at but for the example query it shouldn't
be necessary if I understand this right.
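
For illustration, here is roughly what I'd expect from two psql sessions
(just a sketch of what I mean by blocking, using the same table):

-- session 1:
BEGIN;
UPDATE threads SET views = views + 1 WHERE forum_id = 1 AND thread_id = 1;

-- session 2, while session 1 is still open:
UPDATE threads SET views = views + 1 WHERE forum_id = 1 AND thread_id = 1;
-- ...blocks here, waiting on session 1's row lock...

-- session 1:
COMMIT;
-- session 2 now continues and increments the value session 1 just
-- committed, so neither increment is lost.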

- Richard Huxton
