Thread: Possible multiprocess lock/unlock-loop problem in Postgresql 9.2

Possible multiprocess lock/unlock-loop problem in Postgresql 9.2

From
"Yngve N. Pettersen"
Date: Sat, 04 Jan 2014 12:14:42 +0100
Hello all,

I am running a Postgresql 9.2 system (IIRC v9.2.4; I am about to upgrade
to 9.2.6) on a system with 32-cores, 256GB RAM, 64GB shared RAM for
postgresql. The applications I am running are written using Django
(currently v1.5)

For a while I have been observing what may be a special kind of deadlock,
or perhaps more precisely a lock/unlock loop, among a group of postgresql
processes performing similar, possibly overlapping queries.

I am wondering if this is a known issue? If so, are there any workarounds
aside from the ones I am currently using? I did not see anything like this
listed in the lock section of the Bug/Todo list.

I have so far observed this in two specific query scenarios, and the
problem seems to trigger with 10 or more processes performing similar
queries (with 8 processes I do not observe the problem). When this
happens, 10 of the processes (the rest will be in a waiting state) will
run at 100% CPU (one full core) without apparently being able to complete
the operation. IIRC I have seen this go on for hours (even for queries
that should complete in fractions of a second) before I killed the
postgres processes (they are so busy they do not detect that the
connection has been closed by killing the remote process).

In one of the cases, an UPDATE operation, the problem only appears to
happen during the first such updates, when all relevant entries are in the
same state, not later, so I have been able to avoid the problem by making
sure that initially there are no more than 8 active processes working on
the database. IIRC even just killing the processes and restarting the job
without any changes to process counts seems to avoid the problem, but that
does not avoid the initial startup problem. The queries are (supposed to)
start in a randomly staggered sequence several seconds apart, just to
avoid too many operations at the same time.

The second case is a multi-table SELECT, and again running 10 or more
processes seems to trigger the problem; I have had to reduce the number of
parallel queries to 8 to avoid the issue. In this case it might be
possible to randomize the sequence of queries, but that could still
accidentally trigger a similar 10+ parallel-query case, and all of the
queries will always access at least one of the tables. I am investigating
other alternatives for this case.

At present I have not had time to create a testcase for this, but I can
give some details of the UPDATE scenario.

The relevant fields in the table (a job queue) are the unique "id" field
and a "state" field with 3 values (Idle, Started, Finished).

The operation has two steps: first a retrieval of a group of ids for
currently idle entries, then an update of the records with those IDs to
state Started, assuming they are still idle (to avoid multiple allocations
of the same job), returning a list of the updated item ids.

The update query looks like this:

   UPDATE queue SET state = E'S' WHERE state = E'I' AND id IN (<list of
integers>) RETURNING id;

There is a BEGIN/COMMIT wrap around the operation, including the SELECT
query.
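
For concreteness, here is a minimal sketch of the whole sequence, assuming
a simplified "queue" table; the names, sizes and literal values are
illustrative only, not the real schema:

    CREATE TABLE queue (
        id     bigserial PRIMARY KEY,
        job_id integer NOT NULL,
        state  varchar(1) NOT NULL DEFAULT 'I'  -- 'I'dle, 'S'tarted, 'F'inished
    );

    BEGIN;

    -- step 1: the application fetches a batch of idle ids
    SELECT id FROM queue
     WHERE state = E'I' AND job_id = 1
     OFFSET 200 LIMIT 6000;

    -- step 2: the application sends the fetched ids back, re-checking the
    -- state so rows claimed by another worker in the meantime are skipped
    UPDATE queue SET state = E'S'
     WHERE state = E'I' AND id IN (201, 202, 203)  -- stand-in for the fetched ids
    RETURNING id;

    COMMIT;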

My guess is that the active processes get into a lock/unlock loop over the
"state" field because the lists of ids overlap and, for some reason, they
keep passing the work between them instead of completing the operation
according to some kind of priority order.

I have no real guess about what is causing the SELECT operations to loop
in that fashion, since there are no write operations that could cause the
kind of apparent loop I see in the UPDATE case. A wild guess could be
that the SELECT processes are passing around the work of creating a
common table join linking records from two (or more) tables, or just the
retrieval of records from a single table, since the operations overlap in
the tables and sub-selections they touch. If this guess has some bearing
on what is going on, something similar may also be happening in the
UPDATE case, e.g. an attempt to collect all the affected records where
the job just gets passed on to the next process due to some logic.

Any ideas about what is going on?

--
Using Opera's mail client: http://www.opera.com/mail/


Re: Possible multiprocess lock/unlock-loop problem in Postgresql 9.2

From
Andrew Sullivan
Date: Sat, 04 Jan 2014 16:23:42 +0100
On Sat, Jan 04, 2014 at 12:14:42PM +0100, Yngve N. Pettersen wrote:
> The update query looks like this:
>
>   UPDATE queue SET state = E'S' WHERE state = E'I' AND id IN (<list
> of integers>) RETURNING id;
>
> There is a BEGIN/COMMIT wrap around the operation, including the
> SELECT query.

Am I right that you're doing the SELECT, pulling that up into the
application, and then that becomes your <list of integers>?  If so, my
first advice is, "Don't do that," because you're making the
transaction run longer (which is probably part of the problem).
Instead, you probably want something like

    UPDATE . . . AND id IN (SELECT . . .LIMIT. . .) RETURNING id;

This won't solve your problem perfectly, though.
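
Spelled out a bit more (purely a sketch, reusing the table and column
names from your description):

    UPDATE queue SET state = E'S'
     WHERE state = E'I'
       AND id IN (SELECT id FROM queue
                   WHERE state = E'I'
                   LIMIT 6000)          -- the batch size is only an example
    RETURNING id;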

> My guess is that the active processes get into a lock/unlock loop
> regarding the "state" field because the list of ids overlap, and for
> some reason, instead of completing the operation according to some
> kind of priority order.

My guess is that you have a "lock inversion", yes, but it's hard to
guess what.  You want to look at the pg_locks view to figure out what's
blocking what.  It seems likely that your case is not strictly a
deadlock.  A deadlock is a case where transaction A holds a lock on
something that transaction B needs, while itself waiting for a lock on an
object that is held by transaction B.  Neither one could possibly
complete, and you get a deadlock detection.
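
For instance, a rough query like this (a sketch only) shows ungranted
locks alongside what every backend is currently running:

    SELECT l.pid, l.locktype, l.relation::regclass AS relation,
           l.mode, l.granted, a.query
      FROM pg_locks l
      JOIN pg_stat_activity a ON a.pid = l.pid
     ORDER BY l.granted, l.pid;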

There's an additional possibility that is suggested by your
description, and that is that it's not locks at all, but that you're
running into some kind of system issue.  Did you adjust the setting of
sort_mem?  It's a long shot, but it could be that if there are enough
sorts in the SELECT (which you don't show us), you're chewing through
a lot of sort_mem.  Remember that sort_mem is allocated _per sort_, so
it could be that a lot of these allocations fit fine in real memory if
only 8 processes are doing it; but if 10 do, you pass your threshold
for physical memory and start swapping.  I wouldn't expect high CPU
under that case, though, but high I/O.  So I think it's doubtful.

Best regards,

A

--
Andrew Sullivan
ajs@crankycanuck.ca


Re: Possible multiprocess lock/unlock-loop problem in Postgresql 9.2

From
"Yngve N. Pettersen"
Date: Sat, 04 Jan 2014 19:07:08 +0100
Hi,

On Sat, 04 Jan 2014 16:23:42 +0100, Andrew Sullivan <ajs@crankycanuck.ca>
wrote:

> On Sat, Jan 04, 2014 at 12:14:42PM +0100, Yngve N. Pettersen wrote:
>> The update query looks like this:
>>
>>   UPDATE queue SET state = E'S' WHERE state = E'I' AND id IN (<list
>> of integers>) RETURNING id;
>>
>> There is a BEGIN/COMMIT wrap around the operation, including the
>> SELECT query.
>
> Am I right that you're doing the SELECT, pulling that up into the
> application, and then that becomes your <list of integers>?  If so, my
> first advice is, "Don't do that," because you're making the
> transaction run longer (which is probably part of the problem).
> Instead, you probably want something like
>
>     UPDATE . . . AND id IN (SELECT . . .LIMIT. . .) RETURNING id;
>
> This won't solve your problem perfectly, though.

I tried that before, but ran into some issues, IIRC a looping problem
similar to this one where queries never ended. I split the query up in an
attempt to solve that problem.


>> My guess is that the active processes get into a lock/unlock loop
>> regarding the "state" field because the list of ids overlap, and for
>> some reason, instead of completing the operation according to some
>> kind of priority order.
>
> My guess is that you have a "lock inversion", yes, but it's hard to
> guess what.  You want to look at the pg_locks view to figure what's
> blocking what.  It seems likely that your case is not strictly a
> deadlock.  A deadlock is a case where transaction A has a lock on
> something that transaction B needs, and needs to wait for a lock on an
> object that is locked by transaction B.  Neither one could possibly
> complete, and you get a deadlock detection.

That is why I called it a (possible) multiprocess lock/unlock loop, not a
deadlock.

It looks to me like one process starts looking at a task, then realizes
that this can be, or is being, handled by another process, and passes the
baton, and that this happens so quickly that the processes keep the CPU at
100%.

> There's an additional possibility that is suggested by your
> description, and that is that it's not locks at all, but that you're
> running into some kind of system issue.  Did you adjust the setting of
> sort_mem?  It's a long shot, but it could be that if there are enough
> sorts in the SELECT (which you don't show us), you're chewing through

In the select/update case there is no sorting in the query; there is an
offset/limit clause though, and the number of records retrieved is
currently restricted to <10000 per query (out of 20 million in the active
subset).

     SELECT id from queue where state = E'I' and job_id = <integer> offset
<random 200..150000> limit <1-6000>

The offset is there to keep the processes from treading on each other's
toes by selecting the same records, in case the order is predictable.

There are indexes on several fields, including conditional indexes on some
state values in combination with job_id.
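
Roughly of this form (the index name here is illustrative):

    CREATE INDEX queue_job_idle ON queue (job_id) WHERE state = E'I';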

As I recall, there is no sorting in the SELECT case either, although there
is some filtering on some of the tables to restrict what is being returned.

> a lot of sort_mem.  Remember that sort_mem is allocated _per sort_, so
> it could be that a lot of these allocations fit fine in real memory if
> only 8 processes are doing it; but if 10 do, you pass your threshold
> for physical memory and start swapping.  I wouldn't expect high CPU
> under that case, though, but high I/O.  So I think it's doubtful.

However, in the UPDATE case, the looping processes are all UPDATE queries,
no SELECTs involved.

There is no swap on the system (I don't want swapping, precisely because
it then slows down the system), but there is lots of RAM, with 180+GB
available.

I don't have the config file handy at the moment, but as I recall, while I
did adjust the sort_mem variable, it was not set high enough (IIRC <=512MB)
to consume that much memory across 10 processes even with a lot of sorts. I
have not seen any peaks in application memory during the loop events as I
recall.

Another thing that IMO counts against this is that in the UPDATE case, the
system seems quite able to handle 50-60 such queries at a time (inside
10-30 seconds), except the first time the queue is accessed after the job
list has been initialized (and after ANALYZE has been run on the table; the
table is analyzed 6+ times an hour).



--
Using Opera's mail client: http://www.opera.com/mail/


Re: Possible multiprocess lock/unlock-loop problem in Postgresql 9.2

From
Andrew Sullivan
Date: Sat, 04 Jan 2014 19:40:31 +0100
On Sat, Jan 04, 2014 at 07:07:08PM +0100, Yngve N. Pettersen wrote:
> I tried that before, but ran into some issues, IIRC a similar looping
> problem as this where queries never ended. I split it up in an attempt to
> solve that problem.

Pulling the data out into the application and sending it back in won't
improve things.  Exactly the same number of rows need to be visited,
but the way you have it now you have to marshall all the data and ship
it to the application too.  So it's automatically slower.  Indeed,
making it slower might have masked your problem.


> In the select/update case there is no sorting in the query; there is an
> offset/limit clause though, number of records retrieved are currently
> restricted to <10000 per query (out of 20 million in the active subset).
>
>     SELECT id from queue where state = E'I' and job_id = <integer> offset
> <random 200..150000> limit <1-6000>

This could be part of the problem.  Are the different threads working on
different job_ids, or is that the same job_id?  If you don't SORT that
query before the OFFSET, then the rows will come back in whatever
order the system likes.  If your random function isn't very good, you
could well be selecting the same rows for updating.  As you increase
the number of workers, the chances of collisions go up --
particularly if you're pulling 6000 things at a go.  Remember that the
other updating workers are affecting the order in which rows are going
to come back.
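
For instance (a sketch only), an ORDER BY before the OFFSET at least makes
the window deterministic, at the cost of scanning more rows:

    SELECT id FROM queue
     WHERE state = E'I' AND job_id = 1   -- illustrative job id and window
     ORDER BY id
     OFFSET 85103 LIMIT 6000;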

> However, in the UPDATE case, the looping processes are all UPDATE queries,
> no SELECTs involved.

But you said it's all in the same transaction scope.  The lock is at
the transaction scope.

Anyway, what I'd do is try to cause the condition and post the
pg_locks information.  When I've done this in the past, usually the
best thing to do is also to have query logs on for everything (don't
forget to log the pid!) so you can see what the other transaction
you're waiting on touched already.  You can usually find the inversion
that way.  Once you see it, it's always obvious what you've done, in
my experience (and completely mystifying before that, unfortunately).
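
Something along these lines in postgresql.conf gets you those logs (the
exact values are just a suggestion):

    log_line_prefix = '%m [%p] '   # %p puts the pid on every log line
    log_statement = 'all'
    log_lock_waits = on
    deadlock_timeout = '1s'        # also the threshold for logging lock waits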

A

--
Andrew Sullivan
ajs@crankycanuck.ca


Re: Possible multiprocess lock/unlock-loop problem in Postgresql 9.2

From
"Yngve N. Pettersen"
Date: Sat, 04 Jan 2014 20:06:19 +0100
On Sat, 04 Jan 2014 19:40:31 +0100, Andrew Sullivan <ajs@crankycanuck.ca>
wrote:

> On Sat, Jan 04, 2014 at 07:07:08PM +0100, Yngve N. Pettersen wrote:
>> I tried that before, but ran into some issues, IIRC a similar looping
>> problem as this where queries never ended. I split it up in an attempt
>> to
>> solve that problem.
>
> Pulling the data out into the application and sending it back in won't
> improve things.  Exactly the same number of rows need to be visited,
> but the way you have it now you have to marshall all the data and ship
> it to the application too.  So it's automatically slower.  Indeed,
> making it slower might have masked your problem.

Could be

>
>> In the select/update case there is no sorting in the query; there is an
>> offset/limit clause though, number of records retrieved are currently
>> restricted to <10000 per query (out of 20 million in the active subset).
>>
>>     SELECT id from queue where state = E'I' and job_id = <integer>
>> offset
>> <random 200..150000> limit <1-6000>
>
> This could be part of problem.  Are the different threads working on
> different job_ids, or is that the same job_id?  If you don't SORT that

Same job_id, at least in the current system.

> query before the OFFSET, then the rows will come back in whatever
> order the system likes.

I suspect that a sort operation on (currently) 20+ million rows for every
query that fetches just 6000 entries (the previous version used 1500) would
cause quite a bit more slowness than either splitting the query into two
operations or the occasional collision does, because each process would
have to load all that information (even if it is cached).

>> However, in the UPDATE case, the looping processes are all UPDATE
>> queries,
>> no SELECTs involved.
>
> But you said it's all in the same transaction scope.  The lock is a
> the transaction scope.

But the statements it locks/loops on are only UPDATE statements, also in
the processes that are waiting.

> Anyway, what I'd do is try to cause the condition and post the
> pg_locks information.  When I've done this in the past, usually the
> best thing to do is also to have query logs on for everything (don't
> forget to log the pid!) so you can see what the other transaction
> you're waiting on touched already.  You can usually find the inversion
> that way.  Once you see it, it's always obvious what you've done, in
> my experience (and completely mystifying before that, unfortunately).

Will take a look in a few days, probably midweek.


--
Using Opera's mail client: http://www.opera.com/mail/


Re: Possible multiprocess lock/unlock-loop problem in Postgresql 9.2

From
"Yngve N. Pettersen"
Date: Sat, 08 Feb 2014 15:57:10 +0100
Hi again,

Sorry about the delay, but an unexpected 3-week trip, combined with not
being able to fix the system's router (whose configuration had become
corrupted before I left), meant that I could not perform any testing until
this week, after the router had been fixed and reinstalled.

I just did a test using expanded logging of queries and duration, and
actually saw this happen with just 4 (four) processes, not the 10 I have
seen before.

The relevant parts of the log, as well as an EXPLAIN and a table info
dump, are attached in a zip file; the large lists of 6000 to-be-updated IDs
in the UPDATE commands have been removed for clarity (the sequences were
not overlapping, each was within a range of 7500 IDs, and each was at a
numerical distance of at least 80000 from the other updates).

Background: Immediately before this operation the queue had been set up
with the command

     INSERT INTO probedata2_probequeue (part_of_run_id, server_id, state)
     SELECT '334' AS part_of_run_id, server_id, E'I' AS state FROM
probedata2_preparedqueueitem
     WHERE part_of_queue_id = '2'

followed by a COMMIT, a process taking ~15 minutes to copy a
23-million-entry table into the queue. After this the total number of
entries in the target table is 70 million, distributed across 3 runs.

Some SELECT count and ANALYZE operations have probably also been performed
after the initialization operation, before the worker processes started.

Shortly afterward (within a couple of minutes), four processes initiated
the SELECT and UPDATE sequence I outlined earlier; each operation started
between 1 and 10 seconds after one of the others, with 20 seconds from the
first command to the last.

SELECT id FROM probedata2_probequeue
WHERE state = E'I'  AND part_of_run_id = 334
LIMIT 6000 OFFSET 85103

UPDATE probedata2_probequeue SET "state"=E'S'
WHERE "state" = E'I'  AND id IN ( .....)
RETURNING id

The SELECT operations took 21-122 ms to complete, while the UPDATEs took
1093000-1120000 ms (~18 minutes). During this time the processes were
running at 100% CPU. With more processes involved earlier I recall seeing
even longer execution times for the UPDATEs before I killed the processes
(this is the first time I have seen this kind of situation resolve without
killing the processes).

For reference, once these 4 commands had completed (the system is set up
to wait until it sees task-completed messages from the first processes that
were started before starting new ones), the next sequence of these commands
took 122 ms and 107 ms, respectively, and the one after that took 50 ms and
108 ms.

Any suggestions for where to investigate further?

I am considering setting up a small example to see if I can reproduce, but
have not had time to do so yet.

On Sat, 04 Jan 2014 20:06:19 +0100, Yngve N. Pettersen
<yngve@spec-work.net> wrote:

> On Sat, 04 Jan 2014 19:40:31 +0100, Andrew Sullivan
> <ajs@crankycanuck.ca> wrote:
>
>> On Sat, Jan 04, 2014 at 07:07:08PM +0100, Yngve N. Pettersen wrote:
>>> I tried that before, but ran into some issues, IIRC a similar looping
>>> problem as this where queries never ended. I split it up in an attempt
>>> to
>>> solve that problem.
>>
>> Pulling the data out into the application and sending it back in won't
>> improve things.  Exactly the same number of rows need to be visited,
>> but the way you have it now you have to marshall all the data and ship
>> it to the application too.  So it's automatically slower.  Indeed,
>> making it slower might have masked your problem.
>
> Could be
>
>>
>>> In the select/update case there is no sorting in the query; there is an
>>> offset/limit clause though, number of records retrieved are currently
>>> restricted to <10000 per query (out of 20 million in the active
>>> subset).
>>>
>>>     SELECT id from queue where state = E'I' and job_id = <integer>
>>> offset
>>> <random 200..150000> limit <1-6000>
>>
>> This could be part of problem.  Are the different threads working on
>> different job_ids, or is that the same job_id?  If you don't SORT that
>
> Same job_id, at least in the current system.
>
>> query before the OFFSET, then the rows will come back in whatever
>> order the system likes.
>
> I suspect that a sort operation on (currently) 20+ million rows for
> every query for just 6000 (previous version was 1500 entries) would
> cause quite a bit more slowness than breaking up the query in two
> operations, or the risk of collisions would, because each process would
> have to load all that information (even if it is cached).
>
>>> However, in the UPDATE case, the looping processes are all UPDATE
>>> queries,
>>> no SELECTs involved.
>>
>> But you said it's all in the same transaction scope.  The lock is a
>> the transaction scope.
>
> But the statement it locks/loops on are only UPDATE statements, also in
> the processes that are waiting.
>
>> Anyway, what I'd do is try to cause the condition and post the
>> pg_locks information.  When I've done this in the past, usually the
>> best thing to do is also to have query logs on for everything (don't
>> forget to log the pid!) so you can see what the other transaction
>> you're waiting on touched already.  You can usually find the inversion
>> that way.  Once you see it, it's always obvious what you've done, in
>> my experience (and completely mystifying before that, unfortunately).
>
> Will take a look in a few days, probably midweek.
>
>


--
Sincerely,
Yngve N. Pettersen

Using Opera's mail client: http://www.opera.com/mail/


Re: Possible multiprocess lock/unlock-loop problem in Postgresql 9.2

From
"Yngve N. Pettersen"
Date: Mon, 10 Mar 2014 02:41:44 +0100
Hi again,

I have now had time to do further research on this issue. I have been able
to produce a script (available on request) that reproduces the problem,
even in tables as small as 100 items and using a single thread, and as a
result I may have located the area causing the problem: a conditional
(partial) index.

As mentioned in the attachment to my previous email, the table that is
having the problem looks like this:

                                       Table "public.probedata2_probequeue"
        Column     |         Type         |
Modifiers
----------------+----------------------+-------------------------------------
    id             | bigint               | not null default
nextval('probedata2_probequeue_id_seq'::regclass)
    part_of_run_id | integer              | not null
    server_id      | integer              | not null
    state          | character varying(1) |
Indexes:
       "probedata2_probequeue_pkey" PRIMARY KEY, btree (id)
       "run_server" UNIQUE CONSTRAINT, btree (part_of_run_id, server_id)
       "probedata2_probequeue_finished" btree (id) WHERE state::text =
'F'::text
       "probedata2_probequeue_run_idle" btree (part_of_run_id) WHERE
state::text = 'I'::text
       "probedata2_probequeue_started" btree (part_of_run_id) WHERE
state::text = 'S'::text
Foreign-key constraints:
       "probedata2_probequeue_part_of_run_id_fkey" FOREIGN KEY
(part_of_run_id) REFERENCES probedata2_proberun(id) DEFERRABLE INITIALLY
DEFERRED
       "probedata2_probequeue_server_id_fkey" FOREIGN KEY (server_id)
REFERENCES probedata2_server(id) DEFERRABLE INITIALLY DEFERRED


In my test database I have been running tests without the
"probedata2_probequeue_run_idle"-equivalent and the other conditional
indexes.

Without the "run_idle"-index the queries (fetch idle candidates, mark as
started, fetch the records) complete in less than a second (<0.9 seconds),
as expected.

*With* the index, the time to complete that operation increases by a
factor ranging from 10-30 for small sequences and a single thread, to 1000
for large sequences (23 million entries, 4-10 threads), taking up to 20
minutes to complete an update of 6000 rows for each thread and running the
process at 100% CPU the whole time.

The purpose of the index is to provide quick access to the idle items for
a given job, both records and counts. Normally there will be just a single
active job in the table, and at the time the problem occurs all entries for
the job will be in the index.

As I mentioned last time, the problematic command is the UPDATE command

       UPDATE probedata2_probequeue SET "state"=E'S'
       WHERE "state" = E'I'  AND id IN ( .....)
       RETURNING id

and I have confirmed that with the built-in Django debug query information
in my test script.
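
For anyone who wants to poke at this without the script, here is a rough
SQL-only outline of what it does (not the actual Django script; the table
name, sizes and ids are illustrative):

    CREATE TABLE probequeue (
        id             bigserial PRIMARY KEY,
        part_of_run_id integer NOT NULL,
        state          varchar(1)
    );
    CREATE INDEX probequeue_run_idle ON probequeue (part_of_run_id)
        WHERE state::text = 'I'::text;

    INSERT INTO probequeue (part_of_run_id, state)
    SELECT 1, 'I' FROM generate_series(1, 100);

    ANALYZE probequeue;

    BEGIN;
    -- fetch a batch of idle ids...
    SELECT id FROM probequeue
     WHERE state = E'I' AND part_of_run_id = 1
     LIMIT 10 OFFSET 20;

    -- ...and mark them as started (the ids below stand in for the SELECT's
    -- result)
    UPDATE probequeue SET state = E'S'
     WHERE state = E'I' AND id IN (21, 22, 23)
    RETURNING id;
    COMMIT;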


On Sat, 08 Feb 2014 15:57:10 +0100, Yngve N. Pettersen
<yngve@spec-work.net> wrote:

> Hi again,
>
> Sorry about the delay, but an unexpected 3 week trip combined with not
> being
> able to fix the system's router whose configuration had become corrupted
> before I left, meant that I could not perform any testing until this
> week,
> after the router had been fixed and reinstalled.
>
> I just did a test using expanded logging of queries and duration, and
> actually saw this happen with just 4 (four) processes, not the 10 I have
> seen before.
>
> The relevant parts of the log, as well as an EXPLAIN and table info dump
> are attached in a zipfile; the large parts consisting of 6000
> to-be-updated
> IDs in the UPDATE commands have been removed for clarity (the sequences
> were not overlapping, with a numerical distance of at least 80000 from
> the other updates, and each sequence was within a range of 7500 IDs)
>
> Background: Immediately before this operation the queue had been set up
> with the
> command
>
>      INSERT INTO probedata2_probequeue (part_of_run_id, server_id, state)
>      SELECT '334' AS part_of_run_id, server_id, E'I' AS state FROM
> probedata2_preparedqueueitem
>      WHERE part_of_queue_id = '2'
>
> followed by a COMMIT, a process taking ~15 minutes to copy a 23 million
> entry table into the queue. After this the total number of entries in the
> target table is 70 million, distributed across 3 runs.
>
> Some SELECT count and ANALYZE operations have probably also been
> performed
> after the initialization operation, before processes
>
> Shortly afterward (within a couple of minutes), four processes initiated
> the SELECT and UPDATE
> sequence I outlined earlier, each operation was performed between 1 and
> 10
> seconds after one of the others, 20 seconds from start to last command
> started.
>
> SELECT id FROM probedata2_probequeue
> WHERE state = E'I'  AND part_of_run_id = 334
> LIMIT 6000 OFFSET 85103
>
> UPDATE probedata2_probequeue SET "state"=E'S'
> WHERE "state" = E'I'  AND id IN ( .....)
> RETURNING id
>
> The SELECT operations took 21-122ms to complete, while the UPDATEs took
> 1093000-1120000 ms (~18 minutes, to complete). During this time the
> processes were running at 100% CPU. With more processes involved earlier
> I
> recall seeing even longer execution times for the UPDATEs, before I
> killed
> the processes (this is the first time I have seen this kind of situation
> be resolved without killing the processes).
>
> For reference, once these 4 commands had completed (the system is set up
> to wait until it sees task completed messages from the first processes
> that were started, before starting new ones), the next sequence of these
> commands took 122ms and 107ms, respectively, and the second took 50ms and
> 108ms.
>
> Any suggestions for where to investigate further?
>
> I am considering setting up a small example to see if I can reproduce,
> but
> have not had time to do so yet.
>
> On Sat, 04 Jan 2014 20:06:19 +0100, Yngve N. Pettersen
> <yngve@spec-work.net> wrote:
>
>> On Sat, 04 Jan 2014 19:40:31 +0100, Andrew Sullivan
>> <ajs@crankycanuck.ca> wrote:
>>
>>> On Sat, Jan 04, 2014 at 07:07:08PM +0100, Yngve N. Pettersen wrote:
>>>> I tried that before, but ran into some issues, IIRC a similar looping
>>>> problem as this where queries never ended. I split it up in an
>>>> attempt to
>>>> solve that problem.
>>>
>>> Pulling the data out into the application and sending it back in won't
>>> improve things.  Exactly the same number of rows need to be visited,
>>> but the way you have it now you have to marshall all the data and ship
>>> it to the application too.  So it's automatically slower.  Indeed,
>>> making it slower might have masked your problem.
>>
>> Could be
>>
>>>
>>>> In the select/update case there is no sorting in the query; there is
>>>> an
>>>> offset/limit clause though, number of records retrieved are currently
>>>> restricted to <10000 per query (out of 20 million in the active
>>>> subset).
>>>>
>>>>     SELECT id from queue where state = E'I' and job_id = <integer>
>>>> offset
>>>> <random 200..150000> limit <1-6000>
>>>
>>> This could be part of problem.  Are the different threads working on
>>> different job_ids, or is that the same job_id?  If you don't SORT that
>>
>> Same job_id, at least in the current system.
>>
>>> query before the OFFSET, then the rows will come back in whatever
>>> order the system likes.
>>
>> I suspect that a sort operation on (currently) 20+ million rows for
>> every query for just 6000 (previous version was 1500 entries) would
>> cause quite a bit more slowness than breaking up the query in two
>> operations, or the risk of collisions would, because each process would
>> have to load all that information (even if it is cached).
>>
>>>> However, in the UPDATE case, the looping processes are all UPDATE
>>>> queries,
>>>> no SELECTs involved.
>>>
>>> But you said it's all in the same transaction scope.  The lock is a
>>> the transaction scope.
>>
>> But the statement it locks/loops on are only UPDATE statements, also in
>> the processes that are waiting.
>>
>>> Anyway, what I'd do is try to cause the condition and post the
>>> pg_locks information.  When I've done this in the past, usually the
>>> best thing to do is also to have query logs on for everything (don't
>>> forget to log the pid!) so you can see what the other transaction
>>> you're waiting on touched already.  You can usually find the inversion
>>> that way.  Once you see it, it's always obvious what you've done, in
>>> my experience (and completely mystifying before that, unfortunately).
>>
>> Will take a look in a few days, probably midweek.
>>
>>
>
>


--
Sincerely,
Yngve N. Pettersen

Using Opera's mail client: http://www.opera.com/mail/


Re: Possible multiprocess lock/unlock-loop problem in Postgresql 9.2

From
"Yngve N. Pettersen"
Date: Sat, 15 Mar 2014 11:49:36 +0100
Hi again,

A further update, and it looks like I have finally been able to "fix" the
problem.

I used gdb to discover where the process is hanging.

As far as I can tell, the processes are looping inside

           ExecScan
calling   ExecQual
calling   ExecEvalScalarArrayOp

ExecScan was apparently called from ExecIndexScan, via ExecProcNode,
ExecModifyTable, ExecProcNode, and standard_ExecutorRun

This operation seems to take 19 minutes for the current test configuration
(23 million entries, updating 6000, 4-10 threads).


The EXPLAIN for the

       UPDATE queue_queueitems SET "state"=E'S' WHERE "state" = E'I'  AND id
       IN (...) RETURNING id

query seems to be

    Update on queue_queueitems  (cost= ... rows=1 width=22)
      ->  Index Scan using queue_queueitems_run_idle on queue_queueitems
          (cost= ... rows=1 width=22)
            Filter: (((state)::text = 'I'::text) AND (id = ANY ('{ .... }'::bigint[])))


Combined with the gdb info, this led me to consider the possibility that
the query planner selected the wrong index for the UPDATE query, so I tried
adding an index on (id) WHERE "state" = E'I', and this immediately reduced
the running time to a couple of seconds; removing the new index brought it
back to a 19-minute run time.
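
The added index was simply of this form (the name is illustrative):

    CREATE INDEX queue_queueitems_idle_id
        ON queue_queueitems (id)
     WHERE state = E'I';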

IMO the EXPLAIN for the above query should have shown the
"queue_queueitems_pkey" index being used for the primary filtering,
followed by a filter on "state".

It seems, although I have no idea if it is true, that the planner selects
the "run_idle" index because of its condition, and ignores the facts that
the indexed column is not related to the query and that the other condition
in the query is on the primary key. The resulting operation appears to be a
full sequential scan of the "run_idle" index's record list for each single
item in the "id" condition; that is, it looks like it executes 6000 * 23
million operations, not 6000 record retrievals, or a single filter of 6000
ids against a set of 23 million rows.

I am also starting to suspect that the reason the production system starts
speeding up after the first group of queries is that the repeated ANALYZE
operations on that table every few minutes eventually make the planner stop
using the run_idle index for that kind of query, favoring the primary key
index instead.


                                    Table "public.queue_queueitems"
     Column   |         Type         |                           Modifiers
-----------+----------------------+---------------------------------------------------------------
    id        | bigint               | not null default
nextval('queue_queueitems_id_seq'::regclass)
    group_id  | integer              | not null
    target_id | integer              | not null
    state     | character varying(1) |
Indexes:
       "queue_queueitems_pkey" PRIMARY KEY, btree (id)
       "queue_queueitems_group_id" btree (group_id)
       "queue_queueitems_run_idle" btree (group_id) WHERE state::text =
'I'::text
       "queue_queueitems_target_id" btree (target_id)
Foreign-key constraints:
       "queue_queueitems_group_id_fkey" FOREIGN KEY (group_id) REFERENCES
queue_queuerun(id) DEFERRABLE INITIALLY DEFERRED
       "queue_queueitems_target_id_fkey" FOREIGN KEY (target_id) REFERENCES
queue_queuetarget(id) DEFERRABLE INITIALLY DEFERRED



On Mon, 10 Mar 2014 02:41:44 +0100, Yngve N. Pettersen
<yngve@spec-work.net> wrote:

>
> Hi again,
>
> I have now had time to do further research about this issue. I have been
> able to produce a script (available on request) that reproduces the
> problem, even in tables as small as 100 items and using a single thread,
> and as a result may have located an area that may cause the problem: A
> conditional index.
>
> As mentioned in the attachment to my previous email the table which is
> having the problem look like this:
>
>                                        Table
> "public.probedata2_probequeue"
>         Column     |         Type         |
> Modifiers
> ----------------+----------------------+-------------------------------------
>     id             | bigint               | not null default
> nextval('probedata2_probequeue_id_seq'::regclass)
>     part_of_run_id | integer              | not null
>     server_id      | integer              | not null
>     state          | character varying(1) |
> Indexes:
>        "probedata2_probequeue_pkey" PRIMARY KEY, btree (id)
>        "run_server" UNIQUE CONSTRAINT, btree (part_of_run_id, server_id)
>        "probedata2_probequeue_finished" btree (id) WHERE state::text =
> 'F'::text
>        "probedata2_probequeue_run_idle" btree (part_of_run_id) WHERE
> state::text = 'I'::text
>        "probedata2_probequeue_started" btree (part_of_run_id) WHERE
> state::text = 'S'::text
> Foreign-key constraints:
>        "probedata2_probequeue_part_of_run_id_fkey" FOREIGN KEY
> (part_of_run_id) REFERENCES probedata2_proberun(id) DEFERRABLE INITIALLY
> DEFERRED
>        "probedata2_probequeue_server_id_fkey" FOREIGN KEY (server_id)
> REFERENCES probedata2_server(id) DEFERRABLE INITIALLY DEFERRED
>
>
> In my test database I have been running tests without the
> "probedata2_probequeue_run_idle"-equivalent and the other conditional
> indexes.
>
> Without the "run_idle"-index the queries (fetch idle candidates, mark as
> started, fetch the records) complete in less than a second (<0.9
> seconds),
> as expected.
>
> *With* the index, the time to complete that operation increases by a
> factor ranging from 10-30 times for small sequences and a single thread,
> to 1000 times for large sequences (23 million, 4-10 threads), taking up
> to 20
> minutes to complete an update of 6000 rows for each thread, running the
> process at 100% CPU the whole time.
>
> The purpose of the index is to provide quick access to the idle items for
> a given job, both records and counts. Normally, there will currently be
> just a single active job in the table, and at the time the problem is
> occurring all entries for the job will be in the index.
>
> As I mentioned last time, the problematic command is be the UPDATE
> command
>
>        UPDATE probedata2_probequeue SET "state"=E'S'
>        WHERE "state" = E'I'  AND id IN ( .....)
>        RETURNING id
>
> and I have confirmed that with the built-in Django debug query
> information
> in my test script.
>
>
> On Sat, 08 Feb 2014 15:57:10 +0100, Yngve N. Pettersen
> <yngve@spec-work.net> wrote:
>
>> Hi again,
>>
>> Sorry about the delay, but an unexpected 3 week trip combined with not
>> being
>> able to fix the system's router whose configuration had become corrupted
>> before I left, meant that I could not perform any testing until this
>> week,
>> after the router had been fixed and reinstalled.
>>
>> I just did a test using expanded logging of queries and duration, and
>> actually saw this happen with just 4 (four) processes, not the 10 I have
>> seen before.
>>
>> The relevant parts of the log, as well as an EXPLAIN and table info dump
>> are attached in a zipfile; the large parts consisting of 6000
>> to-be-updated
>> IDs in the UPDATE commands have been removed for clarity (the sequences
>> were not overlapping, with a numerical distance of at least 80000 from
>> the other updates, and each sequence was within a range of 7500 IDs)
>>
>> Background: Immediately before this operation the queue had been set up
>> with the
>> command
>>
>>      INSERT INTO probedata2_probequeue (part_of_run_id, server_id,
>> state)
>>      SELECT '334' AS part_of_run_id, server_id, E'I' AS state FROM
>> probedata2_preparedqueueitem
>>      WHERE part_of_queue_id = '2'
>>
>> followed by a COMMIT, a process taking ~15 minutes to copy a 23 million
>> entry table into the queue. After this the total number of entries in
>> the
>> target table is 70 million, distributed across 3 runs.
>>
>> Some SELECT count and ANALYZE operations have probably also been
>> performed
>> after the initialization operation, before processes
>>
>> Shortly afterward (within a couple of minutes), four processes
>> initiated the SELECT and UPDATE
>> sequence I outlined earlier, each operation was performed between 1 and
>> 10
>> seconds after one of the others, 20 seconds from start to last command
>> started.
>>
>> SELECT id FROM probedata2_probequeue
>> WHERE state = E'I'  AND part_of_run_id = 334
>> LIMIT 6000 OFFSET 85103
>>
>> UPDATE probedata2_probequeue SET "state"=E'S'
>> WHERE "state" = E'I'  AND id IN ( .....)
>> RETURNING id
>>
>> The SELECT operations took 21-122ms to complete, while the UPDATEs took
>> 1093000-1120000 ms (~18 minutes, to complete). During this time the
>> processes were running at 100% CPU. With more processes involved
>> earlier I
>> recall seeing even longer execution times for the UPDATEs, before I
>> killed
>> the processes (this is the first time I have seen this kind of situation
>> be resolved without killing the processes).
>>
>> For reference, once these 4 commands had completed (the system is set up
>> to wait until it sees task completed messages from the first processes
>> that were started, before starting new ones), the next sequence of these
>> commands took 122ms and 107ms, respectively, and the second took 50ms
>> and
>> 108ms.
>>
>> Any suggestions for where to investigate further?
>>
>> I am considering setting up a small example to see if I can reproduce,
>> but
>> have not had time to do so yet.
>>
>> On Sat, 04 Jan 2014 20:06:19 +0100, Yngve N. Pettersen
>> <yngve@spec-work.net> wrote:
>>
>>> On Sat, 04 Jan 2014 19:40:31 +0100, Andrew Sullivan
>>> <ajs@crankycanuck.ca> wrote:
>>>
>>>> On Sat, Jan 04, 2014 at 07:07:08PM +0100, Yngve N. Pettersen wrote:
>>>>> I tried that before, but ran into some issues, IIRC a similar looping
>>>>> problem as this where queries never ended. I split it up in an
>>>>> attempt to
>>>>> solve that problem.
>>>>
>>>> Pulling the data out into the application and sending it back in won't
>>>> improve things.  Exactly the same number of rows need to be visited,
>>>> but the way you have it now you have to marshall all the data and ship
>>>> it to the application too.  So it's automatically slower.  Indeed,
>>>> making it slower might have masked your problem.
>>>
>>> Could be
>>>
>>>>
>>>>> In the select/update case there is no sorting in the query; there is
>>>>> an
>>>>> offset/limit clause though, number of records retrieved are currently
>>>>> restricted to <10000 per query (out of 20 million in the active
>>>>> subset).
>>>>>
>>>>>     SELECT id from queue where state = E'I' and job_id = <integer>
>>>>> offset
>>>>> <random 200..150000> limit <1-6000>
>>>>
>>>> This could be part of problem.  Are the different threads working on
>>>> different job_ids, or is that the same job_id?  If you don't SORT that
>>>
>>> Same job_id, at least in the current system.
>>>
>>>> query before the OFFSET, then the rows will come back in whatever
>>>> order the system likes.
>>>
>>> I suspect that a sort operation on (currently) 20+ million rows for
>>> every query for just 6000 (previous version was 1500 entries) would
>>> cause quite a bit more slowness than breaking up the query in two
>>> operations, or the risk of collisions would, because each process
>>> would have to load all that information (even if it is cached).
>>>
>>>>> However, in the UPDATE case, the looping processes are all UPDATE
>>>>> queries,
>>>>> no SELECTs involved.
>>>>
>>>> But you said it's all in the same transaction scope.  The lock is a
>>>> the transaction scope.
>>>
>>> But the statement it locks/loops on are only UPDATE statements, also
>>> in the processes that are waiting.
>>>
>>>> Anyway, what I'd do is try to cause the condition and post the
>>>> pg_locks information.  When I've done this in the past, usually the
>>>> best thing to do is also to have query logs on for everything (don't
>>>> forget to log the pid!) so you can see what the other transaction
>>>> you're waiting on touched already.  You can usually find the inversion
>>>> that way.  Once you see it, it's always obvious what you've done, in
>>>> my experience (and completely mystifying before that, unfortunately).
>>>
>>> Will take a look in a few days, probably midweek.
>>>
>>>
>>
>>
>
>


--
Sincerely,
Yngve N. Pettersen

Using Opera's mail client: http://www.opera.com/mail/


Re: Possible multiprocess lock/unlock-loop problem in Postgresql 9.2

From
"Yngve N. Pettersen"
Date:
Hmmm, just tested with the extra conditional indexes in the production
system, and it still took 19 minutes for the first group of queries (23
million entries, 6000 record updates, 8 processes). Afterwards, there were
no such delays.

Munin reports that during those 19 minutes there were, on average, about
1.4 million index-fetched tuples per second. Assuming the ANALYZE
operations are not included in those numbers, that is roughly 1.6 billion
tuples in total, which seems to indicate that each of the 8 processes
accessed around 200 million tuples in order to update 6000 rows (roughly
36000 tuples for each row; depending on what is counted, I would have
expected this to be in the range of 1 to 100 tuples per updated row).
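
A per-index breakdown of that activity could be pulled with something like
this (a sketch only):

    SELECT indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
      FROM pg_stat_user_indexes
     WHERE relname = 'probedata2_probequeue'
     ORDER BY idx_tup_read DESC;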

Now that the system has spun up to 200+ processes, looking up existing
data and adding new data, it is accessing about 1.6 million index-fetched
tuples per second. IOW, 8 processes doing what should have been a simple
update almost matched the activity of 200+ processes (which are not yet
running at full power).

Any suggestions for where to look?


On Sat, 15 Mar 2014 11:49:36 +0100, Yngve N. Pettersen
<yngve@spec-work.net> wrote:

> Hi again,
>
> A further update, and it looks like I have finally been able to "fix"
> the problem.
>
> I used gdb to discover where the process is hanging.
>
> As far as I can tell, the processes are looping inside
>
>            ExecScan
> calling   ExecQual
> calling   ExecEvalScalarArrayOp
>
> ExecScan was apparently called from ExecIndexScan, via ExecProcNode,
> ExecModifyTable, ExecProcNode, and standard_ExecutorRun
>
> This operation seems to take 19 minutes for the current test
> configuration
> (23 million entries, updating 6000, 4-10 threads).
>
>
> The explain for the
>
>        UPDATE queue_queueitems SET "state"=E'S' WHERE "state" = E'I'
> AND id
> IN (...) ) RETURNING id
>
> queries seem to be
>
>     Update on queue_queueitems  (cost= ... rows=1 width=22)
>       ->  Index Scan using queue_queueitems_run_idle on queue_queueitems
> (cost= .... rows=1 width=22)
>             Filter: (((state)::text = 'I'::text) AND (id = ANY ('{ ....
> }'::bigint[])))
>
>
> Combined with the GDB info this led me to consider the possibility that
> the query planner selected the wrong index for use with the UPDATE
> query, so I tried adding an index on (id) WHERE "state" = E'I' , and
> this immediately reduced the running time to a couple of seconds; remove
> this new index, and it was back to 19 minutes run time.
>
> IMO the EXPLAIN for the above query should have been using the
> "queue_queueitems_pkey" index for primary filtering, followed by a
> sequential filter based on the "state".
>
> It seems, although I have no idea if it is true, that the planner
> selects the "run_idle" index because of its condition, and ignores the
> facts the indexed column is not related to the query and that the other
> condition in the query is on the primary key. The resulting operation
> appears to be a full sequential scan of the "run_idle" index's record
> list for each single item in the "id" condition, that is it looks like
> it executes 6000*23 million operations, not 6000 record retrieval
> operations, or a filter of 6000 on a set of 23 million in a single
> operation.
>
> I am also starting to suspect that the reason the production systems
> starts speeding up after the first group of queries is that the
> continuous ANALYZE operations on that table every few minutes eventually
> downgrades using the run_idle index for that kind of query, favoring the
> primary key index instead.
>
>
>                                     Table "public.queue_queueitems"
>      Column   |         Type         |
> Modifiers
> -----------+----------------------+---------------------------------------------------------------
>     id        | bigint               | not null default
> nextval('queue_queueitems_id_seq'::regclass)
>     group_id  | integer              | not null
>     target_id | integer              | not null
>     state     | character varying(1) |
> Indexes:
>        "queue_queueitems_pkey" PRIMARY KEY, btree (id)
>        "queue_queueitems_group_id" btree (group_id)
>        "queue_queueitems_run_idle" btree (group_id) WHERE state::text =
> 'I'::text
>        "queue_queueitems_target_id" btree (target_id)
> Foreign-key constraints:
>        "queue_queueitems_group_id_fkey" FOREIGN KEY (group_id) REFERENCES
> queue_queuerun(id) DEFERRABLE INITIALLY DEFERRED
>        "queue_queueitems_target_id_fkey" FOREIGN KEY (target_id)
> REFERENCES
> queue_queuetarget(id) DEFERRABLE INITIALLY DEFERRED
>
>
>
> On Mon, 10 Mar 2014 02:41:44 +0100, Yngve N. Pettersen
> <yngve@spec-work.net> wrote:
>
>>
>> Hi again,
>>
>> I have now had time to do further research about this issue. I have been
>> able to produce a script (available on request) that reproduces the
>> problem, even in tables as small as 100 items and using a single thread,
>> and as a result may have located an area that may cause the problem: A
>> conditional index.
>>
>> As mentioned in the attachment to my previous email the table which is
>> having the problem look like this:
>>
>>                                        Table
>> "public.probedata2_probequeue"
>>         Column     |         Type         |
>> Modifiers
>> ----------------+----------------------+-------------------------------------
>>     id             | bigint               | not null default
>> nextval('probedata2_probequeue_id_seq'::regclass)
>>     part_of_run_id | integer              | not null
>>     server_id      | integer              | not null
>>     state          | character varying(1) |
>> Indexes:
>>        "probedata2_probequeue_pkey" PRIMARY KEY, btree (id)
>>        "run_server" UNIQUE CONSTRAINT, btree (part_of_run_id, server_id)
>>        "probedata2_probequeue_finished" btree (id) WHERE state::text =
>> 'F'::text
>>        "probedata2_probequeue_run_idle" btree (part_of_run_id) WHERE
>> state::text = 'I'::text
>>        "probedata2_probequeue_started" btree (part_of_run_id) WHERE
>> state::text = 'S'::text
>> Foreign-key constraints:
>>        "probedata2_probequeue_part_of_run_id_fkey" FOREIGN KEY
>> (part_of_run_id) REFERENCES probedata2_proberun(id) DEFERRABLE INITIALLY
>> DEFERRED
>>        "probedata2_probequeue_server_id_fkey" FOREIGN KEY (server_id)
>> REFERENCES probedata2_server(id) DEFERRABLE INITIALLY DEFERRED
>>
>>
>> In my test database I have been running tests without the
>> "probedata2_probequeue_run_idle"-equivalent and the other conditional
>> indexes.
>>
>> Without the "run_idle"-index the queries (fetch idle candidates, mark as
>> started, fetch the records) complete in less than a second (<0.9
>> seconds),
>> as expected.
>>
>> *With* the index, the time to complete that operation increases by a
>> factor ranging from 10-30 times for small sequences and a single thread,
>> to 1000 times for large sequences (23 million, 4-10 threads), taking up
>> to 20
>> minutes to complete an update of 6000 rows for each thread, running the
>> process at 100% CPU the whole time.
>>
>> The purpose of the index is to provide quick access to the idle items
>> for
>> a given job, both records and counts. Normally, there will currently be
>> just a single active job in the table, and at the time the problem is
>> occurring all entries for the job will be in the index.
>>
>> As I mentioned last time, the problematic command is be the UPDATE
>> command
>>
>>        UPDATE probedata2_probequeue SET "state"=E'S'
>>        WHERE "state" = E'I'  AND id IN ( .....)
>>        RETURNING id
>>
>> and I have confirmed that with the built-in Django debug query
>> information
>> in my test script.
>>
>>
>> On Sat, 08 Feb 2014 15:57:10 +0100, Yngve N. Pettersen
>> <yngve@spec-work.net> wrote:
>>
>>> Hi again,
>>>
>>> Sorry about the delay, but an unexpected 3 week trip combined with not
>>> being
>>> able to fix the system's router whose configuration had become
>>> corrupted
>>> before I left, meant that I could not perform any testing until this
>>> week,
>>> after the router had been fixed and reinstalled.
>>>
>>> I just did a test using expanded logging of queries and duration, and
>>> actually saw this happen with just 4 (four) processes, not the 10 I
>>> have
>>> seen before.
>>>
>>> The relevant parts of the log, as well as an EXPLAIN and table info
>>> dump
>>> are attached in a zipfile; the large parts consisting of 6000
>>> to-be-updated
>>> IDs in the UPDATE commands have been removed for clarity (the sequences
>>> were not overlapping, with a numerical distance of at least 80000 from
>>> the other updates, and each sequence was within a range of 7500 IDs)
>>>
>>> Background: Immediately before this operation the queue had been set
>>> up with the
>>> command
>>>
>>>      INSERT INTO probedata2_probequeue (part_of_run_id, server_id,
>>> state)
>>>      SELECT '334' AS part_of_run_id, server_id, E'I' AS state FROM
>>> probedata2_preparedqueueitem
>>>      WHERE part_of_queue_id = '2'
>>>
>>> followed by a COMMIT, a process taking ~15 minutes to copy a 23 million
>>> entry table into the queue. After this the total number of entries in
>>> the
>>> target table is 70 million, distributed across 3 runs.
>>>
>>> Some SELECT count and ANALYZE operations have probably also been
>>> performed
>>> after the initialization operation, before processes
>>>
>>> Shortly afterward (within a couple of minutes), four processes
>>> initiated the SELECT and UPDATE
>>> sequence I outlined earlier, each operation was performed between 1
>>> and 10
>>> seconds after one of the others, 20 seconds from start to last command
>>> started.
>>>
>>> SELECT id FROM probedata2_probequeue
>>> WHERE state = E'I'  AND part_of_run_id = 334
>>> LIMIT 6000 OFFSET 85103
>>>
>>> UPDATE probedata2_probequeue SET "state"=E'S'
>>> WHERE "state" = E'I'  AND id IN ( .....)
>>> RETURNING id
>>>
>>> The SELECT operations took 21-122ms to complete, while the UPDATEs took
>>> 1093000-1120000 ms (~18 minutes, to complete). During this time the
>>> processes were running at 100% CPU. With more processes involved
>>> earlier I
>>> recall seeing even longer execution times for the UPDATEs, before I
>>> killed
>>> the processes (this is the first time I have seen this kind of
>>> situation
>>> be resolved without killing the processes).
>>>
>>> For reference, once these 4 commands had completed (the system is set
>>> up
>>> to wait until it sees task completed messages from the first processes
>>> that were started, before starting new ones), the next sequence of
>>> these
>>> commands took 122ms and 107ms, respectively, and the second took 50ms
>>> and
>>> 108ms.
>>>
>>> Any suggestions for where to investigate further?
>>>
>>> I am considering setting up a small example to see if I can reproduce,
>>> but
>>> have not had time to do so yet.
>>>
>>> On Sat, 04 Jan 2014 20:06:19 +0100, Yngve N. Pettersen
>>> <yngve@spec-work.net> wrote:
>>>
>>>> On Sat, 04 Jan 2014 19:40:31 +0100, Andrew Sullivan
>>>> <ajs@crankycanuck.ca> wrote:
>>>>
>>>>> On Sat, Jan 04, 2014 at 07:07:08PM +0100, Yngve N. Pettersen wrote:
>>>>>> I tried that before, but ran into some issues, IIRC a similar
>>>>>> looping
>>>>>> problem as this where queries never ended. I split it up in an
>>>>>> attempt to
>>>>>> solve that problem.
>>>>>
>>>>> Pulling the data out into the application and sending it back in
>>>>> won't
>>>>> improve things.  Exactly the same number of rows need to be visited,
>>>>> but the way you have it now you have to marshall all the data and
>>>>> ship
>>>>> it to the application too.  So it's automatically slower.  Indeed,
>>>>> making it slower might have masked your problem.
>>>>
>>>> Could be
>>>>
>>>>>
>>>>>> In the select/update case there is no sorting in the query; there
>>>>>> is an
>>>>>> offset/limit clause though, number of records retrieved are
>>>>>> currently
>>>>>> restricted to <10000 per query (out of 20 million in the active
>>>>>> subset).
>>>>>>
>>>>>>     SELECT id from queue where state = E'I' and job_id = <integer>
>>>>>> offset
>>>>>> <random 200..150000> limit <1-6000>
>>>>>
>>>>> This could be part of problem.  Are the different threads working on
>>>>> different job_ids, or is that the same job_id?  If you don't SORT
>>>>> that
>>>>
>>>> Same job_id, at least in the current system.
>>>>
>>>>> query before the OFFSET, then the rows will come back in whatever
>>>>> order the system likes.
>>>>
>>>> I suspect that a sort operation on (currently) 20+ million rows for
>>>> every query for just 6000 (previous version was 1500 entries) would
>>>> cause quite a bit more slowness than breaking up the query in two
>>>> operations, or the risk of collisions would, because each process
>>>> would have to load all that information (even if it is cached).
>>>>
>>>>>> However, in the UPDATE case, the looping processes are all UPDATE
>>>>>> queries,
>>>>>> no SELECTs involved.
>>>>>
>>>>> But you said it's all in the same transaction scope.  The lock is a
>>>>> the transaction scope.
>>>>
>>>> But the statement it locks/loops on are only UPDATE statements, also
>>>> in the processes that are waiting.
>>>>
>>>>> Anyway, what I'd do is try to cause the condition and post the
>>>>> pg_locks information.  When I've done this in the past, usually the
>>>>> best thing to do is also to have query logs on for everything (don't
>>>>> forget to log the pid!) so you can see what the other transaction
>>>>> you're waiting on touched already.  You can usually find the
>>>>> inversion
>>>>> that way.  Once you see it, it's always obvious what you've done, in
>>>>> my experience (and completely mystifying before that, unfortunately).
>>>>
>>>> Will take a look in a few days, probably midweek.
>>>>
>>>>
>>>
>>>
>>
>>
>
>


--
Sincerely,
Yngve N. Pettersen

Using Opera's mail client: http://www.opera.com/mail/