Re: Performance problems deleting data

From: Tom Lane
Subject: Re: Performance problems deleting data
Msg-id: 26742.1204565113@sss.pgh.pa.us
In response to: Performance problems deleting data  (Rafael Martinez <r.m.guerrero@usit.uio.no>)
List: pgsql-performance
Rafael Martinez <r.m.guerrero@usit.uio.no> writes:
> manage=# EXPLAIN ANALYZE DELETE FROM module WHERE deviceid='7298';
>                         QUERY PLAN
> -------------------------------------------------------------------------
>  Nested Loop  (cost=0.00..14.63 rows=1 width=67) (actual time=2.365..2.365 rows=0 loops=1)
>    ->  Index Scan using alerthist_end_time_btree on alerthist  (cost=0.00..10.65 rows=1 width=67) (actual time=2.363..2.363 rows=0 loops=1)
>          Index Cond: (end_time = 'infinity'::timestamp without time zone)
>          Filter: ((((eventtypeid)::text = 'moduleState'::text) OR ((eventtypeid)::text = 'linkState'::text)) AND (7298 = deviceid))
>    ->  Index Scan using module_deviceid_key on module  (cost=0.00..3.96 rows=1 width=4) (never executed)
>          Index Cond: (deviceid = 7298)
>  Total runtime: 2.546 ms

>  Index Scan using module_deviceid_key on module  (cost=0.00..3.96 rows=1 width=6) (actual time=0.060..0.061 rows=1 loops=1)
>    Index Cond: (deviceid = 7298)
>  Trigger for constraint $1: time=3.422 calls=1
>  Trigger for constraint $1: time=0.603 calls=1
>  Total runtime: 2462558.813 ms
> (13 rows)
> -------------------------------------------------------------------------

> Any ideas why it is taking 2462558.813 ms to finish when the total time
> for the deletion is 2.546 ms + 3.422 ms + 0.603 ms?

That's just bizarre.  So far as I can see from the 8.1 EXPLAIN code,
the only place the extra time could be spent is in ExecutorStart,
ExecutorEnd, or the top level of ExecutorRun, none of which should
take any noticeable amount of runtime in a trivial query like this.

The only thing I can think of is that ExecutorStart would have been
where we'd acquire RowExclusiveLock on "module", while the previous
rule-generated query would only take AccessShareLock.  So if for
instance some other transaction had ShareLock (perhaps from CREATE
INDEX) and just sat a long time before committing, perhaps this
would be explainable.  I'm not too sure about that explanation
though because I think the parser should have already taken
RowExclusiveLock when it was doing parse analysis.
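For what it's worth, that scenario is easy to reproduce by hand if you
want to test the theory (the index name below is made up purely for
illustration):

    -- Session 1: CREATE INDEX takes ShareLock on "module" and
    -- holds it until the transaction ends
    BEGIN;
    CREATE INDEX module_test_idx ON module (deviceid);
    -- ... sits here without committing ...

    -- Session 2: the DELETE needs RowExclusiveLock, which conflicts
    -- with ShareLock, so it blocks until session 1 finishes
    DELETE FROM module WHERE deviceid = '7298';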

Is the problem repeatable?  Is the delay consistent?  What do
you see in pg_locks while it's delaying?  Also watch "vmstat 1"
output --- is it consuming CPU and/or I/O?
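If it does turn out to be a lock wait, a query along these lines
should show an ungranted lock on "module" (just a sketch against the
8.1 catalogs):

    SELECT pid, mode, granted, relation::regclass
    FROM pg_locks
    WHERE relation = 'module'::regclass;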

            regards, tom lane
