Re: Delete query takes exorbitant amount of time - Mailing list pgsql-performance

From: Stephan Szabo
Subject: Re: Delete query takes exorbitant amount of time
Date: 2005-03-27
Msg-id: 20050327070313.K29183@megazone.bigpanda.com
In response to: Re: Delete query takes exorbitant amount of time (Karim Nassar <karim.nassar@acm.org>)
Responses: Re: Delete query takes exorbitant amount of time (Simon Riggs <simon@2ndquadrant.com>)
List: pgsql-performance
On Sat, 26 Mar 2005, Karim Nassar wrote:

> On Sat, 2005-03-26 at 15:18 -0800, Stephan Szabo wrote:
> > On Sat, 26 Mar 2005, Karim Nassar wrote:
> >
> > > On Sat, 2005-03-26 at 07:55 -0800, Stephan Szabo wrote:
> > > > That seems like it should be okay, hmm, what does something like:
> > > >
> > > > PREPARE test(int) AS SELECT 1 from measurement where
> > > > id_int_sensor_meas_type = $1 FOR UPDATE;
> > > > EXPLAIN ANALYZE EXECUTE TEST(1);
> > > >
> > > > give you as the plan?
> > >
> > >                                                      QUERY PLAN
> > > ---------------------------------------------------------------------------------------------------------------------
> > >  Seq Scan on measurement  (cost=0.00..164559.16 rows=509478 width=6) (actual time=11608.402..11608.402 rows=0 loops=1)
> > >    Filter: (id_int_sensor_meas_type = $1)
> > >  Total runtime: 11608.441 ms
> > > (3 rows)
> >
> > Hmm, has measurement been analyzed recently?  You might want to see if
> > raising the statistics target on measurement.id_int_sensor_meas_type and
> > reanalyzing changes the estimated rows down from 500k.
>
> orfs=# ALTER TABLE measurement ALTER COLUMN id_int_sensor_meas_type SET STATISTICS 1000;
> ALTER TABLE
> orfs=# VACUUM FULL ANALYZE VERBOSE;
> <snip>
> INFO:  free space map: 52 relations, 13501 pages stored; 9760 total pages needed
> DETAIL:  Allocated FSM size: 1000 relations + 300000 pages = 1864 kB shared memory.
> VACUUM
> orfs=# PREPARE test(int) AS SELECT 1 from measurement where
> orfs-# id_int_sensor_meas_type = $1 FOR UPDATE;
> PREPARE
> orfs=# EXPLAIN ANALYZE EXECUTE TEST(1);
>                                                      QUERY PLAN
> ---------------------------------------------------------------------------------------------------------------------
>  Seq Scan on measurement  (cost=0.00..164559.16 rows=509478 width=6) (actual time=8948.452..8948.452 rows=0 loops=1)
>    Filter: (id_int_sensor_meas_type = $1)
>  Total runtime: 8948.494 ms
> (3 rows)
>
> orfs=# EXPLAIN ANALYZE EXECUTE TEST(1);
>                                                      QUERY PLAN
> ---------------------------------------------------------------------------------------------------------------------
>  Seq Scan on measurement  (cost=0.00..164559.16 rows=509478 width=6) (actual time=3956.616..3956.616 rows=0 loops=1)
>    Filter: (id_int_sensor_meas_type = $1)
>  Total runtime: 3956.662 ms
> (3 rows)
>
>
>
> Some improvement. Even better once it's cached. The row estimate didn't
> change. Is this the best I can expect? Are there any other optimizations
> I am missing?

I'm not sure, really. In general, though, running a seq scan over
measurement for each row removed from the referenced table doesn't seem
like a particularly good plan, especially if the average number of
referencing rows isn't on the order of 500k per value. I'm not sure what
else to look at next.
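
One more thing that might be worth checking, though this is only a guess
since we haven't seen the schema: whether measurement.id_int_sensor_meas_type
is indexed at all. The per-row FK check is essentially the prepared query
above, so without an index every delete on the referenced table pays for a
full scan of measurement. A sketch (the index name here is made up):

    -- Hypothetical: index the referencing column so the FK trigger's
    -- per-row lookup can use an index scan instead of a seq scan.
    CREATE INDEX measurement_idsmt_idx
        ON measurement (id_int_sensor_meas_type);
    ANALYZE measurement;

    -- Re-prepare, since the previously prepared statement's plan is cached.
    DEALLOCATE test;
    PREPARE test(int) AS
        SELECT 1 FROM measurement WHERE id_int_sensor_meas_type = $1 FOR UPDATE;
    EXPLAIN ANALYZE EXECUTE test(1);

If the planner still picks a seq scan, the ~500k row estimate is the likely
culprit: at that selectivity an index scan wouldn't look any cheaper.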

