Thread: query is taking longer time after a while

query is taking longer time after a while

From
tomrevam
Date:
I have a table with 5 million rows. 10 inserts and deletes are performed on
this table every second.
The table has indexes on the columns I use to query it, and the query
returns about 1,000 rows. Initially the query takes a very short time (on
the order of milliseconds); after a few hours it takes hundreds of
milliseconds, and after a few days it can take more than 10 seconds. When
this happens it also blocks all other operations on the database, and I see
very long times for all of them.

I thought this might be caused by the indexes not staying in memory, but I
increased shared_buffers to 0.5 GB and this didn't seem to help.
--
View this message in context: http://www.nabble.com/query-is-taking-longer-time-after-a-while-tp25661219p25661219.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


Re: query is taking longer time after a while

From
Brian Modra
Date:
2009/9/29 tomrevam <tomer@fabrix.tv>:
>
> I have a table with 5 million rows. 10 inserts and deletes are performed on
> this table every second.
> The table has indexes on the columns I use to query it, and the query is
> returning about a 1000 rows. Initially the query takes a very short time
> (order of miliseconds), after a few hours it takes hundreds of miliseconds,
> and after a few days it can take more than 10 seconds. When this happens it
> also blocks all other operations on the database and I see very long times
> for all of them.

When did you last run ANALYZE and re-create the indexes?
Is the table UPDATEd or DELETEd from, or only INSERTed into?
Is your DB auto-vacuuming?

I found that if you delete a lot of rows, the index still seems to contain
entries for the deleted rows; they are skipped automatically, but this
skipping takes time.
Also, if the index was created when the table was smaller, or when the
distribution of values was quite different, the statistics for the index
will be wrong for the current data, so it will be used inefficiently.
Tables that are frequently UPDATEd also become fragmented and need VACUUM.

I have a table from which I deleted 1/3 of its contents (32 million rows),
and an index search that would have included those deleted rows took an
age. Hours, in fact! So I had to first run ANALYZE on the table, create a
new index, then drop the old one and rename the new index to take its
place.

> I thought this may be caused by the indexes not remaining in the memory, but
> I increased the shared_buffers to 0.5 GB and this didn't seem to help.



--
Brian Modra   Land line: +27 23 5411 462
Mobile: +27 79 69 77 082
5 Jan Louw Str, Prince Albert, 6930
Postal: P.O. Box 2, Prince Albert 6930
South Africa
http://www.zwartberg.com/

Re: query is taking longer time after a while

From
tomrevam
Date:


Brian Modra-2 wrote:
>
> When did you last do an analyse and re-create indexes?
> Is the table UPDATEd to or DELETEd from, or just INSERTed ?
> Is your DB auto vacuuming?
>

My DB is auto-vacuuming all the time. The specific table I'm talking about
gets vacuumed at least every 2 hours (usually a little more frequently than
that).
Deletes are happening on the table at about the same rate as inserts (there
are also some updates).
--
View this message in context: http://www.nabble.com/query-is-taking-longer-time-after-a-while-tp25661219p25661758.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


Re: query is taking longer time after a while

From
Brian Modra
Date:
2009/9/29 tomrevam <tomer@fabrix.tv>:
>
>
>
> Brian Modra-2 wrote:
>>
>> When did you last do an analyse and re-create indexes?
>> Is the table UPDATEd to or DELETEd from, or just INSERTed ?
>> Is your DB auto vacuuming?
>>
>
> My DB is auto-vacuuming all the time. The specific table I'm talking about
> gets vacuumed at least every 2 hours (usually a little more frequently than
> that).
> Deletes are happening on the table at about the same rate as inserts (there
> are also some updates).

The index is quite likely in a poor state. You could try this:

analyze the table
create a new index (same parameters as the existing one)
drop the old index
rename the new index to the old index's name
repeat this for all indexes
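Spelled out with the table and index names the OP posts later in the thread
(the indexed column is assumed from those query plans, and the temporary
index name is hypothetical), the steps above might look like this:

```sql
-- Sketch of the swap described above; run once per index on the table.
ANALYZE session_allocation_info;

CREATE INDEX session_allocation_info_status_idx_new
    ON session_allocation_info (status);

DROP INDEX session_allocation_info_status_idx;

ALTER INDEX session_allocation_info_status_idx_new
    RENAME TO session_allocation_info_status_idx;
```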





Re: query is taking longer time after a while

From
Sam Mason
Date:
On Tue, Sep 29, 2009 at 02:25:52PM +0200, Brian Modra wrote:
> 2009/9/29 tomrevam <tomer@fabrix.tv>:
> > My DB is auto-vacuuming all the time. The specific table I'm talking about
> > gets vacuumed at least every 2 hours (usually a little more frequently than
> > that).
> > Deletes are happening on the table at about the same rate as inserts (there
> > are also some updates).
>
> The index quite likely is in a poor state.

Really? Plain vacuum should allow things to reach a steady state after a
while; doing a large delete will put things out of kilter, but that doesn't
seem to be the case here.  Vacuum full can also cause things to go amiss,
but if it's just regular vacuums then things should be OK.

What do you get out of VACUUM ANALYZE VERBOSE for this table?

> You could try this:
>
> analyse ....
> create index ... (same parameters as existing index)
> delete the old index.
> rename the new index to the same name as the old one
> repeat this for all indexes.

Why not just do:

  REINDEX TABLE yourbigtable;

No need to worry about rebuilding foreign key constraints or anything
like that then.

--
  Sam  http://samason.me.uk/

Re: query is taking longer time after a while

From
Brian Modra
Date:
2009/9/29 Sam Mason <sam@samason.me.uk>:
> On Tue, Sep 29, 2009 at 02:25:52PM +0200, Brian Modra wrote:
>> 2009/9/29 tomrevam <tomer@fabrix.tv>:
>> > My DB is auto-vacuuming all the time. The specific table I'm talking about
>> > gets vacuumed at least every 2 hours (usually a little more frequently than
>> > that).
>> > Deletes are happening on the table at about the same rate as inserts (there
>> > are also some updates).
>>
>> The index quite likely is in a poor state.
>
> Really? Plain vacuum should allow things to reach a steady state after
> a while, doing a large delete will put things out of kilter, but that
> doesn't sound to be the case here.  Vacuum full can also cause things to
> go amiss, but if it's just regular vacuums then things should be OK.

If there are a lot of deletes, then the index is likely in poor shape.

ANALYZE yourtable;

Then reindex (or create a new index followed by a drop of the old one and a
rename, if you want to keep the index online).

> What do you get out of vacuum analyse verbose? for this table?
>
>> You could try this:
>>
>> analyse ....
>> create index ... (same parameters as existing index)
>> delete the old index.
>> rename the new index to the same name as the old one
>> repeat this for all indexes.
>
> Why not just do:
>
>  REINDEX TABLE yourbigtable;
>
> No need to worry about rebuilding foreign key constraints or anything
> like that then.

That's OK if the table can be taken offline; REINDEX locks the index
while in progress.
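Not mentioned in the thread, but on PostgreSQL 8.2 and later the online
swap can also be done with CREATE INDEX CONCURRENTLY, which builds the
replacement without blocking writes (a sketch; index names taken from later
in the thread, the `_new` suffix is hypothetical):

```sql
-- Build the replacement without blocking writers (PostgreSQL 8.2+).
-- Note: CREATE INDEX CONCURRENTLY cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY session_allocation_info_status_idx_new
    ON session_allocation_info (status);

DROP INDEX session_allocation_info_status_idx;

ALTER INDEX session_allocation_info_status_idx_new
    RENAME TO session_allocation_info_status_idx;
```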





Re: query is taking longer time after a while

From
Sam Mason
Date:
On Tue, Sep 29, 2009 at 03:13:49PM +0200, Brian Modra wrote:
> 2009/9/29 Sam Mason <sam@samason.me.uk>:
> > Plain vacuum should allow things to reach a steady state after
> > a while,
>
> If there are a lot of deletes, then likely the index parameters are
> not the best.

My interpretation of the OP's problem was that the inserts and deletes
were happening at similar rates, so this shouldn't be a problem.

> ANALYSE yourtable;
>
> Then, reindex (or create new index followed by drop index and rename -
> if you want to leave the index online.

ANALYZE is just about collecting statistics for the planner; I'm not sure
why you'd want to run it before a reindex.  Autovacuum was being run, so
it's not going to make much difference, is it?

> > Why not just do:
> >
> >   REINDEX TABLE yourbigtable;
>
> Thats OK if the table can be taken offline. REINDEX locks the index
> while in progress.

Good point, forgot about that.

--
  Sam  http://samason.me.uk/

Re: query is taking longer time after a while

From
Bill Moran
Date:
In response to Sam Mason <sam@samason.me.uk>:

> On Tue, Sep 29, 2009 at 03:13:49PM +0200, Brian Modra wrote:
> > 2009/9/29 Sam Mason <sam@samason.me.uk>:
> > > Plain vacuum should allow things to reach a steady state after
> > > a while,
> >
> > If there are a lot of deletes, then likely the index parameters are
> > not the best.
>
> My interpretation of the OPs problem was that the inserts and deletes
> were happening at similar rates.  Thus this won't be a problem.

There has (over the last few years) been a lot of speculation from people
who think that indexes may suffer performance degradation under some
workloads.  I've yet to see any actual evidence.

So, for my part, if the OP could run an explain analyze on the query,
then reindex the tables involved, then another explain analyze for
comparison purposes, I'd be very interested to see the results.
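The before/after comparison Bill asks for might look like this, with the
query reconstructed from the plans the OP posts later in the thread (the
column list and exact predicate are assumptions):

```sql
-- Capture a plan while the query is slow:
EXPLAIN ANALYZE
SELECT *
  FROM session_allocation_info
 WHERE status IN ('active', 'setup')
   AND initiator = 'ISA';

-- Rebuild all indexes on the table (locks out writers while it runs):
REINDEX TABLE session_allocation_info;

-- Capture the plan again for comparison:
EXPLAIN ANALYZE
SELECT *
  FROM session_allocation_info
 WHERE status IN ('active', 'setup')
   AND initiator = 'ISA';
```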

> > ANALYSE yourtable;
> >
> > Then, reindex (or create new index followed by drop index and rename -
> > if you want to leave the index online.
>
> Analyse is just about collecting statistics for the planner, I'm not
> sure why you'd want to run it before a reindex.  Autovacuum was being
> run, so it's not going to make much difference is it?

Unless he's getting a crappy plan because the stats are out of date.  That
could be determined by explain; analyze; explain, which would let you see
whether you get different plans.

The OP did mention that he's using autovac, which will take care of
both vacuum and analyze for him.  However, he didn't provide his
autovac config, and it happens at times that the defaults are not
aggressive enough to keep a table well-maintained.

--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

Re: query is taking longer time after a while

From
Andy Colson
Date:
tomrevam wrote:
> I have a table with 5 million rows. 10 inserts and deletes are performed on
> this table every second.
> The table has indexes on the columns I use to query it, and the query is
> returning about a 1000 rows. Initially the query takes a very short time
> (order of miliseconds), after a few hours it takes hundreds of miliseconds,
> and after a few days it can take more than 10 seconds. When this happens it
> also blocks all other operations on the database and I see very long times
> for all of them.
>
> I thought this may be caused by the indexes not remaining in the memory, but
> I increased the shared_buffers to 0.5 GB and this didn't seem to help.

Is one client connection making all the inserts/deletes?

Are you holding a transaction open for long periods of time?

Can you post explain analyze'es'es for (1) when it's quick and (2) when
it's slow?

( what is the plural of explain analyze? :-) )

-Andy

Re: query is taking longer time after a while

From
Tom Lane
Date:
Bill Moran <wmoran@potentialtech.com> writes:
>> My interpretation of the OPs problem was that the inserts and deletes
>> were happening at similar rates.  Thus this won't be a problem.

> There has (over the last few years) been a lot of speculation from people
> who think that indexes may suffer performance degradation under some
> workloads.  I've yet to see any actual evidence.

There is a known usage pattern that leads to bloated btree indexes:
if you load lots of data and then decimate it after awhile, using a rule
that matches some index's order.  For instance if you load many years'
worth of daily data and then delete all but the last entry for each
month, then an index on the date column will be left in a not-very-dense
state with only about 1/30th as many entries per page as it should
ideally have.  VACUUM does not attempt to merge partially-full index
pages so it can't fix this for you, and subsequent additions will
probably be going at the end of the index so there is no hope of the
underused pages getting repopulated.
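The pattern Tom describes can be illustrated with a hypothetical table (all
names here are made up for the example):

```sql
-- Hypothetical illustration of the load-then-decimate bloat pattern.
CREATE TABLE daily_data (d date, v integer);
CREATE INDEX daily_data_d_idx ON daily_data (d);

-- ... load many years' worth of daily rows, in date order ...

-- Keep only the last entry for each month:
DELETE FROM daily_data
 WHERE d NOT IN (SELECT max(d)
                   FROM daily_data
                  GROUP BY date_trunc('month', d));

-- daily_data_d_idx is now ~1/30th full per page: VACUUM won't merge the
-- partially-full pages, and new inserts land at the end of the index, so
-- the underused pages are never repopulated.
```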

The number of actual occurrences of this pattern in the field doesn't
seem to be very high though.

> The OP did mention that he's using autovac, which will take care of
> both vacuum and analyze for him.  However, he didn't provide his
> autovac config, and it happens at times that the defaults are not
> aggressive enough to keep a table well-maintained.

Agreed, that's definitely a risk.

            regards, tom lane

Re: query is taking longer time after a while

From
Vick Khera
Date:
On Tue, Sep 29, 2009 at 9:48 AM, Bill Moran <wmoran@potentialtech.com> wrote:
> There has (over the last few years) been a lot of speculation from people
> who think that indexes may suffer performance degradation under some
> workloads.  I've yet to see any actual evidence.
>

Just last week I reindexed a 70+ million row table and shaved off about
50% of the bloated index pages on a two-integer-column index.  I believe
it hadn't been reindexed in about 6 months.

I regularly have to re-index for performance purposes.  My data lives
on most tables for about 6 months and is then deleted, or lives
forever and is updated frequently.

I've commented about this many times before.

Re: query is taking longer time after a while

From
Scott Marlowe
Date:
On Tue, Sep 29, 2009 at 5:23 AM, tomrevam <tomer@fabrix.tv> wrote:
>
> I have a table with 5 million rows. 10 inserts and deletes are performed on
> this table every second.
> The table has indexes on the columns I use to query it, and the query is
> returning about a 1000 rows. Initially the query takes a very short time
> (order of miliseconds), after a few hours it takes hundreds of miliseconds,
> and after a few days it can take more than 10 seconds. When this happens it
> also blocks all other operations on the database and I see very long times
> for all of them.
>
> I thought this may be caused by the indexes not remaining in the memory, but
> I increased the shared_buffers to 0.5 GB and this didn't seem to help.

Just wondering, what version of pgsql are you running?  I noticed a
lot less degradation from heavily updated tables when I went to 8.3
and set the fill % for tables / indexes to 90% or so.  If you're
running an older version, the upgrade to 8.3 may well be worth the
effort.

Re: query is taking longer time after a while

From
tomrevam
Date:


Bill Moran wrote:
>
> The OP did mention that he's using autovac, which will take care of
> both vacuum and analyze for him.  However, he didn't provide his
> autovac config, and it happens at times that the defaults are not
> aggressive enough to keep a table well-maintained.
>

Here are my autovac configuration parameters. I played around with them to
make autovac run frequently.

#autovacuum = on
log_autovacuum_min_duration = 0
autovacuum_max_workers = 10
autovacuum_naptime = 10s
autovacuum_vacuum_threshold = 2000
autovacuum_analyze_threshold = 2000
autovacuum_vacuum_scale_factor = 0.0005
autovacuum_analyze_scale_factor = 0.0005
#autovacuum_freeze_max_age = 200000000
#autovacuum_vacuum_cost_delay = 20
--
View this message in context: http://www.nabble.com/query-is-taking-longer-time-after-a-while-tp25661219p25675635.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


Re: query is taking longer time after a while

From
tomrevam
Date:


Scott Marlowe-2 wrote:
>
> Just wondering, what version of pgsql are you running?  I noticed a
> lot less degradation from heavily updated tables when I went to 8.3
> and set the fill % for tables / indexes to 90% or so.  If you're
> running an older version, the upgrade to 8.3 may well be worth the
> effort.
>

I'm using version 8.3, but I don't know what you meant by fill %.

Thanks,
Tomer
--
View this message in context: http://www.nabble.com/query-is-taking-longer-time-after-a-while-tp25661219p25675694.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


Re: query is taking longer time after a while

From
"A. Kretschmer"
Date:
In response to tomrevam :
>
>
>
> Scott Marlowe-2 wrote:
> >
> > Just wondering, what version of pgsql are you running?  I noticed a
> > lot less degradation from heavily updated tables when I went to 8.3
> > and set the fill % for tables / indexes to 90% or so.  If you're
> > running an older version, the upgrade to 8.3 may well be worth the
> > effort.
> >
>
> I'm using version 8.3, but I don't know what you meant by fill %.

Copy&Paste from the doc:
( http://www.postgresql.org/docs/8.4/interactive/sql-createtable.html )


The fillfactor for a table is a percentage between 10 and 100. 100
(complete packing) is the default. When a smaller fillfactor is
specified, INSERT operations pack table pages only to the indicated
percentage; the remaining space on each page is reserved for updating
rows on that page. This gives UPDATE a chance to place the updated copy
of a row on the same page as the original, which is more efficient than
placing it on a different page. For a table whose entries are never
updated, complete packing is the best choice, but in heavily updated
tables smaller fillfactors are appropriate.

You can set this with this command:

alter table foo set (fillfactor=90);
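Fillfactor can also be set per index, which is not covered by the doc
excerpt above; note that a changed fillfactor only applies to pages written
afterwards, so existing data must be rewritten to benefit (a sketch, using
an index name from elsewhere in the thread):

```sql
-- Index fillfactor is a per-index storage parameter (8.2+):
ALTER INDEX session_allocation_info_status_idx SET (fillfactor = 90);

-- Only newly written pages honour the new setting; rebuild to repack
-- the existing pages (takes a lock while it runs):
REINDEX INDEX session_allocation_info_status_idx;
```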


Regards, Andreas
--
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)

Re: query is taking longer time after a while

From
Bill Moran
Date:
In response to tomrevam <tomer@fabrix.tv>:
>

My apologies, I should have asked for the output of VACUUM VERBOSE on
the problem table in conjunction with these settings.  (make sure you
do VACUUM VERBOSE when the table is exhibiting the speed problem)

>
> Bill Moran wrote:
> >
> > The OP did mention that he's using autovac, which will take care of
> > both vacuum and analyze for him.  However, he didn't provide his
> > autovac config, and it happens at times that the defaults are not
> > aggressive enough to keep a table well-maintained.
> >
>
> Here are my autovac configuration parameters. I played around with them to
> make autovac work frequently.
>
> #autovacuum = on
> log_autovacuum_min_duration = 0
> autovacuum_max_workers = 10
> autovacuum_naptime = 10s
> autovacuum_vacuum_threshold = 2000
> autovacuum_analyze_threshold = 2000
> autovacuum_vacuum_scale_factor = 0.0005
> autovacuum_analyze_scale_factor = 0.0005
> #autovacuum_freeze_max_age = 200000000
> #autovacuum_vacuum_cost_delay = 20


--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

Re: query is taking longer time after a while

From
Bill Moran
Date:
In response to Vick Khera <vivek@khera.org>:

> On Tue, Sep 29, 2009 at 9:48 AM, Bill Moran <wmoran@potentialtech.com> wrote:
> > There has (over the last few years) been a lot of speculation from people
> > who think that indexes may suffer performance degradation under some
> > workloads.  I've yet to see any actual evidence.
>
> Just last week I reindexed a 70+ million row table and shaved about
> 50% of the bloated index pages on a two integer column index.  I
> believe it hadn't been reindexed in about 6 months.

Right.  I've seen the same kind of thing with our Bacula databases.

> I regularly have to re-index for performance purposes.  My data lives
> on most tables for about 6 months and is then deleted, or lives
> forever and is updated frequently.

This is the part that I've yet to see: any actual demonstration that this
makes a _performance_ difference.  The space saving is well known and easy
to demonstrate, but in my own tests it has never been clear whether a
REINDEX gives any appreciable performance improvement.
It stands to reason that it will, but I've just never seen it demonstrated.

It's quite possible that I simply missed the discussion thread on which
this was shown.

I'm not arguing with you.  I'm simply curious as to whether index bloat
could cause the magnitude of performance problem the OP is having, and I
don't have any personal experience with that degree of problem, and I've
not heard it from anyone else, either.  In my personal experience, REINDEX
seems to provide only a nominal improvement, but it's likely that I'm not
seeing the worst case that others are describing.

--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

Re: query is taking longer time after a while

From
tomrevam
Date:


Andy Colson-2 wrote:
>
> Can you post an explain analyze'es'es for (1) when its quick and (2)
> when its slow?
>

Here are the results:
1. Upon startup:
                                                                       QUERY
PLAN

--------------------------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on session_allocation_info  (cost=99.42..11369.76
rows=3110 width=21) (actual time=1.107..2.144 rows=677 loops=1)
   Recheck Cond: (((status)::text = 'active'::text) OR ((status)::text =
'setup'::text))
   Filter: ((initiator)::text = 'ISA'::text)
   ->  BitmapOr  (cost=99.42..99.42 rows=3111 width=0) (actual
time=0.426..0.426 rows=0 loops=1)
         ->  Bitmap Index Scan on session_allocation_info_status_idx
(cost=0.00..48.93 rows=1555 width=0) (actual time=0.244..0.244 rows=1557
loops=1)
               Index Cond: ((status)::text = 'active'::text)
         ->  Bitmap Index Scan on session_allocation_info_status_idx
(cost=0.00..48.93 rows=1555 width=0) (actual time=0.181..0.181 rows=1609
loops=1)
               Index Cond: ((status)::text = 'setup'::text)
 Total runtime: 2.193 ms
(9 rows)

Time: 2.602 ms


2. After a few hours:


QUERY PLAN

------------------------------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on session_allocation_info  (cost=285.11..31149.80
rows=9317 width=21) (actual time=160.329..161.025 rows=677 loops=1)
   Recheck Cond: (((status)::text = 'active'::text) OR ((status)::text =
'setup'::text))
   Filter: ((initiator)::text = 'ISA'::text)
   ->  BitmapOr  (cost=285.11..285.11 rows=9322 width=0) (actual
time=47.171..47.171 rows=0 loops=1)
         ->  Bitmap Index Scan on session_allocation_info_status_idx
(cost=0.00..140.23 rows=4661 width=0) (actual time=42.066..42.066 rows=28168
loops=1)
               Index Cond: ((status)::text = 'active'::text)
         ->  Bitmap Index Scan on session_allocation_info_status_idx
(cost=0.00..140.23 rows=4661 width=0) (actual time=5.103..5.103 rows=20945
loops=1)
               Index Cond: ((status)::text = 'setup'::text)
 Total runtime: 161.079 ms
(9 rows)

Time: 162.009 ms


3. After a few days:


QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on session_allocation_info  (cost=10.55..14.57 rows=1
width=21) (actual time=4817.076..4819.918 rows=677 loops=1)
   Recheck Cond: (((status)::text = 'active'::text) OR ((status)::text =
'setup'::text))
   Filter: ((initiator)::text = 'ISA'::text)
   ->  BitmapOr  (cost=10.55..10.55 rows=1 width=0) (actual
time=2426.423..2426.423 rows=0 loops=1)
         ->  Bitmap Index Scan on session_allocation_info_status_idx
(cost=0.00..5.28 rows=1 width=0) (actual time=1619.652..1619.652 rows=51025
loops=1)
               Index Cond: ((status)::text = 'active'::text)
         ->  Bitmap Index Scan on session_allocation_info_status_idx
(cost=0.00..5.28 rows=1 width=0) (actual time=806.770..806.770 rows=46601
loops=1)
               Index Cond: ((status)::text = 'setup'::text)
 Total runtime: 4819.990 ms

--
View this message in context: http://www.nabble.com/query-is-taking-longer-time-after-a-while-tp25661219p25736068.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


Re: query is taking longer time after a while

From
Sam Mason
Date:
On Sun, Oct 04, 2009 at 01:44:30AM -0700, tomrevam wrote:
>          ->  Bitmap Index Scan on session_allocation_info_status_idx (cost=0.00..5.28 rows=1 width=0) (actual
time=1619.652..1619.652rows=51025 loops=1) 
>                Index Cond: ((status)::text = 'active'::text)
>          ->  Bitmap Index Scan on session_allocation_info_status_idx (cost=0.00..5.28 rows=1 width=0) (actual
time=806.770..806.770rows=46601 loops=1) 
>                Index Cond: ((status)::text = 'setup'::text)
>  Total runtime: 4819.990 ms

Wow, that's quite a change in run time!  Are you sure planner stats are
being kept up to date?  It's expecting a *single* row back from an index
scan of "session_allocation_info_status_idx" when looking for "active"
and a single row when looking for "setup" but gets 51025 and 46601 rows
back respectively.  These are a *long* way out and would explain why it's
taking an inordinate amount of time.

It would be interesting to run "analyze session_allocation_info" and see
what changes.  I'd try to get the "rows=n" numbers in the estimates into
the same order of magnitude as those from the actual run.  Increasing the
statistics target helps in some situations, but may not here:

  http://www.postgresql.org/docs/current/static/sql-altertable.html

Something like:

  alter table session_allocation_info alter status set statistics 200;

--
  Sam  http://samason.me.uk/

Re: query is taking longer time after a while

From
tomrevam
Date:


Bill Moran wrote:
>
> My apologies, I should have asked for the output of VACUUM VERBOSE on
> the problem table in conjunction with these settings.  (make sure you
> do VACUUM VERBOSE when the table is exhibiting the speed problem)
>

INFO:  vacuuming "public.session_allocation_info"
INFO:  scanned index "session_allocation_info_pkey" to remove 175234 row
versions
DETAIL:  CPU 0.69s/1.47u sec elapsed 70.77 sec.
INFO:  scanned index "session_allocation_info_request_time_idx" to remove
175234 row versions
DETAIL:  CPU 0.22s/0.55u sec elapsed 11.64 sec.
INFO:  scanned index "session_allocation_info_teardown_time_idx" to remove
175234 row versions
DETAIL:  CPU 0.05s/0.49u sec elapsed 3.81 sec.
INFO:  scanned index "session_allocation_info_initiator_idx" to remove
175234 row versions
DETAIL:  CPU 0.37s/0.74u sec elapsed 14.48 sec.
INFO:  scanned index "session_allocation_info_status_idx" to remove 175234
row versions
DETAIL:  CPU 0.08s/0.51u sec elapsed 11.83 sec.
INFO:  "session_allocation_info": removed 175234 row versions in 25596 pages
DETAIL:  CPU 0.52s/0.53u sec elapsed 23.85 sec.
INFO:  index "session_allocation_info_pkey" now contains 4662043 row
versions in 55781 pages
DETAIL:  65733 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "session_allocation_info_request_time_idx" now contains 4662043
row versions in 69879 pages
DETAIL:  65943 index row versions were removed.
17354 index pages have been deleted, 17074 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "session_allocation_info_teardown_time_idx" now contains
4662043 row versions in 21677 pages
DETAIL:  65943 index row versions were removed.
8793 index pages have been deleted, 8548 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "session_allocation_info_initiator_idx" now contains 4662043
row versions in 65869 pages
DETAIL:  175234 index row versions were removed.
16261 index pages have been deleted, 15624 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "session_allocation_info_status_idx" now contains 4662043 row
versions in 25463 pages
DETAIL:  175234 index row versions were removed.
9217 index pages have been deleted, 8583 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "session_allocation_info": found 38814 removable, 4662043
nonremovable row versions in 205325 pages
DETAIL:  1590 dead row versions cannot be removed yet.
There were 11978040 unused item pointers.
89189 pages contain useful free space.
0 pages are entirely empty.
CPU 3.27s/5.20u sec elapsed 237.90 sec.
INFO:  vacuuming "pg_toast.pg_toast_17383"
INFO:  index "pg_toast_17383_index" now contains 0 row versions in 1 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "pg_toast_17383": found 0 removable, 0 nonremovable row versions in 0
pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages contain useful free space.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM
Time: 238981.517 ms

--
View this message in context: http://www.nabble.com/query-is-taking-longer-time-after-a-while-tp25661219p25737471.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


Re: query is taking longer time after a while

From
Tom Lane
Date:
Sam Mason <sam@samason.me.uk> writes:
> On Sun, Oct 04, 2009 at 01:44:30AM -0700, tomrevam wrote:
>> ->  Bitmap Index Scan on session_allocation_info_status_idx (cost=0.00..5.28 rows=1 width=0) (actual
time=1619.652..1619.652rows=51025 loops=1) 
>> Index Cond: ((status)::text = 'active'::text)
>> ->  Bitmap Index Scan on session_allocation_info_status_idx (cost=0.00..5.28 rows=1 width=0) (actual
time=806.770..806.770rows=46601 loops=1) 
>> Index Cond: ((status)::text = 'setup'::text)
>> Total runtime: 4819.990 ms

> Wow, that's quite a change in run time!  Are you sure planner stats are
> being kept up to date?

It's not the planner's fault.  Note that the parent BitmapHeapScan is
still returning the same number of rows.  This means that the increase
in number of matching index entries represents all dead rows.  IOW,
what we have here is table bloat caused by inadequate vacuuming.
I missed the start of the thread, but the questions to be asking
are about whether autovacuum is on, what its parameters are if so,
and what the manual vacuuming policy is if not.

            regards, tom lane

Re: query is taking longer time after a while

From
Tom Lane
Date:
tomrevam <tomer@fabrix.tv> writes:
> Bill Moran wrote:
>> My apologies, I should have asked for the output of VACUUM VERBOSE on
>> the problem table in conjunction with these settings.  (make sure you
>> do VACUUM VERBOSE when the table is exhibiting the speed problem)

> INFO:  "session_allocation_info": found 38814 removable, 4662043
> nonremovable row versions in 205325 pages
> DETAIL:  1590 dead row versions cannot be removed yet.
> There were 11978040 unused item pointers.
> 89189 pages contain useful free space.
> 0 pages are entirely empty.

That doesn't look bad on the surface --- only 1% dead rows --- but
the very large number of unused item pointers indicates that at some
point in the past the table had either a whole lot of dead rows, or
a much larger number of live rows than it does now.  I'm still going
to bet that the problem comes down to inadequate vacuuming.
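One way to check that theory between autovacuum runs (a sketch; these
columns exist in the 8.3 statistics views) is to watch the dead-tuple
count and vacuum timestamps for the table:

```sql
-- Watch dead-tuple buildup and when (auto)vacuum last ran on the table.
SELECT relname, n_live_tup, n_dead_tup,
       last_vacuum, last_autovacuum
  FROM pg_stat_user_tables
 WHERE relname = 'session_allocation_info';
```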

            regards, tom lane

Re: query is taking longer time after a while

From
Sam Mason
Date:
On Sun, Oct 04, 2009 at 11:08:01AM -0400, Tom Lane wrote:
> Sam Mason <sam@samason.me.uk> writes:
> > On Sun, Oct 04, 2009 at 01:44:30AM -0700, tomrevam wrote:
> >> ->  Bitmap Index Scan on session_allocation_info_status_idx (cost=0.00..5.28 rows=1 width=0) (actual
time=1619.652..1619.652rows=51025 loops=1) 
> >> Index Cond: ((status)::text = 'active'::text)
> >> ->  Bitmap Index Scan on session_allocation_info_status_idx (cost=0.00..5.28 rows=1 width=0) (actual
time=806.770..806.770rows=46601 loops=1) 
> >> Index Cond: ((status)::text = 'setup'::text)
> >> Total runtime: 4819.990 ms
>
> > Wow, that's quite a change in run time!  Are you sure planner stats are
> > being kept up to date?
>
> It's not the planner's fault.  Note that the parent BitmapHeapScan is
> still returning the same number of rows.

Sorry, I chopped out too much context.  Here's the "early" run, the
estimated and real row counts look good to me:

On Sun, Oct 04, 2009 at 01:44:30AM -0700, tomrevam wrote:
>          ->  Bitmap Index Scan on session_allocation_info_status_idx (cost=0.00..48.93 rows=1555 width=0) (actual
time=0.244..0.244rows=1557 loops=1) 
>                Index Cond: ((status)::text = 'active'::text)
>          ->  Bitmap Index Scan on session_allocation_info_status_idx (cost=0.00..48.93 rows=1555 width=0) (actual
time=0.181..0.181rows=1609 loops=1) 
>                Index Cond: ((status)::text = 'setup'::text)
>  Total runtime: 2.193 ms

Or did I miss something else?

--
  Sam  http://samason.me.uk/