Re: Large DB - Mailing list pgsql-general

From Manfred Koizar
Subject Re: Large DB
Msg-id 1uon60lr3jjndh4o8i9cagd62tead9b0t6@email.aon.at
In response to Re: Large DB  ("Mooney, Ryan" <ryan.mooney@pnl.gov>)
List pgsql-general
On Wed, 31 Mar 2004 10:08:26 -0800, "Mooney, Ryan" <ryan.mooney@pnl.gov>
wrote:
>I haven't run ANALYZE on this table in a while.  After about 50-60M rows
>it didn't seem to change the query plan at all and since there were
>never any deletes/updates it seemed like it wasn't making much/any
>difference (should have been no pages to reclaim).

Reclaiming pages is not the job of ANALYSE; that's what VACUUM does.
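
Both can be run in one step from psql (using the table name tp3 from
your earlier message):

    VACUUM ANALYSE tp3;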

>  That may be an
>invalid assumption though.

Might be a valid assumption as well -- if you're lucky.  But do you want
to depend on luck?  E.g. 75% of today's rows may contain timestamps that
are greater than what the planner believes to be the maximum.
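
You can check what the planner currently believes about your columns via
the pg_stats view (the histogram bounds show the assumed value range):

    SELECT attname, n_distinct, histogram_bounds
      FROM pg_stats
     WHERE tablename = 'tp3';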

No VACUUM, no ANALYSE, no REINDEX.  This explains why the planner thinks
there are only 4M pages, which gives 640 bytes/row if there were 50M
rows at that time.  OTOH the EXPLAIN shows 290M rows for the seq scan.
Something doesn't fit together here.

Hackers, what could update reltuples, but not relpages?

Or, Ryan, is it possible that you already had 290M rows when you ran
ANALYSE and you have more than 1G rows today?
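
The stored estimates can be inspected directly in the system catalog,
which would settle what the planner is working from:

    SELECT relpages, reltuples
      FROM pg_class
     WHERE relname = 'tp3';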

BTW, ANALYSE is basically a constant time operation.

>Here is an explain on the query:
>
>=> explain select point, avg(pvalue) as avg from tp3 where host in

This tells us one half of the story.

    EXPLAIN ANALYSE SELECT ...

would tell us the other half, too.

Servus
 Manfred
