Re: performance problem - Mailing list pgsql-general

From Rick Gigger
Subject Re: performance problem
Date
Msg-id 01c601c3afab$0445d660$0700a8c0@trogdor
In response to Point-in-time data recovery - v.7.4  (Rafael Martinez Guerrero <r.m.guerrero@usit.uio.no>)
List pgsql-general
Ah, I didn't realize that you could just do an ANALYZE.  I thought there was
only VACUUM ANALYZE, which can't run inside a transaction.
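
A minimal sketch of the distinction, for the archives (table name is made up):

```sql
BEGIN;
-- plain ANALYZE is allowed inside a transaction block
ANALYZE mytable;
COMMIT;

-- VACUUM (and VACUUM ANALYZE) must run outside any transaction block
VACUUM ANALYZE mytable;
```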

Thanks,

rg

----- Original Message -----
From: "Alvaro Herrera Munoz" <alvherre@dcc.uchile.cl>
To: "Rick Gigger" <rick@alpinenetworking.com>
Cc: "Mike Mascari" <mascarm@mascari.com>; "PgSQL General ML"
<pgsql-general@postgresql.org>
Sent: Thursday, November 20, 2003 2:06 PM
Subject: Re: [GENERAL] performance problem


On Thu, Nov 20, 2003 at 01:52:10PM -0700, Rick Gigger wrote:

> I worked around this by starting the transaction and inserting the 45,000
> rows and then killing it.  Then I removed the index and re-added it, which
> apparently gathered some stats, and since there were all of the dead tuples
> in there from the failed transaction it now decided that it should use the
> index.  I reran the script and this time it took 5 minutes again instead
> of 1 1/2 hours.

Stats are not collected automatically.  You should run ANALYZE after
importing your data.  And it's probably faster to create the index after
the data is loaded, too.
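
That load order can be sketched like so (table, column, and file names are
hypothetical):

```sql
-- bulk-load first, with no index in place
COPY mytable FROM '/path/to/data.csv';

-- build the index after the data is in, then gather stats
CREATE INDEX mytable_col_idx ON mytable (col);
ANALYZE mytable;
```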

--
Alvaro Herrera (<alvherre[@]dcc.uchile.cl>)
And a voice from the chaos spoke to me and said
"Smile and be happy, it could be worse."
And I smiled. And I was happy.
And it was worse.

