Re: Thousands of schemas and ANALYZE goes out of memory - Mailing list pgsql-general

From Jeff Janes
Subject Re: Thousands of schemas and ANALYZE goes out of memory
Date
Msg-id CAMkU=1wLjAsmJNuB6ZObZmGHqi9jLbK6n1eSgnOc5J1-AUsvUA@mail.gmail.com
In response to Re: Thousands of schemas and ANALYZE goes out of memory  (Jeff Janes <jeff.janes@gmail.com>)
Responses Re: Thousands of schemas and ANALYZE goes out of memory  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
On Tue, Oct 2, 2012 at 5:09 PM, Jeff Janes <jeff.janes@gmail.com> wrote:

> I don't know how the transactionality of analyze works.  I was
> surprised to find that I could even run it in an explicit transaction
> block; I thought it would behave like vacuum and create index
> concurrently in that regard.
>
> However, I think that would not solve your problem.  When I run
> analyze on each of 220,000 tiny tables by name within one session
> (using autocommit, so each in its own transaction), it runs about 4
> times faster than a database-wide vacuum covering those same
> tables.  (Maybe this is the lock/resource manager issue that has
> been fixed for 9.3?)
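The per-table approach described in the quote can be scripted by generating one ANALYZE statement per table and feeding the result to psql (or any driver) with autocommit on, so each statement runs in its own transaction. A minimal sketch; the helper name and the double-quote escaping convention shown here are my assumptions, not anything from the original thread:

```python
# Hypothetical helper: build one ANALYZE statement per table so that,
# under autocommit, each runs in its own transaction (as in the quoted
# test of 220,000 tiny tables analyzed by name in one session).
def analyze_statements(qualified_names):
    stmts = []
    for schema, table in qualified_names:
        # Quote identifiers defensively; PostgreSQL doubles embedded
        # double quotes inside a quoted identifier.
        q = lambda ident: '"%s"' % ident.replace('"', '""')
        stmts.append("ANALYZE %s.%s;" % (q(schema), q(table)))
    return stmts

# Example: write the statements to a script for `psql -f analyze.sql`.
script = "\n".join(analyze_statements([("app1", "users"), ("app2", "users")]))
```

The schema/table pairs would normally come from a query against pg_class and pg_namespace; that query is omitted here.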

For the record, the culprit that makes "analyze;" of a database with
a large number of small objects quadratic in time is
"get_tabstat_entry", and it is not fixed in 9.3.

Cheers,

Jeff

