Re: Out of memory on vacuum analyze - Mailing list pgsql-general

From Jeff Davis
Subject Re: Out of memory on vacuum analyze
Date
Msg-id 1171912781.10824.192.camel@dogma.v10.wvs
In response to Out of memory on vacuum analyze  (John Cole <john.cole@uai.com>)
Responses Re: Out of memory on vacuum analyze  (Jim Nasby <decibel@decibel.org>)
List pgsql-general
On Mon, 2007-02-19 at 12:47 -0600, John Cole wrote:
> I have a large table (~55 million rows) and I'm trying to create an index
> and vacuum analyze it.  The index has now been created, but the vacuum
> analyze is failing with the following error:
>
> ERROR:  out of memory
> DETAIL:  Failed on request of size 943718400.
>
> I've played with several settings, but I'm not sure what I need to set to
> get this to operate.  I'm running on a dual Quad core system with 4GB of
> memory and Postgresql 8.2.3 on W2K3 Server R2 32bit.
>
> Maintenance_work_mem is 900MB
> Max_stack_depth is 3MB
> Shared_buffers is 900MB
> Temp_buffers is 32MB
> Work_mem is 16MB
> Max_fsm_pages is 204800
> Max_connections is 50
>

You told PostgreSQL that you have 900MB available for
maintenance_work_mem, but your OS is denying the request. Try *lowering*
that setting to something that your OS will allow. That seems like an
awfully high setting to me.
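For what it's worth, 943718400 bytes is exactly 900MB (900 * 1024 * 1024), so the failing allocation is almost certainly maintenance_work_mem itself. On a 32-bit Windows build each backend process has roughly 2GB of usable address space, and with shared_buffers already mapped at 900MB there may be no contiguous 900MB region left. A minimal sketch of the workaround (the table name is hypothetical, and the exact value is something to experiment with):

```sql
-- Lower maintenance_work_mem for this session only; 256MB is an
-- assumed starting point, not a recommendation from the thread.
SET maintenance_work_mem = '256MB';

-- Then retry the maintenance command that failed.
VACUUM ANALYZE my_large_table;
```

A session-level SET avoids editing postgresql.conf and restarting; if the lower value works, it can be made permanent in the config file afterwards.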

Regards,
    Jeff Davis

