Thread: Out of memory on vacuum analyze

Out of memory on vacuum analyze

From: John Cole
Date: 2007-02-19
I have a large table (~55 million rows) and I'm trying to create an index
and vacuum analyze it.  The index has now been created, but the vacuum
analyze is failing with the following error:

ERROR:  out of memory
DETAIL:  Failed on request of size 943718400.

I've played with several settings, but I'm not sure what I need to set to
get this to work.  I'm running on a dual quad-core system with 4GB of
memory and PostgreSQL 8.2.3 on W2K3 Server R2 32-bit.

maintenance_work_mem is 900MB
max_stack_depth is 3MB
shared_buffers is 900MB
temp_buffers is 32MB
work_mem is 16MB
max_fsm_pages is 204800
max_connections is 50

Any help would be greatly appreciated.

Thanks,

John Cole



Re: Out of memory on vacuum analyze

From: Jeff Davis
Date: 2007-02-19
On Mon, 2007-02-19 at 12:47 -0600, John Cole wrote:
> I have a large table (~55 million rows) and I'm trying to create an index
> and vacuum analyze it.  The index has now been created, but the vacuum
> analyze is failing with the following error:
>
> ERROR:  out of memory
> DETAIL:  Failed on request of size 943718400.
>
> I've played with several settings, but I'm not sure what I need to set to
> get this to operate.  I'm running on a dual Quad core system with 4GB of
> memory and Postgresql 8.2.3 on W2K3 Server R2 32bit.
>
> Maintenance_work_mem is 900MB
> Max_stack_depth is 3MB
> Shared_buffers is 900MB
> Temp_buffers is 32MB
> Work_mem is 16MB
> Max_fsm_pages is 204800
> Max_connections is 50
>

You told PostgreSQL that you have 900MB available for
maintenance_work_mem, but your OS is denying the request. Try *lowering*
that setting to something that your OS will allow. That seems like an
awfully high setting to me.
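
For illustration, a minimal sketch of that advice; the table name big_table
is a placeholder. maintenance_work_mem can be changed per session, so there
is no need to edit postgresql.conf or restart just to retry:

    -- Check what the server is currently allowed to request for maintenance tasks.
    SHOW maintenance_work_mem;

    -- Lower the limit for this session only, then retry the failing command.
    SET maintenance_work_mem = '256MB';
    VACUUM ANALYZE big_table;        -- big_table is a placeholder table name

    -- Return to the value configured in postgresql.conf.
    RESET maintenance_work_mem;

On a 32-bit build each backend has a limited address space, so a single
900MB allocation can fail even on a machine with 4GB of RAM; a per-session
override is a low-risk way to find a value the OS will actually grant.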

Regards,
    Jeff Davis


Re: Out of memory on vacuum analyze

From: Jim Nasby
Date:
On Feb 19, 2007, at 1:19 PM, Jeff Davis wrote:
> You told PostgreSQL that you have 900MB available for
> maintenance_work_mem, but your OS is denying the request. Try
> *lowering*
> that setting to something that your OS will allow. That seems like an
> awfully high setting to me.

900MB isn't that unreasonable if you're building indexes on a restore
or something similar. I have run into issues when trying to set it
much over 1G, though... on various OSes and platforms.
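
If the high value is only wanted around bulk operations, it does not have to
live in postgresql.conf at all. A hedged sketch (object names are placeholders):

    -- Raise the limit only for this session, e.g. around a one-off bulk index build.
    SET maintenance_work_mem = '900MB';
    CREATE INDEX big_table_some_col_idx ON big_table (some_col);  -- placeholder names
    RESET maintenance_work_mem;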
--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)



Re: Out of memory on vacuum analyze

From: Stefan Kaltenbrunner
Date: 2007-02-21
Jim Nasby wrote:
> On Feb 19, 2007, at 1:19 PM, Jeff Davis wrote:
>> You told PostgreSQL that you have 900MB available for
>> maintenance_work_mem, but your OS is denying the request. Try *lowering*
>> that setting to something that your OS will allow. That seems like an
>> awfully high setting to me.
>
> 900MB isn't that unreasonable if you're building indexes on a restore or
> something similar. I have run into issues when trying to set it much
> over 1G, though... on various OSes and platforms.

Versions before 8.2 have some issues (mostly reporting bogus errors) with
very large settings for maintenance_work_mem. 8.2 and up behave more sanely,
but I don't think they can actually make anything better with values in the
GB range.
Have you actually measured a performance improvement going beyond
250-350MB (that seemed to be about the sweet spot last I tested) or so
for index creation and friends?
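
For anyone wanting to test this on their own hardware, a rough sketch in psql
(object names are placeholders; repeat the runs to smooth out caching effects):

    -- Toggle per-statement timing display in psql.
    \timing

    SET maintenance_work_mem = '300MB';
    CREATE INDEX idx_300mb ON big_table (some_col);
    DROP INDEX idx_300mb;

    SET maintenance_work_mem = '900MB';
    CREATE INDEX idx_900mb ON big_table (some_col);
    DROP INDEX idx_900mb;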


Stefan

Re: Out of memory on vacuum analyze

From: Jim Nasby
Date:
On Feb 21, 2007, at 12:58 AM, Stefan Kaltenbrunner wrote:
> Have you actually measured a performance improvment going beyond
> 250-350MB(that seemed about to be the sweet spot last I tested) or so
> for index creation and friends ?

To be honest, no; I just set it high to stay on the safe side. But I
have seen reports of large in-memory sorts actually being slower than
tape sorts in some cases, so I'm probably leaving some performance
on the table.
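
For anyone who would rather check than guess, the server can log what the
sort code actually did. A sketch, assuming the server was built with
TRACE_SORT enabled (the default) and placeholder object names:

    SET trace_sort = on;             -- emit LOG lines describing each tuplesort
    SET client_min_messages = log;   -- show those LOG lines in this session
    SET maintenance_work_mem = '300MB';
    CREATE INDEX idx_trace ON big_table (some_col);

The log output shows whether the sort finished in memory or switched to the
external (tape) algorithm, and roughly how long each phase took.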
--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)