Thread: vacuumdb ERROR: out of memory

vacuumdb ERROR: out of memory

From: David Kerr
I'm getting an error:

When I try
vacuumdb -z assessment
or
vacuumdb assessment

I get:
vacuumdb: vacuuming of database "assessment" failed: ERROR:  out of memory
DETAIL:  Failed on request of size 1073741820.

The only way I can actually analyze the DB is if I do a vacuumdb -f.

The database is currently sitting at approximately 1/10th of my total data.

I'm on PostgreSQL 8.3.5, SLES 11 Linux.

Any ideas?

Thanks

Dave

Re: vacuumdb ERROR: out of memory

From: Tom Lane
David Kerr <dmk@mr-paradox.net> writes:
> I'm getting an error:
> When I try
> vacuumdb -z assessment
> or
> vacuumdb assessment

> I get:
> vacuumdb: vacuuming of database "assessment" failed: ERROR:  out of memory
> DETAIL:  Failed on request of size 1073741820.

What have you got maintenance_work_mem set to?
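
A quick way to check the value the server is actually using, assuming you
can connect with psql, is something like:

psql -d assessment -c "SHOW maintenance_work_mem;"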

            regards, tom lane

Re: vacuumdb ERROR: out of memory

From: David Kerr
Tom Lane wrote:
> David Kerr <dmk@mr-paradox.net> writes:
>> I'm getting an error:
>> When I try
>> vacuumdb -z assessment
>> or
>> vacuumdb assessment
>
>> I get:
>> vacuumdb: vacuuming of database "assessment" failed: ERROR:  out of memory
>> DETAIL:  Failed on request of size 1073741820.
>
> What have you got maintenance_work_mem set to?
>
>             regards, tom lane
>

maintenance_work_mem = 1GB

I don't know if it matters, but my biggest relation is 7GB
(total including indexes is 16GB), with my total DB size being 20GB.


Dave




Re: vacuumdb ERROR: out of memory

From: Tom Lane
David Kerr <dmk@mr-paradox.net> writes:
> Tom Lane wrote:
>> David Kerr <dmk@mr-paradox.net> writes:
>>> I get:
>>> vacuumdb: vacuuming of database "assessment" failed: ERROR:  out of memory
>>> DETAIL:  Failed on request of size 1073741820.
>>
>> What have you got maintenance_work_mem set to?

> maintenance_work_mem = 1GB

So evidently, when it tries to actually allocate 1GB, it can't do it.
Ergo, that setting is too high for your machine.
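
If it were me, I'd back it off in postgresql.conf and reload; roughly
(the data directory path below is just an example, adjust it for your install):

# in postgresql.conf, set something like:
#   maintenance_work_mem = 256MB
pg_ctl reload -D /var/lib/pgsql/data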

            regards, tom lane

Re: vacuumdb ERROR: out of memory

From: David Kerr
Tom Lane wrote:
> David Kerr <dmk@mr-paradox.net> writes:
>> Tom Lane wrote:
>>> David Kerr <dmk@mr-paradox.net> writes:
>>>> I get:
>>>> vacuumdb: vacuuming of database "assessment" failed: ERROR:  out of memory
>>>> DETAIL:  Failed on request of size 1073741820.
>>> What have you got maintenance_work_mem set to?
>
>> maintenance_work_mem = 1GB
>
> So evidently, when it tries to actually allocate 1GB, it can't do it.
> Ergo, that setting is too high for your machine.
>
>             regards, tom lane
>

Ahhh, ok. I was thinking that it was filling up the 1GB I allocated to it.

I just dropped the memory allocated to the instance down by about 10GB,
and I'm still getting the error though.

 > free
              total       used       free     shared    buffers     cached
Mem:      34997288   32821828    2175460          0     227420   32541844
-/+ buffers/cache:      52564   34944724
Swap:       530136         36     530100


seems like i've got 2GB free.

Food for thought..

I'll look more into it tomorrow morning though.

Thanks

Dave

Re: vacuumdb ERROR: out of memory

From: John R Pierce
David Kerr wrote:
>>> maintenance_work_mem = 1GB
>>
>> So evidently, when it tries to actually allocate 1GB, it can't do it.
>> Ergo, that setting is too high for your machine.
>> ...
>
> seems like i've got 2GB free.


is this a 64bit postgres build?


if not, you're probably running out of virtual address space in the 32
bit user space, which is limited to like 2gb.

the other possibility, and here I'm not sure, is that
maintenance_work_mem is coming out of shared memory, and if so, you've
exceeded your SHMMAX kernel limit.
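
something like this should answer both questions (assuming the pg_config
that matches your install is on the PATH):

file "$(pg_config --bindir)/postgres"   # reports ELF 32-bit vs ELF 64-bit
cat /proc/sys/kernel/shmmax             # current SHMMAX limit, in bytes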



Re: vacuumdb ERROR: out of memory

From: Guillaume Lelarge
On 09/02/2010 05:49, John R Pierce wrote:
> David Kerr wrote:
>>>> maintenance_work_mem = 1GB
>>>
>>> So evidently, when it tries to actually allocate 1GB, it can't do it.
>>> Ergo, that setting is too high for your machine.
>>> ...
>>
>> seems like i've got 2GB free.
>
>
> is this a 64bit postgres build?
>
> if not, you're probably running out of virtual address space in the 32
> bit user space, which is limited to like 2gb.
>

IIRC, the virtual address space in 32bit platforms is 4GB.

> the other possibility, and here I'm not sure, is that
> maintenance_work_mem is coming out of shared memory, and if so, you've
> exceeded your SHMMAX kernel limit.
>

work_mem and maintenance_work_mem are not shared memory. AFAICT, David
needs to check if the VACUUM works with a lower setting for
maintenance_work_mem. For example, 512MB could work.
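
If you don't want to touch postgresql.conf just for the test, something
like this should work (512MB here is only the example value):

psql -d assessment <<'SQL'
SET maintenance_work_mem = '512MB';
VACUUM ANALYZE;
SQL

The SET only lasts for that session, so the global setting stays as it is.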


--
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com

Re: vacuumdb ERROR: out of memory

From: David Kerr
Guillaume Lelarge wrote:
> On 09/02/2010 05:49, John R Pierce wrote:
>> David Kerr wrote:
>>>>> maintenance_work_mem = 1GB
>>>> So evidently, when it tries to actually allocate 1GB, it can't do it.
>>>> Ergo, that setting is too high for your machine.
>>>> ...
>>> seems like i've got 2GB free.
>>
>> is this a 64bit postgres build?
>>
>> if not, you're probably running out of virtual address space in the 32
>> bit user space, which is limited to like 2gb.
>>
>
> IIRC, the virtual address space in 32bit platforms is 4GB.

it is a 32bit box.

>> the other possibility, and here I'm not sure, is that
>> maintenance_work_mem is coming out of shared memory, and if so, you've
>> exceeded your SHMMAX kernel limit.
>>
>
> work_mem and maintenance_work_mem are not shared memory. AFAICT, David
> needs to check if the VACUUM works with a lower setting for
> maintenance_work_mem. For example, 512MB could work.
>
>

Yes, vacuum -z works with 512MB. so any idea what was causing it not to
work with 1GB?

Thanks

Dave

Re: vacuumdb ERROR: out of memory

From: Guillaume Lelarge
On 09/02/2010 09:35, David Kerr wrote:
> Guillaume Lelarge wrote:
>> On 09/02/2010 05:49, John R Pierce wrote:
>>> David Kerr wrote:
>>>>>> maintenance_work_mem = 1GB
>>>>> So evidently, when it tries to actually allocate 1GB, it can't do it.
>>>>> Ergo, that setting is too high for your machine.
>>>>> ...
>>>> seems like i've got 2GB free.
>>>
>>> is this a 64bit postgres build?
>>>
>>> if not, you're probably running out of virtual address space in the 32
>>> bit user space, which is limited to like 2gb.
>>>
>>
>> IIRC, the virtual address space in 32bit platforms is 4GB.
>
> it is a 32bit box.
>
>>> the other possibility, and here I'm not sure, is that
>>> maintenance_work_mem is coming out of shared memory, and if so, you've
>>> exceeded your SHMMAX kernel limit.
>>>
>>
>> work_mem and maintenance_work_mem are not shared memory. AFAICT, David
>> needs to check if the VACUUM works with a lower setting for
>> maintenance_work_mem. For example, 512MB could work.
>>
>>
>
> Yes, vacuum -z works with 512MB. so any idea what was causing it not to
> work with 1GB?
>

Tom already explained that. The process couldn't get the 1GB it was
allowed to use with this setting of maintenance_work_mem.


--
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com

Re: vacuumdb ERROR: out of memory

From: David Kerr
Guillaume Lelarge wrote:
> On 09/02/2010 09:35, David Kerr wrote:
>> Guillaume Lelarge wrote:
>>> On 09/02/2010 05:49, John R Pierce wrote:
>>>> David Kerr wrote:
>>>>>>> maintenance_work_mem = 1GB
>>>>>> So evidently, when it tries to actually allocate 1GB, it can't do it.
>>>>>> Ergo, that setting is too high for your machine.
>>>>>> ...
>>>>> seems like i've got 2GB free.
>>>> is this a 64bit postgres build?
>>>>
>>>> if not, you're probably running out of virtual address space in the 32
>>>> bit user space, which is limited to like 2gb.
>>>>
>>> IIRC, the virtual address space in 32bit platforms is 4GB.
>> it is a 32bit box.
>>
>>>> the other possibility, and here I'm not sure, is that
>>>> maintenance_work_mem is coming out of shared memory, and if so, you've
>>>> exceeded your SHMMAX kernel limit.
>>>>
>>> work_mem and maintenance_work_mem are not shared memory. AFAICT, David
>>> needs to check if the VACUUM works with a lower setting for
>>> maintenance_work_mem. For example, 512MB could work.
>>>
>>>
>> Yes, vacuum -z works with 512MB. so any idea what was causing it not to
>> work with 1GB?
>>
>
> Tom already explained that. The process couldn't get the 1GB it was
> allowed to use with this setting of maintenance_work_mem.
>
>
Well, that made sense until I freed up a lot of memory on the box. I tried
it again with 2GB of free memory available. My ulimits are all unlimited,
so I'm wondering if there's a kernel setting I need, or something similar.

Oh, hmm, my swap is 517MB; that probably isn't helping. Usually swap
is 1.5-2x available memory, isn't it? (It is for most Unixes and Oracle,
but I'm not sure about PG and Linux.)

Thanks

Dave

Re: vacuumdb ERROR: out of memory

From: Magnus Hagander
On Tue, Feb 9, 2010 at 09:53, David Kerr <dmk@mr-paradox.net> wrote:
> Guillaume Lelarge wrote:
>>
>> On 09/02/2010 09:35, David Kerr wrote:
>>>
>>> Guillaume Lelarge wrote:
>>>>
>>>> On 09/02/2010 05:49, John R Pierce wrote:
>>>>>
>>>>> David Kerr wrote:
>>>>>>>>
>>>>>>>> maintenance_work_mem = 1GB
>>>>>>>
>>>>>>> So evidently, when it tries to actually allocate 1GB, it can't do it.
>>>>>>> Ergo, that setting is too high for your machine.
>>>>>>> ...
>>>>>>
>>>>>> seems like i've got 2GB free.
>>>>>
>>>>> is this a 64bit postgres build?
>>>>>
>>>>> if not, you're probably running out of virtual address space in the 32
>>>>> bit user space, which is limited to like 2gb.
>>>>>
>>>> IIRC, the virtual address space in 32bit platforms is 4GB.

IIRC, on Linux that will be a max of 3Gb available to userspace
processes. Certainly not 4Gb - but it could be 2.


>>> it is a 32bit box.
>>>
>>>>> the other possibility, and here I'm not sure, is that
>>>>> maintenance_work_mem is coming out of shared memory, and if so, you've
>>>>> exceeded your SHMMAX kernel limit.
>>>>>
>>>> work_mem and maintenance_work_mem are not shared memory. AFAICT, David
>>>> needs to check if the VACUUM works with a lower setting for
>>>> maintenance_work_mem. For example, 512MB could work.
>>>>
>>>>
>>> Yes, vacuum -z works with 512MB. so any idea what was causing it not to
>>> work with 1GB?
>>>
>>
>> Tom already explained that. The process couldn't get the 1GB it was
>> allowed to use with this setting of maintenance_work_mem.
>>
>>
> Well, that made sense until I freed up a lot of memory on the box. I had
> tried it again with 2GB of free memory available to me to use. My ulimits
> are all unlimited. So i'm wondering if there's a kernel setting I need, or
> something similar.

You may well be running out of *address space* rather than pure
memory. PostgreSQL is failing to allocate 1Gb of *contiguous* memory.
Not just 1Gb of memory anywhere. Shared memory, for example, lives at
a fixed location already. There may be >1Gb free in the address space,
just not where you need it.
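
If you're curious, you can see that layout for yourself; a rough sketch,
where <PID> is a placeholder for whichever backend you pick:

ps -C postgres -o pid,cmd    # pick a backend PID serving that database
cat /proc/<PID>/maps         # the gaps between these mappings are all malloc has to work with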

In general, when you are starting to talk about things like 1Gb
maintenance_work_mem, you should've switched to 64-bit a while ago :-)


> oh, hmm, my swap is 517Megs, that probably isn't helping. Usually swap is
> 1.5/2x available memory, isn't it? (it is for most unix's and oracle, but
> i'm not sure about PG and linux)

I don't think that affects this problem.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

Re: vacuumdb ERROR: out of memory

From: David Kerr
Magnus Hagander wrote:
> On Tue, Feb 9, 2010 at 09:53, David Kerr <dmk@mr-paradox.net> wrote:
>> Guillaume Lelarge wrote:
>>> On 09/02/2010 09:35, David Kerr wrote:
>>>> Guillaume Lelarge wrote:
>>>>> On 09/02/2010 05:49, John R Pierce wrote:
>>>>>> David Kerr wrote:
>>>>>>>>> maintenance_work_mem = 1GB
>>>>>>>> So evidently, when it tries to actually allocate 1GB, it can't do it.
>>>>>>>> Ergo, that setting is too high for your machine.
>>>>>>>> ...
>>>>>>> seems like i've got 2GB free.
>>>>>> is this a 64bit postgres build?
>>>>>>
>>>>>> if not, you're probably running out of virtual address space in the 32
>>>>>> bit user space, which is limited to like 2gb.
>>>>>>
>>>>> IIRC, the virtual address space in 32bit platforms is 4GB.
>
> IIRC, on Linux that will be a max of 3Gb available to userspace
> processes. Certainly not 4Gb - but it could be 2.
>
>
>>>> it is a 32bit box.
>>>>
>>>>>> the other possibility, and here I'm not sure, is that
>>>>>> maintenance_work_mem is coming out of shared memory, and if so, you've
>>>>>> exceeded your SHMMAX kernel limit.
>>>>>>
>>>>> work_mem and maintenance_work_mem are not shared memory. AFAICT, David
>>>>> needs to check if the VACUUM works with a lower setting for
>>>>> maintenance_work_mem. For example, 512MB could work.
>>>>>
>>>>>
>>>> Yes, vacuum -z works with 512MB. so any idea what was causing it not to
>>>> work with 1GB?
>>>>
>>> Tom already explained that. The process couldn't get the 1GB it was
>>> allowed to use with this setting of maintenance_work_mem.
>>>
>>>
>> Well, that made sense until I freed up a lot of memory on the box. I had
>> tried it again with 2GB of free memory available to me to use. My ulimits
>> are all unlimited. So i'm wondering if there's a kernel setting I need, or
>> something similar.
>
> You may well be running out of *address space* rather than pure
> memory. PostgreSQL is failing to allocate 1Gb of *contiguous* memory.
> Not just 1Gb of memory anywhere. Shared memory, for example, lives at
> a fixed location already. There may be >1Gb free in the address space,
> just not where you need it.

Ok, that makes sense. It never occurred to me that malloc would require
a contiguous chunk the full size of the allocation request.


> In general, when you are starting to talk about things like 1Gb
> maintenance_work_mem, you should've switched to 64-bit a while ago :-)

Yes, I know. I actually specced out the server as 64-bit, but someone
messed up and I'm making do with what I have.

>> oh, hmm, my swap is 517Megs, that probably isn't helping. Usually swap is
>> 1.5/2x available memory, isn't it? (it is for most unix's and oracle, but
>> i'm not sure about PG and linux)
>
> I don't think that affects this problem.

ok.

Thanks

Dave


Re: vacuumdb ERROR: out of memory

From: John R Pierce
Guillaume Lelarge wrote:
>> is this a 64bit postgres build?
>>
>> if not, you're probably running out of virtual address space in the 32
>> bit user space, which is limited to like 2gb.
>>
>>
>
> IIRC, the virtual address space in 32bit platforms is 4GB.
>

it is, but within that 4gb, the kernel uses the top 1gb, so there's 3gb
left for user program space in each process.  shared memory, runtime
libraries, etc all have to come out of this 3gb user space.





Re: vacuumdb ERROR: out of memory

From: Scott Marlowe
On Tue, Feb 9, 2010 at 1:55 PM, John R Pierce <pierce@hogranch.com> wrote:
> Guillaume Lelarge wrote:
>>>
>>> is this a 64bit postgres build?
>>>
>>> if not, you're probably running out of virtual address space in the 32
>>> bit user space, which is limited to like 2gb.
>>>
>>>
>>
>> IIRC, the virtual address space in 32bit platforms is 4GB.
>>
>
> it is, but within that 4gb, the kernel uses the top 1gb, so there's 3gb left
> for user program space in each process.  shared memory, runtime libraries,
> etc all have to come out of this 3gb user space

Note that any modern 32-bit Linux (oxymoron there, but anyway) can run
with PAE enabled and access far more than just 3GB of memory.
However, no single user-space app can hit more than 2 or 3GB (I forget
which) at a time.
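
A rough way to check what you're actually running (kernel names vary by
distro; on SLES the PAE flavour is usually called -pae or -bigsmp):

uname -r                                                # which kernel flavour is booted
grep -qw pae /proc/cpuinfo && echo "CPU supports PAE"   # whether the CPU has PAE at all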