Thread: FreeSpaceMap hashtable out of memory

FreeSpaceMap hashtable out of memory

From: "Maksim Likharev"
Hi,
Using PG under Cygwin, we are getting the following error message
during INSERT INTO:
"FreeSpaceMap hashtable out of memory".

What does that mean?
And, setting aside the 'PG under Cygwin' part for a moment, what is
this message about in general and, more importantly, how do we fix it?

Thank you.


Re: FreeSpaceMap hashtable out of memory

From: Tom Lane
"Maksim Likharev" <mlikharev@aurigin.com> writes:
> Using PG under Cygwin, we are getting the following error message
> during INSERT INTO:
> "FreeSpaceMap hashtable out of memory".

Hm, that's not supposed to happen.  Can you create a reproducible
example?
        regards, tom lane


Re: FreeSpaceMap hashtable out of memory

From: "Maksim Likharev"
It is hard to produce a small enough reproducible subset, due to the
large DB and the randomness of the situation.
But here is what we see in the server log file:

It seems like
WARNING:  ShmemAlloc: out of memory
ERROR:    FreeSpaceMap hashtable out of memory

go together. Is this related to the size of shared memory? Could
increasing shared memory solve this, or is that just a coincidence?

Another thing (from the code): it seems that hash_search is trying to
insert a new entry into DynHash and cannot allocate memory in the
DynHash context. So another question: is the DynHash context
upper-limited, in other words, can it not grow beyond 8 * 1024 * 1024
bytes?
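For reference, the shared-memory-related settings can be checked with
something like this (just a sketch; it assumes pg_settings is
available, i.e. 7.3 or later -- SHOW works on older servers too):

    SELECT name, setting
      FROM pg_settings
     WHERE name IN ('shared_buffers', 'max_connections',
                    'max_fsm_relations', 'max_fsm_pages',
                    'max_locks_per_transaction');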

Any suggestions are welcome,
Thank you.


------------------- LOG
----------------------------------------------------
LOG:  all server processes terminated; reinitializing shared memory and semaphores
IpcMemoryCreate: shmget(key=5432001, size=4669440, 03600) failed: Not enough core

This error usually means that PostgreSQL's request for a shared
memory segment exceeded available memory or swap space.
To reduce the request size (currently 4669440 bytes), reduce
PostgreSQL's shared_buffers parameter (currently 256) and/or
its max_connections parameter (currently 128).

The PostgreSQL Administrator's Guide contains more information about
shared memory configuration.


Error 2:
WARNING:  ShmemAlloc: out of memory
ERROR:  FreeSpaceMap hashtable out of memory
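
For reference, a back-of-the-envelope sketch of where the 4669440-byte
request in the first error comes from, assuming the default 8 kB block
size (the exact breakdown varies by version):

    256 shared_buffers * 8192 bytes = 2,097,152 bytes   (buffer pool)
    remaining ~2.5 MB               = lock table, FSM hashtable, and
                                      other shared structures

And the knobs the hint refers to, in postgresql.conf (the values below
are purely illustrative, not recommendations; the server must be
restarted for them to take effect):

    # postgresql.conf -- illustrative values only
    shared_buffers  = 128    # 8 kB buffers; smaller value -> smaller request
    max_connections = 64     # fewer backends also shrinks the request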





Re: FreeSpaceMap hashtable out of memory

From: Tom Lane
"Maksim Likharev" <mlikharev@aurigin.com> writes:
> It seems like 
> WARNING:  ShmemAlloc: out of memory
> ERROR:    FreeSpaceMap hashtable out of memory
> go together. Is this related to the size of shared memory?

Yeah, the FSM hashtable is in shared memory, so your problem is that
you're running out of shared memory.  This is not necessarily the fault
of the FSM as such though; it could be that some other shared data
structure is growing bigger than it was expected to.

Thinking about it, I'm fairly certain that the FSM can't grow larger
than the bounds you set for it, and so the problem is presumably
elsewhere.  The most likely bet is that the lock table is getting larger
than expected.  There is a control knob for the estimated size of the
lock table (max_locks_per_transaction), so if that's where the problem
is, it's easy to fix.  You should try to find out if that's the issue
though.  When this happens, are there a very large number of entries in
the pg_locks view?
        regards, tom lane
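
For reference, a quick way to check the lock table when the error
occurs, as Tom suggests (just a sketch):

    -- how many entries are currently in the shared lock table?
    SELECT count(*) FROM pg_locks;

And the knob he mentions, in postgresql.conf (the value shown is
illustrative; raising it enlarges the shared lock table, which is sized
roughly as max_locks_per_transaction * max_connections, and the change
requires a server restart):

    # postgresql.conf -- illustrative value; the default is 64
    max_locks_per_transaction = 128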