Thread: Cause: org.postgresql.util.PSQLException: ERROR: could not resize shared memory segment "/PostgreSQL.1946998112" to 8388608 bytes: No space left on device

Hello team,

 

I'm getting the below error while accessing a PostgreSQL 11 database. Please suggest a solution for this issue.

Cause: org.postgresql.util.PSQLException: ERROR: could not resize shared memory segment "/PostgreSQL.1946998112" to 8388608 bytes: No space left on device

        at com.caucho.el.ArrayResolverExpr.invoke(ArrayResolverExpr.java:260)

 

Details from Docker:

 

bash-4.4$ mount | grep /dev/shm

shm on /dev/shm type tmpfs (rw,context="system_u:object_r:container_file_t:s0:c127,c569",nosuid,nodev,noexec,relatime,size=65536k)

bash-4.4$ free && ipcs -l && echo "page size:" && getconf PAGE_SIZE

             total       used       free     shared    buffers     cached

Mem:      32779840   24612300    8167540          0         52   23735916

-/+ buffers/cache:     876332   31903508

Swap:      4063228      91136    3972092

 

------ Messages: Limits --------

max queues system wide = 16384

max size of message (bytes) = 8192

default max size of queue (bytes) = 16384

 

------ Shared Memory Limits --------

max number of segments = 4096

max seg size (kbytes) = 18014398509465599

max total shared memory (pages) = 18446744073692774399

min seg size (bytes) = 1

 

------ Semaphore Limits --------

max number of arrays = 128

max semaphores per array = 250

max semaphores system wide = 32000

max ops per semop call = 32

semaphore max value = 32767

 

page size:

4096

bash-4.4$

bash-4.4$ psql

psql (11.2)

Type "help" for help.

 

postgres=# show max_parallel_workers_per_gather;

max_parallel_workers_per_gather

---------------------------------

2

(1 row)

 

 

postgres=#

 

Thanks,

 

On Mon, Jun 3, 2019 at 5:56 AM Daulat Ram <Daulat.Ram@exponential.com> wrote:
> Cause: org.postgresql.util.PSQLException: ERROR: could not resize shared memory segment "/PostgreSQL.1946998112" to 8388608 bytes: No space left on device
 

> shm on /dev/shm type tmpfs
> (rw,context="system_u:object_r:container_file_t:s0:c127,c569",nosuid,nodev,noexec,relatime,size=65536k)

I don't use Docker myself but I believe you can either tell it to
expose the host's /dev/shm or you can start it with something like
--shm-size="4096m".
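For example, a sketch of the second option (the --shm-size flag is
standard Docker; the container name, image tag, and 4 GiB size here
are assumptions to adapt to your setup):

```shell
# Recreate the container with a larger /dev/shm (4 GiB here; pick a
# size that fits your workload). "pg" and "postgres:11" are placeholders.
docker run -d --name pg --shm-size=4096m postgres:11

# Confirm the new tmpfs size from inside the container:
docker exec pg df -h /dev/shm
```

With docker-compose, the equivalent per-service key is shm_size.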

It's surprisingly difficult to know how much shared memory PostgreSQL
really needs, today.  Without parallel query, it's complicated but
roughly: you can use work_mem for each executor node, and that is (at
a first approximation) linked to how many joins there are in your
query.  With the advent of parallel query, it's multiplied by the
number of workers.  If any of the joins happen to be parallel hash
joins, then the memory comes out of shared memory instead of private
memory.  It's not fundamentally different in terms of the amount
needed, but Docker users are forced to confront the question so they
can set --shm-size.

One argument is that you should set it sky-high, since private memory
has no such limit, and from a virtual memory point of view it's all
the same in the end: it's just memory.  The difficulty with choosing a
limit here is that --shm-size is a system-wide limit, but PostgreSQL
not only has no system-wide memory budgeting, it doesn't even have a
per-query memory budget.  It just has this "per operation" (usually
per executor node) thing.

The total amount of memory you need to run PostgreSQL queries is a
function of work_mem * number of concurrent queries you expect to run
* number of tables you expect them to join * number of parallel
workers you expect to run.  The amount of it that happens to be in
/dev/shm on a Linux system (rather than private memory) is controlled
by what fraction of your joins are parallel hash joins.  Making our
memory limits better is really hard.
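That back-of-envelope formula can be sketched as shell arithmetic to
pick a concrete --shm-size; every workload number below is an
assumption you would replace with your own:

```shell
# Rough upper bound on the /dev/shm needed for parallel hash joins,
# following the formula above. All workload numbers are assumptions.
work_mem_kb=4096          # work_mem = 4MB, the PostgreSQL default
concurrent_queries=10     # expected concurrent queries (assumption)
hash_joins_per_query=3    # expected parallel hash joins per query (assumption)
processes_per_query=3     # 2 workers per gather (as shown above) plus the leader
total_kb=$(( work_mem_kb * concurrent_queries * hash_joins_per_query * processes_per_query ))
echo "rough /dev/shm budget: ${total_kb} kB"
# prints: rough /dev/shm budget: 368640 kB (about 360 MB)
```

Under these assumed numbers you'd want --shm-size comfortably above
~360 MB; the 64 MB (size=65536k) default visible in the mount output
earlier in the thread is far below that.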

-- 
Thomas Munro
https://enterprisedb.com