* Konstantin Malanchev <> [2019-07-09 11:51]:
> Hello,
>
> I'm running PostgreSQL 11.4 on Linux 4.12.14 and I see the following issue while executing a single query:
> ERROR: could not resize shared memory segment "/PostgreSQL.1596105766" to 536870912 bytes: No space left on device
>
> In my postgresql.conf I set shared_buffers = 256MB, and I can see that it is applied:
> SHOW shared_buffers;
> shared_buffers
> ----------------
> 256MB
>
> At the same time, during the query execution I see a lot of files in /dev/shm with a total size of more than 256MB:
>
> ls -lh /dev/shm
>
> How can I configure a limit for the total shared memory size?
The limit is mostly set by the available memory, as /dev/shm is like virtual memory or a RAM disk.
Increase the RAM.
Jean
I have 8 GB RAM and /dev/shm size is 4GB, and there is no significant memory usage by other system processes. I'm surprised that Postgres uses more space in /dev/shm than the shared_buffers parameter allows; probably I don't understand what this parameter means.
I have no opportunity to enlarge the total RAM, and this query probably requires too much RAM to execute. Should Postgres just use the HDD as temporary storage in this case?
Konstantin
On 9 Jul 2019, at 12:53, Jean Louis <> wrote:
> * Konstantin Malanchev <> [2019-07-09 11:51]:
> > Hello,
> > I'm running PostgreSQL 11.4 on Linux 4.12.14 and I see the following issue while executing a single query: ERROR: could not resize shared memory segment "/PostgreSQL.1596105766" to 536870912 bytes: No space left on device
> > In my postgresql.conf I set shared_buffers = 256MB, and I can see that it is applied: SHOW shared_buffers; shared_buffers ---------------- 256MB
> > At the same time, during the query execution I see a lot of files in /dev/shm with a total size of more than 256MB:
> > ls -lh /dev/shm
> > How can I configure a limit for the total shared memory size?
>
> The limit is mostly set by the available memory, as /dev/shm is like virtual memory or a RAM disk.
* Konstantin Malanchev <> [2019-07-09 12:10]:
> Hello Jean,
>
> I have 8 GB RAM and /dev/shm size is 4GB, and there is no significant memory usage by other system processes. I'm surprised that Postgres uses more space in /dev/shm than the shared_buffers parameter allows; probably I don't understand what this parameter means.
>
> I have no opportunity to enlarge the total RAM, and this query probably requires too much RAM to execute. Should Postgres just use the HDD as temporary storage in this case?
That I cannot say. I do know that /dev/shm can grow as large as the available free RAM.
Jean
On Tue, Jul 9, 2019 at 10:15 PM Jean Louis <> wrote:
> * Konstantin Malanchev <> [2019-07-09 12:10]:
> > I have 8 GB RAM and /dev/shm size is 4GB, and there is no significant memory usage by other system processes. I'm surprised that Postgres uses more space in /dev/shm than the shared_buffers parameter allows; probably I don't understand what this parameter means.
> >
> > I have no opportunity to enlarge the total RAM, and this query probably requires too much RAM to execute. Should Postgres just use the HDD as temporary storage in this case?
>
> That I cannot say. I do know that /dev/shm can grow as large as the available free RAM.
Hi,
PostgreSQL creates segments in /dev/shm for parallel queries (via
shm_open()), not for shared buffers. The amount used is controlled by
work_mem. Queries can use up to work_mem for each node you see in the
EXPLAIN plan, and for each process, so it can be quite a lot if you
have lots of parallel worker processes and/or lots of
tables/partitions being sorted or hashed in your query.
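For illustration, the settings that drive this can be checked directly; a minimal sketch (the bound is only a rough rule of thumb, and the 512MB / 2-worker figures are simply the values that come up later in this thread):

  SHOW work_mem;                          -- memory budget per plan node, per process
  SHOW max_parallel_workers_per_gather;   -- extra worker processes per Gather node
  -- Rough upper bound for one Parallel Hash node:
  --   work_mem * (workers + 1 leader), e.g. 512MB * (2 + 1) = 1.5GB of /dev/shm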
--
Thomas Munro
https://enterprisedb.com
Hello Thomas,
Thank you for the explanation. work_mem = 512MB and max_parallel_workers_per_gather = 2, and I run only one Postgres instance and only one query. EXPLAIN shows "Workers Planned: 2" for this query. Why can it use more than 1GB of /dev/shm?
Konstantin
> On 9 Jul 2019, at 13:51, Thomas Munro <> wrote:
>
> On Tue, Jul 9, 2019 at 10:15 PM Jean Louis <> wrote:
>> * Konstantin Malanchev <> [2019-07-09 12:10]:
>>> I have 8 GB RAM and /dev/shm size is 4GB, and there is no significant memory usage by other system processes. I'm surprised that Postgres uses more space in /dev/shm than the shared_buffers parameter allows; probably I don't understand what this parameter means.
>>>
>>> I have no opportunity to enlarge the total RAM, and this query probably requires too much RAM to execute. Should Postgres just use the HDD as temporary storage in this case?
>>
>> That I cannot say. I do know that /dev/shm can grow as large as the available free RAM.
>
> Hi,
>
> PostgreSQL creates segments in /dev/shm for parallel queries (via
> shm_open()), not for shared buffers. The amount used is controlled by
> work_mem. Queries can use up to work_mem for each node you see in the
> EXPLAIN plan, and for each process, so it can be quite a lot if you
> have lots of parallel worker processes and/or lots of
> tables/partitions being sorted or hashed in your query.
>
> --
> Thomas Munro
> https://enterprisedb.com
On Tue, Jul 9, 2019 at 11:11 PM Konstantin Malanchev <> wrote:
> Thank you for the explanation. work_mem = 512MB and max_parallel_workers_per_gather = 2, and I run only one Postgres instance and only one query. EXPLAIN shows "Workers Planned: 2" for this query. Why can it use more than 1GB of /dev/shm?
For example, if you have one Parallel Hash Join in your plan, it could
allocate up to 512MB * 3 of shared memory (3 = leader process + 2
workers). It sounds like you'll need to set work_mem smaller. If you
run EXPLAIN ANALYZE you'll see how much memory is used by individual
operations. Usually it's regular private anonymous memory, but for
Parallel Hash it's /dev/shm memory.
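A minimal sketch of that suggestion, using the query shown later in this thread; the 64MB figure is only an example, not a value recommended anywhere in the thread:

  -- Lower work_mem for this session only, then look at per-node memory in the output.
  SET work_mem = '64MB';
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT * FROM my_view
  INNER JOIN another_mat_view USING (oid)
  ORDER BY oid, field_name;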
--
Thomas Munro
https://enterprisedb.com
Thank you!
> For example, if you have one Parallel Hash Join in your plan, it could
> allocate up to 512MB * 3 of shared memory (3 = leader process + 2
> workers).
I'm executing the query with a smaller work_mem; it will take some time. But I'm still confused why it used all of /dev/shm (4GB) and failed with the "no space left" error while work_mem = 512MB.
> If you
> run EXPLAIN ANALYZE you'll see how much memory is used by individual
> operations.
I cannot run EXPLAIN ANALYZE because the query fails. Here is the EXPLAIN output for the query:
EXPLAIN
CREATE MATERIALIZED VIEW IF NOT EXISTS new_mat_view
AS
SELECT * FROM my_view
INNER JOIN another_mat_view USING (oid)
ORDER BY oid, field_name;
Gather Merge (cost=5696039356565.87..10040767101103.24 rows=37237923518438 width=31)
Workers Planned: 2
-> Sort (cost=5696039355565.85..5742586759963.90 rows=18618961759219 width=31)
Sort Key: my_table.oid, my_table.field_name
-> Parallel Hash Join (cost=11030236131.39..255829470118.27 rows=18618961759219 width=31)
Hash Cond: (another_mat_view.oid = my_table.oid)
-> Parallel Seq Scan on another_mat_view (cost=0.00..652514.56 rows=31645556 width=8)
-> Parallel Hash (cost=636676233.38..636676233.38 rows=20353804801 width=31)
-> Parallel Seq Scan on my_table (cost=0.00..636676233.38 rows=20353804801 width=31)
Filter: (flag = '0000000000000000'::bit(16))
Konstantin
On Wed, Jul 10, 2019 at 12:27 AM Konstantin Malanchev <> wrote:
> I'm executing the query with a smaller work_mem; it will take some time. But I'm still confused why it used all of /dev/shm (4GB) and failed with the "no space left" error while work_mem = 512MB.
I think it could fail that way for two reasons: /dev/shm size limit
(mount option, which I think you are saying you have set to 4GB?), or
your system ran out of RAM + swap. The directory listing in your first
message only shows 1.4GB, not 4GB, so perhaps it's the second problem.
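One way to keep such a query out of /dev/shm entirely, as a sketch of the "use the disk instead" idea raised earlier in the thread: with parallelism disabled for the session, the hash join and sort use private memory and spill to temporary files on disk rather than to shared memory segments (the values below are illustrative, not advice from the thread):

  -- Session-level sketch: force a non-parallel plan so work_mem overflows
  -- go to temporary files on disk instead of /dev/shm segments.
  SET max_parallel_workers_per_gather = 0;
  SET work_mem = '256MB';   -- illustrative value only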
> -> Parallel Hash Join (cost=11030236131.39..255829470118.27 rows=18618961759219 width=31)
> Hash Cond: (another_mat_view.oid = my_table.oid)
> -> Parallel Seq Scan on another_mat_view (cost=0.00..652514.56 rows=31645556 width=8)
> -> Parallel Hash (cost=636676233.38..636676233.38 rows=20353804801 width=31)
> -> Parallel Seq Scan on my_table (cost=0.00..636676233.38 rows=20353804801 width=31)
> Filter: (flag = '0000000000000000'::bit(16))
It's strange that it's hashing the ~20B row table instead of the ~30M row table.
--
Thomas Munro
https://enterprisedb.com
> I think it could fail that way for two reasons: /dev/shm size limit
> (mount option, which I think you are saying you have set to 4GB?), or
> your system ran out of RAM + swap.
df /dev/shm
Filesystem 1K-blocks Used Available Use% Mounted on
shm 4194304 351176 3843128 9% /dev/shm
mount | grep /dev/shm
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=4194304k)
I have no swap and 8GB of RAM; when there are no active queries, only ~800MB of RAM is used. So I don't believe it is an out-of-memory problem.
> The directory listing in your first message only shows 1.4GB, not 4GB, so perhaps it's the second problem.
I cannot catch the right moment with ls, but I ran a bash for-loop that logs "df /dev/shm" every minute, and the last entry before the failure shows that 89% of /dev/shm is used:
Filesystem 1K-blocks Used Available Use% Mounted on
shm 4194304 3732368 461936 89% /dev/shm
There are no other processes that use /dev/shm.
> It's strange that it's hashing the ~20B row table instead of the ~30M row table.
It could be a question for another mail thread =)
Konstantin