Re: ERROR: too many dynamic shared memory segments - Mailing list pgsql-general

From: dainius.b
Subject: Re: ERROR: too many dynamic shared memory segments
Msg-id: 1582191439405-0.post@n3.nabble.com
In response to: Re: ERROR: too many dynamic shared memory segments (Thomas Munro <thomas.munro@gmail.com>)
List: pgsql-general
Hello,
I am also seeing a large number of "too many dynamic shared memory segments" errors. Upgrading Postgres to 12.2 did not help.
The server has 64 GB RAM and 16 CPUs. Postgres parameters:
        "max_connections":500,
        "shared_buffers":"16GB",
        "effective_cache_size":"48GB",
        "maintenance_work_mem":"2GB",
        "checkpoint_completion_target":0.9,
        "wal_buffers":"32MB",
        "default_statistics_target":500,
        "random_page_cost":1.1,
        "effective_io_concurrency":200,
        "work_mem":"20971kB",
        "min_wal_size":"2GB",
        "max_wal_size":"8GB",
        "max_worker_processes":8,
        "max_parallel_workers_per_gather":4,
        "max_parallel_workers":8,
        "log_statement":"none"

This error happens when executing many parallel queries concurrently, each with a fairly complex plan:

Limit  (cost=163243.91..163250.02 rows=51 width=464) (actual time=1224.817..1224.834 rows=31 loops=1)
  ->  Gather Merge  (cost=163243.91..165477.68 rows=18656 width=464) (actual time=1224.815..1254.031 rows=31 loops=1)
        Workers Planned: 4
        Workers Launched: 4
        ->  Sort  (cost=162243.85..162255.51 rows=4664 width=464) (actual time=1214.032..1214.032 rows=6 loops=5)
              Sort Key: (...)
              Sort Method: quicksort  Memory: 30kB
              Worker 0:  Sort Method: quicksort  Memory: 28kB
              Worker 1:  Sort Method: quicksort  Memory: 27kB
              Worker 2:  Sort Method: quicksort  Memory: 27kB
              Worker 3:  Sort Method: quicksort  Memory: 27kB
              ->  Parallel Hash Semi Join  (cost=41604.51..162088.25 rows=4664 width=464) (actual time=409.437..1213.922 rows=6 loops=5)
                    Hash Cond:  (...)
                    ->  Parallel Hash Join  (cost=28073.57..148289.42 rows=17880 width=464) (actual time=234.973..1165.754 rows=36930 loops=5)
                          Hash Cond:  (...)
                          ->  Parallel Hash Left Join  (cost=20732.39..140901.30 rows=17880 width=445) (actual time=187.482..1083.629 rows=36930 loops=5)
                                Hash Cond:  (...)
                                ->  Parallel Hash Left Join  (cost=14850.80..134972.78 rows=17880 width=435) (actual time=148.107..1010.915 rows=36930 loops=5)
                                      Hash Cond:  (...)
                                      ->  Parallel Hash Left Join  (cost=8969.21..129044.25 rows=17880 width=425) (actual time=110.696..938.602 rows=36930 loops=5)
                                            Hash Cond:  (...)
                                            ->  Nested Loop  (cost=3087.61..123115.72 rows=17880 width=411) (actual time=70.827..861.142 rows=36930 loops=5)
                                                  ->  Nested Loop  (cost=3087.19..104038.13 rows=38340 width=263) (actual time=70.742..621.262 rows=37073 loops=5)
                                                        ->  Parallel Bitmap Heap Scan on (...)  (cost=3086.76..73462.00 rows=39271 width=167) (actual time=70.653..358.576 rows=37103 loops=5)
                                                              Recheck Cond:  (...)
                                                              Filter: (...)
                                                              Rows Removed by Filter: 42915
                                                              Heap Blocks: exact=17872
                                                              ->  Bitmap Index Scan on (...)  (cost=0.00..3047.49 rows=378465 width=0) (actual time=52.331..52.331 rows=400144 loops=1)
                                                                    Index Cond:  (...)
                                                        ->  Index Scan using (...)  (cost=0.43..0.78 rows=1 width=96) (actual time=0.006..0.006 rows=1 loops=185514)
                                                              Index Cond:  (...)
                                                              Filter:  (...)
                                                              Rows Removed by Filter: 0
                                                  ->  Index Scan using (...)  (cost=0.43..0.50 rows=1 width=152) (actual time=0.006..0.006 rows=1 loops=185367)
                                                        Index Cond:  (...)
                                                        Filter:  (...)
                                                        Rows Removed by Filter: 0
                                            ->  Parallel Hash  (cost=3675.71..3675.71 rows=176471 width=18) (actual time=38.590..38.590 rows=60000 loops=5)
                                                  Buckets: 524288  Batches: 1  Memory Usage: 20640kB
                                                  ->  Parallel Seq Scan on (...)  (cost=0.00..3675.71 rows=176471 width=18) (actual time=0.020..11.378 rows=60000 loops=5)
                                      ->  Parallel Hash  (cost=3675.71..3675.71 rows=176471 width=18) (actual time=36.769..36.770 rows=60000 loops=5)
                                            Buckets: 524288  Batches: 1  Memory Usage: 20608kB
                                            ->  Parallel Seq Scan on (...)  (cost=0.00..3675.71 rows=176471 width=18) (actual time=0.018..11.665 rows=60000 loops=5)
                                ->  Parallel Hash  (cost=3675.71..3675.71 rows=176471 width=18) (actual time=38.415..38.415 rows=60000 loops=5)
                                      Buckets: 524288  Batches: 1  Memory Usage: 20640kB
                                      ->  Parallel Seq Scan on (...)  (cost=0.00..3675.71 rows=176471 width=18) (actual time=0.021..11.781 rows=60000 loops=5)
                          ->  Parallel Hash  (cost=5619.97..5619.97 rows=137697 width=27) (actual time=46.665..46.665 rows=66096 loops=5)
                                Buckets: 524288  Batches: 1  Memory Usage: 24544kB
                                ->  Parallel Seq Scan on (...)  (cost=0.00..5619.97 rows=137697 width=27) (actual time=0.024..16.629 rows=66096 loops=5)
                    ->  Parallel Hash  (cost=12679.65..12679.65 rows=68103 width=4) (actual time=28.674..28.674 rows=41176 loops=5)
                          Buckets: 262144  Batches: 1  Memory Usage: 10176kB
                          ->  Parallel Index Only Scan using (...)  (cost=0.57..12679.65 rows=68103 width=4) (actual time=0.048..15.443 rows=41176 loops=5)
                                Index Cond: ()
                                Heap Fetches: 205881
Planning Time: 6.941 ms
Execution Time: 1254.251 ms
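
The plan has five parallel hash joins under a single Gather Merge with four workers, and many of these queries run at the same time. To estimate how many such plans are actually in flight when the errors fire, I am thinking of logging plans server-side with auto_explain (a sketch; the threshold is a placeholder, not a recommendation):

        LOAD 'auto_explain';
        SET auto_explain.log_min_duration = '500ms';  -- log the plan of any statement slower than this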

So is the only way to avoid these errors to decrease work_mem or to turn off parallelism? I had hoped that under load the queries would simply take longer to complete, but instead a large number of them just fail with this error.
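
In the meantime, a per-query mitigation I am testing, assuming the offending queries can be isolated: SET LOCAL confines the change to one transaction, so the rest of the workload keeps its parallel settings.

        BEGIN;
        SET LOCAL max_parallel_workers_per_gather = 0;  -- disables parallelism for this transaction only
        -- ... run the query whose plan is shown above ...
        COMMIT;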


Thomas Munro wrote:
> On Fri, Jan 31, 2020 at 11:05 PM Nicola Contu <nicola.contu@> wrote:
>> Do you still recommend to increase max_conn?
>
> Yes, as a workaround of last resort.  The best thing would be to
> figure out why you are hitting the segment limit, and see if there is
> something we could tune to fix that.  If you EXPLAIN your queries, do
> you see plans that have a lot of "Gather" nodes in them, perhaps
> involving many partitions?  Or are you running a lot of parallel
> queries at the same time?  Or are you running queries that do very,
> very large parallel hash joins?  Or something else?
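
For anyone else following the thread, the last-resort workaround would look like the following, if I understand it correctly: raising max_connections enlarges the DSM slot table described above, since the slot count scales with MaxBackends. The value 1000 is only an example, and a restart is required.

        ALTER SYSTEM SET max_connections = 1000;  -- example value, not a recommendation
        -- then restart the server, e.g.: pg_ctl restart -D <data directory>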




