Re: Query error: could not resize shared memory segment - Mailing list pgsql-general

From: Thuc Nguyen Canh
Subject: Re: Query error: could not resize shared memory segment
Msg-id: CAG903Prw62kca90Q-hcbffc+hM2HH3_ZTPao09+YyAe=eMcJmg@mail.gmail.com
In response to: Re: Query error: could not resize shared memory segment (Thomas Munro <thomas.munro@enterprisedb.com>)
Responses: Re: Query error: could not resize shared memory segment
List: pgsql-general
Hi,
Here is the query plan of a query that causes the above issue for any random_page_cost < 3 (work_mem is left at its default):
'Sort (cost=9441498.11..9542397.83 rows=40359886 width=64) (actual time=33586.588..33586.590 rows=4 loops=1)'
' Sort Key: (to_char(b.week, 'dd-mm-yyyy'::text))'
' Sort Method: quicksort Memory: 25kB'
' CTE sumorder'
' -> GroupAggregate (cost=763614.25..775248.11 rows=513746 width=16) (actual time=16587.507..17320.290 rows=4 loops=1)'
' Group Key: (date_trunc('month'::text, to_timestamp("order".order_time)))'
' -> Sort (cost=763614.25..764923.47 rows=523689 width=14) (actual time=16587.362..16913.230 rows=539089 loops=1)'
' Sort Key: (date_trunc('month'::text, to_timestamp("order".order_time)))'
' Sort Method: quicksort Memory: 47116kB'
' -> Bitmap Heap Scan on "order" (cost=12679.94..713868.12 rows=523689 width=14) (actual time=516.465..15675.517 rows=539089 loops=1)'
' Recheck Cond: ((service_id = ANY ('{SGN-BIKE,SGN-POOL}'::text[])) AND (order_time >= '1509469200'::double precision))'
' Heap Blocks: exact=242484'
' -> Bitmap Index Scan on order_service_id_order_time_idx (cost=0.00..12549.02 rows=523689 width=0) (actual time=425.697..425.697 rows=539089 loops=1)'
' Index Cond: ((service_id = ANY ('{SGN-BIKE,SGN-POOL}'::text[])) AND (order_time >= '1509469200'::double precision))'
' CTE badorder'
' -> Finalize GroupAggregate (cost=993588.49..995549.11 rows=15712 width=16) (actual time=16257.720..16263.183 rows=13 loops=1)'
' Group Key: (date_trunc('month'::text, to_timestamp(order_1.order_time)))'
' -> Gather Merge (cost=993588.49..995247.93 rows=13100 width=16) (actual time=16257.435..16263.107 rows=39 loops=1)'
' Workers Planned: 2'
' Workers Launched: 2'
' -> Partial GroupAggregate (cost=992588.46..992735.84 rows=6550 width=16) (actual time=16246.191..16251.348 rows=13 loops=3)'
' Group Key: (date_trunc('month'::text, to_timestamp(order_1.order_time)))'
' -> Sort (cost=992588.46..992604.84 rows=6550 width=14) (actual time=16245.767..16248.316 rows=3715 loops=3)'
' Sort Key: (date_trunc('month'::text, to_timestamp(order_1.order_time)))'
' Sort Method: quicksort Memory: 274kB'
' -> Parallel Seq Scan on "order" order_1 (cost=0.00..992173.28 rows=6550 width=14) (actual time=4.162..16230.174 rows=3715 loops=3)'
' Filter: ((order_time >= '1483203600'::double precision) AND (service_id = ANY ('{SGN-BIKE,SGN-POOL}'::text[])) AND (rating_by_user < 5) AND (rating_by_user > 0))'
' Rows Removed by Filter: 1801667'
' -> Merge Join (cost=60414.85..1271289.99 rows=40359886 width=64) (actual time=33586.471..33586.503 rows=4 loops=1)'
' Merge Cond: (b.week = s.week)'
' -> Sort (cost=1409.33..1448.61 rows=15712 width=16) (actual time=16263.259..16263.276 rows=13 loops=1)'
' Sort Key: b.week'
' Sort Method: quicksort Memory: 25kB'
' -> CTE Scan on badorder b (cost=0.00..314.24 rows=15712 width=16) (actual time=16257.737..16263.220 rows=13 loops=1)'
' -> Sort (cost=59005.52..60289.88 rows=513746 width=16) (actual time=17320.506..17320.509 rows=4 loops=1)'
' Sort Key: s.week'
' Sort Method: quicksort Memory: 25kB'
' -> CTE Scan on sumorder s (cost=0.00..10274.92 rows=513746 width=16) (actual time=16587.532..17320.352 rows=4 loops=1)'
'Planning time: 3.202 ms'
'Execution time: 33589.971 ms'
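
For reference, the plan above corresponds to a query of roughly this shape (table and column names are taken from the plan; the aggregate functions and the final select list are guesses):

WITH sumorder AS (
    -- orders for the two services since order_time 1509469200 (~Nov 2017)
    SELECT date_trunc('month', to_timestamp(order_time)) AS week,
           count(*) AS total                      -- actual aggregate unknown
    FROM "order"
    WHERE service_id IN ('SGN-BIKE', 'SGN-POOL')
      AND order_time >= 1509469200
    GROUP BY 1
), badorder AS (
    -- orders rated 1..4 since order_time 1483203600 (~Jan 2017)
    SELECT date_trunc('month', to_timestamp(order_time)) AS week,
           count(*) AS bad                        -- actual aggregate unknown
    FROM "order"
    WHERE service_id IN ('SGN-BIKE', 'SGN-POOL')
      AND order_time >= 1483203600
      AND rating_by_user > 0
      AND rating_by_user < 5
    GROUP BY 1
)
SELECT to_char(b.week, 'dd-mm-yyyy') AS week, s.total, b.bad
FROM badorder b
JOIN sumorder s ON s.week = b.week
ORDER BY to_char(b.week, 'dd-mm-yyyy');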
On Wed, Jan 3, 2018 at 11:13 AM, Thomas Munro <thomas.munro@enterprisedb.com> wrote:
On Wed, Jan 3, 2018 at 5:05 PM, Thuc Nguyen Canh
<thucnguyencanh@gmail.com> wrote:
> The dynamic_shared_memory_type is posix; the before and after values for
> work_mem are ~41MB and ~64MB.
> I'm using a DigitalOcean VPS with 16GB RAM and 8 cores.
> For more information, I managed to reproduce this issue on a fresh VPS after
> I changed random_page_cost from 4.0 to 1.1. That is, I reduced
> random_page_cost to 1.1 to optimize PostgreSQL performance on SSDs (DO uses
> SSDs), and got this issue.
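
For anyone reproducing this, a minimal sketch (not from the original thread) of how those settings can be inspected and toggled per session:

-- Inspect the settings mentioned above:
SHOW dynamic_shared_memory_type;   -- 'posix' here
SHOW work_mem;                     -- ~41MB before, ~64MB after
SHOW random_page_cost;

SET random_page_cost = 4.0;        -- stock default: the query completes
SET random_page_cost = 1.1;        -- SSD-tuned value that triggers the error
-- then re-run EXPLAIN (ANALYZE) on the failing query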
So you have 16GB of RAM and here we're failing to posix_fallocate()
50MB (actually we can't tell if it's the ftruncate() or
posix_fallocate() call that failed, but the latter seems more likely
since the former just creates a big hole in the underlying tmpfs
file). Can you share the query plan (EXPLAIN SELECT ...)?
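To make the distinction concrete, here is a standalone sketch (not PostgreSQL's
code; the segment name and 50MB size are only illustrative) of those two calls
on a POSIX shared memory segment:

/* Standalone illustration (not PostgreSQL source): on a POSIX shared memory
 * segment, ftruncate() only sets the size -- a hole in the underlying tmpfs
 * file -- while posix_fallocate() actually reserves the pages, so it is the
 * call that can fail when memory (or the tmpfs mount) is too small. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const off_t size = 50 * 1024 * 1024;          /* ~50MB, as in the error */
    int fd = shm_open("/demo_segment", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }

    if (ftruncate(fd, size) < 0)                  /* cheap: just sets the size */
        perror("ftruncate");

    int rc = posix_fallocate(fd, 0, size);        /* expensive: allocates pages */
    if (rc != 0)                                  /* returns an errno value */
        fprintf(stderr, "posix_fallocate: %s\n", strerror(rc));

    shm_unlink("/demo_segment");
    close(fd);
    return 0;
}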