On Wed, Nov 11, 2015 at 6:53 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>
> I've committed most of this, except for some planner bits that I
> didn't like, and after a bunch of cleanup. Instead, I committed the
> consider-parallel-v2.patch with some additional planner bits to make
> up for the ones I removed from your patch. So, now we have parallel
> sequential scan!
Pretty cool. All I had to do was mark my slow plperl functions as
being parallel safe, and bang, parallel execution of them for seq
scans.
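For anyone who wants to reproduce the setup: this is roughly the marking I used. (Sketch only; "my_slow_func" is a stand-in for my actual plperl functions, and the PARALLEL SAFE syntax is as it exists in the current development branch.)

```sql
-- Functions default to parallel-unsafe; marking one PARALLEL SAFE
-- tells the planner it may run in a parallel worker.
ALTER FUNCTION my_slow_func(text) PARALLEL SAFE;

-- Or declare it at creation time:
CREATE OR REPLACE FUNCTION my_slow_func(t text) RETURNS text
    LANGUAGE plperl PARALLEL SAFE AS $$
        return scalar reverse $_[0];   -- placeholder body
$$;
```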
But, there does seem to be a memory leak.
The setup (warning: 20GB of data):
create table foobar as
  select md5(floor(random()*1500000)::text) as id,
         random() as volume
  from generate_series(1,200000000);
set max_parallel_degree TO 8;
explain select count(*) from foobar where volume > 0.9;
                                      QUERY PLAN
---------------------------------------------------------------------------------------
 Aggregate  (cost=2626202.44..2626202.45 rows=1 width=0)
   ->  Gather  (cost=1000.00..2576381.76 rows=19928272 width=0)
         Number of Workers: 7
         ->  Parallel Seq Scan on foobar  (cost=0.00..582554.56 rows=19928272 width=0)
               Filter: (volume > '0.9'::double precision)
Now running this query leads to an OOM condition:
explain (analyze, buffers) select count(*) from foobar where volume >0.9;
WARNING: terminating connection because of crash of another server process
Running it without the explain also causes the problem.
The memory context dump at some point before the crash looks like:
TopMemoryContext: 62496 total in 9 blocks; 16976 free (60 chunks); 45520 used
  TopTransactionContext: 8192 total in 1 blocks; 4024 free (8 chunks); 4168 used
  ExecutorState: 1795153920 total in 223 blocks; 4159872 free (880 chunks); 1790994048 used
    ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  ...other insignificant stuff...
I don't have enough RAM for each of the 7 workers to use much more than 2GB.
work_mem is 25MB; maintenance_work_mem is 64MB.
Cheers,
Jeff