Re: PATCH: decreasing memory needlessly consumed by array_agg - Mailing list pgsql-hackers

From Tomas Vondra
Subject Re: PATCH: decreasing memory needlessly consumed by array_agg
Date
Msg-id 5479FAEA.3070501@fuzzy.cz
In response to PATCH: decreasing memory needlessly consumed by array_agg  (Tomas Vondra <tv@fuzzy.cz>)
Responses Re: PATCH: decreasing memory needlessly consumed by array_agg
List pgsql-hackers
Hi,

Attached is v2 of the patch lowering array_agg memory requirements.
Hopefully it addresses the issues mentioned by TL in this thread
(not handling some of the callers appropriately, etc.).

The v2 of the patch does this:

* adds 'subcontext' flag to initArrayResult* methods

  If it's 'true' then a separate context is created for the
  ArrayBuildState instance, otherwise it's built within the parent
  context.

  Currently, only the array_agg_* functions pass 'subcontext=false', so
  that the array_agg() aggregate does not create a separate context for
  each group. All other callers use 'true' and thus keep the original
  behavior (this includes ARRAY_SUBLINK subplans and various other
  places building arrays incrementally).
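The ownership rule for the new flag can be sketched as a tiny standalone C model. Here the Arena struct stands in for PostgreSQL's MemoryContext, and the struct/function names only mirror the patched API - this is a simplified assumption, not the actual patch code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Toy stand-in for a PostgreSQL MemoryContext. */
typedef struct Arena { int dummy; } Arena;

/* Simplified ArrayBuildState: remembers which arena it allocates in
 * and whether it owns that arena privately. */
typedef struct ArrayBuildState {
    Arena *mcontext;    /* arena this builder allocates in           */
    bool   private_cxt; /* true if initArrayResult created mcontext  */
    int    nelems;      /* number of accumulated elements            */
} ArrayBuildState;

/* With subcontext=true a private arena is created per builder (the
 * old behaviour, kept by most callers); with subcontext=false the
 * builder shares the parent's arena, which is what the patched
 * array_agg_* transition functions request. */
ArrayBuildState *initArrayResult(Arena *parent, bool subcontext)
{
    ArrayBuildState *astate = malloc(sizeof *astate);
    astate->mcontext = subcontext ? calloc(1, sizeof(Arena)) : parent;
    astate->private_cxt = subcontext;
    astate->nelems = 0;
    return astate;
}
```

The point is that with 'subcontext=false' many builders share one arena, so per-group context-creation overhead disappears.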

* adds 'release' flag to makeArrayResult

  This is mostly to make it consistent with makeArrayResultArr and
  makeArrayResultAny. All current callers use 'release=true'.

  Also, it seems that using 'subcontext=false' and then 'release=true'
  would be a bug. Maybe it would be appropriate to store the
  'subcontext' value into the ArrayBuildState and then throw an error
  if makeArrayResult* is called with (release=true && subcontext=false).
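The suggested sanity check could look roughly like this (standalone sketch; the assert stands in for an ereport(ERROR), 'private_cxt' is the stored 'subcontext' value, and all names here are hypothetical rather than taken from the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Toy stand-in for a PostgreSQL MemoryContext. */
typedef struct Arena { int dummy; } Arena;

typedef struct ArrayBuildState {
    Arena *mcontext;    /* arena this builder allocates in         */
    bool   private_cxt; /* the remembered 'subcontext' flag        */
    int    nelems;      /* number of accumulated elements          */
} ArrayBuildState;

/* Finalize the array. release=true destroys the builder's context,
 * which is only sane when the builder privately owns it. */
int makeArrayResult(ArrayBuildState *astate, bool release)
{
    int result = astate->nelems;

    /* the check proposed above: releasing a shared context is a bug */
    assert(!release || astate->private_cxt);

    if (release)
        free(astate->mcontext);
    free(astate);
    return result;
}
```

With the flag stored in the state, the error check costs nothing extra and catches the (release=true && subcontext=false) misuse at the call site.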

* modifies all the places calling those functions

* decreases number of preallocated elements to 8

  The original value was 64 elements (512B of datums), the new value is
  8 elements (64B), not counting the 'nulls' array. More on this
  later ...
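To illustrate the trade-off, here is a standalone sketch of the usual doubling growth (an assumption about the growth policy; the real code regrows the datum array with repalloc()):

```c
#include <assert.h>

/* Count how many regrowths a doubling array needs to hold n elements,
 * starting from a given preallocation. With 64 preallocated slots,
 * 64 elements need no regrowth at all; starting from 8 slots, the
 * array doubles 8 -> 16 -> 32 -> 64, i.e. three repalloc() calls. */
int regrowths_needed(int prealloc, int n)
{
    int cap = prealloc;
    int count = 0;

    while (cap < n) {
        cap *= 2;   /* the real code doubles the array via repalloc() */
        count++;
    }
    return count;
}
```

So the smaller preallocation trades a handful of extra repalloc() calls on large groups for an 8x smaller footprint on the (common) small groups.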


Now, some performance measurements - attached is a simple SQL script
that executes a few GROUP BY queries with various numbers of groups and
group elements. I ran the tests with two dataset sizes:

small
=====
a) 1M groups, 1 item per group
b) 100k groups, 16 items per group
c) 100k groups, 64 items per group
d) 10k groups, 1024 items per group

large
=====
a) 10M groups, 1 item per group
b) 1M groups, 16 items per group
c) 1M groups, 64 items per group
d) 100k groups, 1024 items per group

So essentially the 'large' dataset uses 10x as many groups. The
results (averages of 5 runs, in ms) look like this:

small
=====

test | master | patched | diff
-----|--------|---------|-------
   a |   1419 |     834 | -41%
   b |    595 |     498 | -16%
   c |   2061 |    1832 | -11%
   d |   2197 |    1957 | -11%

large
=====

test | master | patched | diff
-----|--------|---------|-------
   a |    OOM |    9144 |  n/a
   b |   7366 |    6257 | -15%
   c |  29899 |   22940 | -23%
   d |  35456 |   31347 | -12%

So it seems to give a solid speedup across the whole test suite - I have
yet to find a case where it's actually slower than what we have now. The
test cases (b) and (c) were actually created with this goal in mind:
both fit within the original preallocation (64 elements), but with the
new size they require a few repalloc() calls. Even those cases are much
faster, though.

This is most likely thanks to removing the per-group
AllocSetContextCreate call and to sharing freelists across groups
(although the test cases don't seem particularly suited to
demonstrating that, as all the groups grow in parallel).

I even tried to bump the initial array size back to 64 elements, but the
performance actually decreased a bit for some reason. I have no idea why
this happens ...

The test script is attached - tweak the 'size' variable for different
dataset sizes. The (insane) work_mem sizes are used to force a hash
aggregate - clearly I don't have 1TB of RAM.

regards
Tomas
