Re: [PATCHES] Automatically setting work_mem - Mailing list pgsql-hackers

From: Simon Riggs
Subject: Re: [PATCHES] Automatically setting work_mem
Msg-id: 1143289440.24487.1038.camel@localhost.localdomain
In response to: Re: [PATCHES] Automatically setting work_mem  (Simon Riggs <simon@2ndquadrant.com>)
Responses: Re: [PATCHES] Automatically setting work_mem
List: pgsql-hackers
On Wed, 2006-03-22 at 10:03 +0000, Simon Riggs wrote:

> Recent test results show that with a 512MB test sort we can reclaim
> 97% of memory during final merge with only a noise level (+2%)
> increase in overall elapsed time. (That's just an example, your mileage
> may vary). So a large query would use and keep about 536MB memory
> rather than 1536MB.

Large performance test output, credit to Ayush Parashar, Greenplum.

We test a very common case for large sorts with high work_mem: high
work_mem significantly reduces the number of runs required, while also
significantly increasing MaxTapes, so we will frequently have
Nruns << MaxTapes. This patch seeks to optimise the final merge (only)
for that case.
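
To make the shape of the change easier to see, here is a tiny standalone
sketch of the "give most of the slot array back before the final merge"
idea. It is not the patch itself: the SlotArray struct, the helper name
and the slot counts are invented for illustration (the counts are copied
from the trace further down).

/*
 * Toy illustration only, not tuplesort.c: once run building is done and
 * only the final merge remains, keep just enough slots to feed the
 * merge's per-tape read buffers and give the rest back, instead of
 * holding the full work_mem allocation until the sort ends.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct
{
    void  **slots;      /* pointers to tuples still held in memory */
    size_t  nslots;     /* currently allocated slots */
} SlotArray;

static void
shrink_for_final_merge(SlotArray *a, int ntapes, size_t slots_per_tape)
{
    size_t  keep = (size_t) ntapes * slots_per_tape;
    void  **tmp;

    if (keep >= a->nslots)
        return;                 /* nothing to give back */

    tmp = realloc(a->slots, keep * sizeof(void *));
    if (tmp != NULL)
    {
        a->slots = tmp;
        a->nslots = keep;
    }
}

int
main(void)
{
    SlotArray   a;

    a.nslots = 4194304;         /* as in the "with patch" trace below */
    a.slots = malloc(a.nslots * sizeof(void *));

    shrink_for_final_merge(&a, 8, 146686 / 8);
    printf("kept %zu slots (%.1f%% of the original array)\n",
           a.nslots, 100.0 * a.nslots / 4194304);

    free(a.slots);
    return 0;
}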

              elapsed    final merge    CPU for final merge
with patch    385 s      100.65 s       5.48s/71.05u s
w/o patch     377 s       84.73 s       4.79s/72.32u s

So looking at just the final merge in isolation we have a 19% increase
in elapsed time from a 97% reduction in memory usage (based upon the
assumption that reducing available slots by 97% will lead to an overall
97% reduction in memory usage from slots+tuples). This uses an earlier
result that the optimal merge buffer size for the final merge is 8 times
larger than the overall optimal merge buffer size of 32 blocks; altering
this ratio would bring down elapsed time at the cost of increasing
memory. Using too much memory could also impact overall elapsed time
when we have concurrent users, so the question is: should we optimise
resources for the multi-user case or for the single-user case? Where is
the right balance point?
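
As a back-of-the-envelope check on what that ratio costs, here is a
small sketch of the buffer memory an 8-way final merge would keep under
those numbers. The 8x ratio and the 32-block base figure are the ones
quoted above; BLCKSZ = 8192 and the helper name are assumptions made for
illustration, not taken from the patch.

#include <stdio.h>

#define BLCKSZ 8192             /* assumed 8 KB block size */

/* hypothetical helper: buffer memory kept for an N-way final merge */
static long
final_merge_buffer_bytes(int ntapes, int base_blocks, int ratio)
{
    return (long) ntapes * base_blocks * ratio * BLCKSZ;
}

int
main(void)
{
    /* 8 runs, 32-block base merge buffer, 8x ratio for the final merge */
    long    bytes = final_merge_buffer_bytes(8, 32, 8);

    printf("final merge keeps about %ld MB of buffer space\n",
           bytes / (1024 * 1024));
    return 0;
}

With these assumptions that comes out around 16 MB, i.e. the same
ballpark as the ~15 MB retained-memory figure used in the resource
calculation below.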

Resource usage: (memory in use) multiplied by (time in use)
with patch:    147,000 MB.secs  (512 MB for 285 s, then 15 MB for 100 s)
w/o patch:     189,000 MB.secs  (512 MB for 377 s)
so overall resource consumption is reduced to 77% of current usage, or,
the other way around, 45% additional users on a throughput basis.
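
Spelling out the with-patch figure from the bracketed numbers above
(285 s of run building at peak memory, then ~100 s of final merge at the
shrunken size):

    512 MB * 285 s  +  15 MB * 100 s  =  145,920 + 1,500  ~=  147,000 MB.secs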

The increase in final merge time is likely due to increased I/O. If this
final merge were feeding other nodes in a complex query we might not
consume the tuples at maximum speed, so the additional time could easily
be hidden by other work.

The non-final-merge parts of the two runs were within 3% of each other;
the patch doesn't touch that aspect at all, so from that we can say the
test results are a reasonably useful comparison.

- - - -

With patch:

LOG:  switching to external sort with 1831 tapes: CPU 2.86s/1.96u sec elapsed 7.58 sec
LOG:  finished writing run 1 to tape 0: CPU 7.36s/27.67u sec elapsed 42.05 sec
LOG:  finished writing run 2 to tape 1: CPU 12.55s/56.85u sec elapsed 79.78 sec
LOG:  finished writing run 3 to tape 2: CPU 17.88s/86.42u sec elapsed 120.94 sec
LOG:  finished writing run 4 to tape 3: CPU 23.06s/116.46u sec elapsed 159.06 sec
LOG:  finished writing run 5 to tape 4: CPU 28.57s/146.25u sec elapsed 201.59 sec
LOG:  finished writing run 6 to tape 5: CPU 33.76s/176.14u sec elapsed 239.87 sec
LOG:  performsort starting: CPU 38.13s/200.71u sec elapsed 272.83 sec
LOG:  finished writing run 7 to tape 6: CPU 38.23s/204.51u sec elapsed 276.76 sec
LOG:  finished writing final run 8 to tape 7: CPU 38.50s/211.93u sec elapsed 284.51 sec
LOG:  shrinking resources to 3% (from 4194304 to 146686 slots): CPU 38.52s/211.93u sec elapsed 284.69 sec
LOG:  performsort done (except 8-way final merge): CPU 38.53s/212.00u sec elapsed 284.85 sec
LOG:  final merge: tape 7 exhausted: CPU 42.70s/270.65u sec elapsed 368.06 sec
LOG:  reassigning resources; each tape gets: +2619 slots, +6770980 mem: CPU 42.70s/270.70u sec elapsed 368.12 sec
LOG:  final merge: tape 2 exhausted: CPU 43.68s/283.05u sec elapsed 385.00 sec
LOG:  final merge: tape 3 exhausted: CPU 43.68s/283.05u sec elapsed 385.00 sec
LOG:  final merge: tape 5 exhausted: CPU 43.68s/283.05u sec elapsed 385.00 sec
LOG:  final merge: tape 0 exhausted: CPU 43.68s/283.05u sec elapsed 385.00 sec
LOG:  final merge: tape 6 exhausted: CPU 43.68s/283.05u sec elapsed 385.00 sec
LOG:  final merge: tape 1 exhausted: CPU 43.68s/283.05u sec elapsed 385.00 sec
LOG:  final merge: tape 4 exhausted: CPU 43.68s/283.05u sec elapsed 385.00 sec
LOG:  external sort ended, 293182 disk blocks used: CPU 44.01s/283.05u sec elapsed 385.50 sec

Without patch:

LOG:  switching to external sort with 1873 tapes: CPU 2.72s/2.03u sec elapsed 7.07 sec
LOG:  finished writing run 1 to tape 0: CPU 7.08s/28.42u sec elapsed 39.96 sec
LOG:  finished writing run 2 to tape 1: CPU 12.10s/58.47u sec elapsed 79.37 sec
LOG:  finished writing run 3 to tape 2: CPU 17.35s/89.39u sec elapsed 120.18 sec
LOG:  finished writing run 4 to tape 3: CPU 22.50s/120.55u sec elapsed 161.24 sec
LOG:  finished writing run 5 to tape 4: CPU 27.84s/151.41u sec elapsed 202.11 sec
LOG:  finished writing run 6 to tape 5: CPU 33.15s/182.57u sec elapsed 243.34 sec
LOG:  performsort starting: CPU 37.53s/208.36u sec elapsed 277.51 sec
LOG:  finished writing run 7 to tape 6: CPU 37.63s/212.03u sec elapsed 281.33 sec
LOG:  finished writing final run 8 to tape 7: CPU 37.87s/219.39u sec elapsed 288.97 sec
LOG:  performsort done (except 8-way final merge): CPU 38.23s/221.33u sec elapsed 292.27 sec
LOG:  external sort ended, 293182 disk blocks used: CPU 43.02s/293.65u sec elapsed 377.00 sec


