Simon Riggs <simon@2ndquadrant.com> writes:
> 1. Earlier we had some results that showed that the heapsorts got slower
> when work_mem was higher, and that concerns me most of all right now.
Fair enough, but that's completely independent of the merge algorithm.
(I don't think the Nyberg results necessarily apply to our situation
anyway, as we are not sorting arrays of integers, and hence the cache
effects are far weaker for us. I don't mind trying alternate sort
algorithms, but I'm not going to believe in an improvement in advance of
direct evidence in our own environment.)
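
(For what it's worth, here is a toy comparison, not anything from the
Postgres sources: it qsort()s a plain int array and then an array of
pointers to fat records carrying the same keys.  The record layout, the
array size, and the payload width are made up purely for illustration.
The pointer sort has to chase a scattered pointer on every comparison,
which is roughly why I'd expect cache effects measured on integer sorts
to be much weaker for tuple sorting.)

/* Toy benchmark (illustrative only): direct vs. pointer-based sort. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 1000000

typedef struct { int key; char payload[60]; } Rec;

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *) a, y = *(const int *) b;

    return (x > y) - (x < y);
}

static int cmp_rec(const void *a, const void *b)
{
    const Rec *x = *(const Rec *const *) a, *y = *(const Rec *const *) b;

    return (x->key > y->key) - (x->key < y->key);
}

int main(void)
{
    int *ints = malloc(N * sizeof(int));
    Rec **recs = malloc(N * sizeof(Rec *));
    clock_t t0, t1;

    for (int i = 0; i < N; i++)
    {
        ints[i] = rand();
        recs[i] = malloc(sizeof(Rec));  /* same key, but behind a pointer */
        recs[i]->key = ints[i];
    }

    t0 = clock();
    qsort(ints, N, sizeof(int), cmp_int);
    t1 = clock();
    printf("int array:     %.3f s\n", (double) (t1 - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    qsort(recs, N, sizeof(Rec *), cmp_rec);
    t1 = clock();
    printf("pointer array: %.3f s\n", (double) (t1 - t0) / CLOCKS_PER_SEC);

    return 0;
}

I'd expect the pointer version to come out noticeably slower per element
on most hardware, though how much obviously depends on the machine; and
that is exactly the kind of direct evidence I'd want to see before
buying an algorithm change.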
> 2. Improvement in the way we do overall memory allocation, so we would
> not have the problem of undersetting work_mem that we currently
> experience. If we solved this problem, we would have faster sorts in
> *all* cases, not just extremely large ones. Dynamically setting work_mem
> higher when possible would be very useful.
I think this would be extremely dangerous, as it would encourage
processes to take more than their fair share of available resources.
Also, to the extent that you believe the problem is insufficient L2
cache, it seems that increasing work_mem to many times the size of L2
will always be counterproductive.  (Certainly there is no value in
raising work_mem unless we are in a regime where doing so consistently
and significantly improves performance, and it doesn't seem we are
there yet.)
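
(Again, just a toy sketch and nothing from the Postgres tree: it builds
a binary min-heap of ints at several sizes and times
replace-the-top-and-sift-down operations, which is roughly the kind of
inner loop a heap-based sort spends its time in.  The particular sizes
and operation count are arbitrary; the point is only that once the
heap's working set is several times the L2 size, the per-operation cost
should climb, and that's the regime I'm talking about.)

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Restore the min-heap property starting from slot i. */
static void
sift_down(int *heap, size_t n, size_t i)
{
    for (;;)
    {
        size_t l = 2 * i + 1, r = l + 1, smallest = i;

        if (l < n && heap[l] < heap[smallest])
            smallest = l;
        if (r < n && heap[r] < heap[smallest])
            smallest = r;
        if (smallest == i)
            break;
        int tmp = heap[i];
        heap[i] = heap[smallest];
        heap[smallest] = tmp;
        i = smallest;
    }
}

int
main(void)
{
    /* heap sizes from well under to well over a typical L2 cache */
    size_t sizes[] = {1 << 14, 1 << 17, 1 << 20, 1 << 23};
    const long ops = 5 * 1000 * 1000;

    for (int s = 0; s < 4; s++)
    {
        size_t n = sizes[s];
        int *heap = malloc(n * sizeof(int));

        for (size_t i = 0; i < n; i++)
            heap[i] = rand();
        for (size_t i = n / 2; i-- > 0;)    /* heapify */
            sift_down(heap, n, i);

        clock_t t0 = clock();

        for (long k = 0; k < ops; k++)
        {
            heap[0] = rand();       /* replace the top element ... */
            sift_down(heap, n, 0);  /* ... and restore the heap */
        }

        clock_t t1 = clock();

        printf("heap of %8zu ints: %.1f ns/op\n", n,
               (double) (t1 - t0) / CLOCKS_PER_SEC * 1e9 / ops);
        free(heap);
    }
    return 0;
}

If the per-operation time is roughly flat until the heap outgrows L2 and
then climbs, that supports keeping sort memory near the cache size; if
it stays flat, the L2 theory is wrong and we should look elsewhere.
Either way it's cheap to measure before touching work_mem policy.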
regards, tom lane