On Fri, 2005-09-23 at 10:09 -0400, Tom Lane wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
> > If not, I would propose that when we move from qsort to tapesort mode we
> > free the larger work_mem setting (if one exists) and allocate only a
> > lower, though still optimal setting for the tapesort. That way the
> > memory can be freed for use by other users or the OS while the tapesort
> > proceeds (which is usually quite a while...).
>
> On most platforms it's quite unlikely that any memory would actually get
> released back to the OS before transaction end, because the memory
> blocks belonging to the tuplesort context will be intermixed with blocks
> belonging to other contexts. So I think this is pretty pointless.

I take it you mean pointless because of the way the memory allocation
works, rather than because giving memory back isn't worthwhile?

Surely the sort memory would be allocated in contiguous chunks? In some
cases we might be talking about more than a GB of memory, so it'd be
good to get that back ASAP. I'm speculating...
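
To make that concrete, here's a toy using plain glibc malloc (nothing to
do with the backend's memory contexts, and the ~1GB figure is just for
illustration): a single very large request is normally served by mmap()
and handed straight back to the OS on free(), whereas small interleaved
requests come off the brk heap and freeing them rarely lowers the break,
which I take to be the intermixed-blocks situation you describe.

    #include <stdlib.h>

    int
    main(void)
    {
        /* glibc serves requests above M_MMAP_THRESHOLD (~128kB by
         * default) with mmap(), so freeing them unmaps the pages
         * immediately. */
        void   *big = malloc((size_t) 1 << 30);     /* ~1GB, mmap'd */

        free(big);          /* returned to the OS straight away */

        /* Small blocks come off the brk heap; freeing 'a' gives nothing
         * back to the OS while 'b' still sits above it. */
        void   *a = malloc(64);
        void   *b = malloc(64);

        free(a);
        free(b);
        return 0;
    }
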
> (If you can't afford to have the sort using all of sort_mem, you've set
> sort_mem too large, anyway.)

Sort takes care to allocate only what it needs as it starts up. All I'm
suggesting is to take the same care when the sort mode changes. If the
above argument held water then we would just allocate all the memory in
one lump at startup, "because we can afford to", so I don't buy that.
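
Roughly what I have in mind, with made-up names throughout (SortState,
switch_to_tape_sort and the 8MB figure are all invented) since I haven't
checked how intrusive this would be in tuplesort.c:

    #include <stdlib.h>

    typedef struct SortState
    {
        void      **memtuples;      /* array used by the in-memory qsort */
        long        allowedMemKB;   /* current budget, initially work_mem */
    } SortState;

    #define TAPE_SORT_MEM_KB    (8 * 1024)  /* assumed lower, still-optimal setting */

    static void
    switch_to_tape_sort(SortState *state)
    {
        /* Hand the big qsort array back before building the initial runs,
         * instead of holding all of work_mem for the whole external sort. */
        free(state->memtuples);
        state->memtuples = NULL;

        /* Carry on under the smaller budget. */
        if (state->allowedMemKB > TAPE_SORT_MEM_KB)
            state->allowedMemKB = TAPE_SORT_MEM_KB;

        /* ... write runs and merge within allowedMemKB ... */
    }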

Since we know the predicted size of the sort set prior to starting the
sort node, could we not use that information to allocate memory
appropriately? i.e. if the sort set is predicted to be more than twice
the size of work_mem, then just move straight to the external sort
algorithm and set work_mem down to the lower limit?
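
Something along these lines, again with invented names; the 2x threshold
and the 8MB floor are only placeholders:

    #include <stdbool.h>

    typedef struct SortChoice
    {
        double      est_input_kb;   /* planner's estimate of the input size */
        long        work_mem_kb;    /* configured work_mem */
        bool        use_external;   /* chosen strategy */
        long        budget_kb;      /* memory we actually ask for */
    } SortChoice;

    static void
    choose_sort_strategy(SortChoice *c)
    {
        if (c->est_input_kb > 2.0 * c->work_mem_kb)
        {
            /* Predicted to spill anyway: skip the in-memory attempt and
             * claim only the lower limit the external sort needs. */
            c->use_external = true;
            c->budget_kb = (c->work_mem_kb < 8 * 1024) ? c->work_mem_kb : 8 * 1024;
        }
        else
        {
            /* Might fit: try the in-memory qsort under full work_mem. */
            c->use_external = false;
            c->budget_kb = c->work_mem_kb;
        }
    }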

That is, unless somebody has evidence that a very large work_mem has any
performance benefit for external sorting?

Best Regards, Simon Riggs