On Tue, Jun 17, 2003 at 05:38:36PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <jim@nasby.net> writes:
> > Of course I wasn't planning on sucking down a bunch of memory and
> > holding on to it. :)
>
> Sure. But when you're done with the big sort, just start a fresh
> session. I don't see that this is worth agonizing over.
In this case I could do that, but that's not always possible. It would
certainly wreak havoc with connection pooling, for example.
> > If sort_mem is over X size, then use only Y for pre-buffering (How much
> > does a large sort_mem help if you have to spill to disk?)
>
> It still helps quite a lot, because the average initial run length is
> (if I recall Knuth correctly) twice the working buffer size. I can't
> see a reason for cutting back usage once you've been forced to start
> spilling.
The only reason I can see would be double/triple buffering: once you
spill, the kernel's cache ends up holding the same tape data on top of
our own sort memory. If having the memory around helps the algorithm
then it should be used, at least up to the point of diminishing returns.
> The bigger problem with your discussion is the assumption that we can
> find out "if the OS is running low on free physical memory". That seems
> (a) unportable and (b) a moving target.
Well, there are other ways to do what I'm thinking of that don't rely on
getting a free memory number from the OS. For example, there could be a
'total_sort_mem' parameter that specifies the total amount of memory
that can be used for all sorts on the entire machine.
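
Roughly what I have in mind, as a single-process sketch (total_sort_mem,
reserve_sort_mem and friends are made-up names, not existing GUCs, and a
real implementation would have to keep the counter in shared memory
under a lock):

#include <stddef.h>

/* Hypothetical machine-wide budget for all sorts, plus a per-sort
   floor so nobody gets starved down to nothing. */
static size_t total_sort_mem  = 256UL * 1024 * 1024;
static size_t min_sort_mem    = 1UL * 1024 * 1024;
static size_t sort_mem_in_use = 0;

/* Grant a sort as much of its request as the global budget allows,
   but never less than the floor. */
static size_t reserve_sort_mem(size_t wanted)
{
    size_t avail = total_sort_mem > sort_mem_in_use
                 ? total_sort_mem - sort_mem_in_use : 0;
    size_t grant = wanted < avail ? wanted : avail;

    if (grant < min_sort_mem)
        grant = min_sort_mem;
    sort_mem_in_use += grant;
    return grant;
}

static void release_sort_mem(size_t grant)
{
    sort_mem_in_use -= grant;
}

A sort would ask for sort_mem bytes up front, get back whatever the
machine-wide budget can spare, and give it back when it finishes.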
--
Jim C. Nasby (aka Decibel!) jim@nasby.net
Member: Triangle Fraternity, Sports Car Club of America
Give your computer some brain candy! www.distributed.net Team #1828
Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"