Rod Taylor writes:
> > > > "scott.marlowe" <scott.marlowe@ihs.com> writes:
> > > > > any chance of having some kind of max_total_sort_mem setting to
> > > > > keep machines out of swap storms, or would that be a nightmare
> > > > > to implement?
>
> > Someone asked for this in Copenhagen, and I said we can't see how to
> > do it. The only idea I had was to give the first requestor 50% of the
> > total, then a second query 50% of the remaining memory. Is that
> > better than what we have?
>
> Let's look at it from another direction. The goal isn't to set a
> maximum memory amount, but to avoid swapping.
I very much like your high-level thinking, though on balance I
personally do want to control the maximum memory allocation. It seems to
me that, in general, there are just too many possibilities for what you
might want to mix on the same system. Perhaps we should restate the goal
slightly as "maximising performance, whilst minimising the RISK of
swapping".
An alternative suggestion might be a max_instance_mem setting, from
which all other memory allocations by that PostgreSQL server would be
derived. That way, however the "black box" operates, you have a single,
well-defined control point that allows you to be as generous as you see
fit, but no further. [There are probably a few views on the
instance/database etc. thing... I'm happy with more than one control
point - the name is less relevant.] You can always write a script to
calculate this setting as a percentage of physical memory if you want
to set it automatically.
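
As a rough sketch of such a script (max_instance_mem is of course a
hypothetical setting at this point, and the 25% fraction of physical
memory is just an example figure):

# Sketch only: max_instance_mem is a hypothetical setting, and the
# 25% fraction of physical memory is an arbitrary example.
import os

def suggest_max_instance_mem(fraction=0.25):
    # Total physical memory in bytes (POSIX systems).
    phys_bytes = os.sysconf('SC_PAGE_SIZE') * os.sysconf('SC_PHYS_PAGES')
    return int(phys_bytes * fraction) // 1024   # value in kB

if __name__ == '__main__':
    print("max_instance_mem = %dkB" % suggest_max_instance_mem())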
The suggestion of using percentages as relative rather than absolute
memory allocations has definitely been used successfully on other
software systems in the past. ...not the half-again-each-time method,
but assigning each amount of memory as a percentage of whatever is
already allocated above it. That way you can raise the limit without
changing everything else.
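
To illustrate (the setting names and percentages here are invented for
the example, not proposals): with one absolute control point, every
other setting can be expressed relative to it, so raising the single
limit rescales everything at once:

# Sketch only: setting names and percentages are hypothetical,
# illustrating allocations derived from one absolute control point.
MAX_INSTANCE_MEM_KB = 262144        # the single absolute setting (256MB)

RELATIVE_SETTINGS = {
    'shared_buffers': 0.40,         # 40% of the instance limit
    'sort_mem':       0.05,         # 5% per sort
    'vacuum_mem':     0.10,
}

for name, pct in RELATIVE_SETTINGS.items():
    # Raising MAX_INSTANCE_MEM_KB rescales every derived value at once.
    print("%s = %dkB" % (name, int(MAX_INSTANCE_MEM_KB * pct)))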
Best regards, Simon Riggs