On Wed, Oct 9, 2013 at 04:40:38PM +0200, Pavel Stehule wrote:
Effectively, if every session uses one full work_mem, you end up with total work_mem usage equal to shared_buffers.
We can try a different algorithm to scale up work_mem, but it seems wise to auto-scale it up to some extent based on shared_buffers.
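To make the "total usage equals shared_buffers" property concrete, here is a small sketch. The formula below (shared_buffers divided evenly across max_connections) is my assumption chosen to reproduce that property for illustration; it is not necessarily the exact formula in the patch.

```python
# Hypothetical auto-scaling of work_mem from shared_buffers.
# Assumption: give each session an equal share, so that if every session
# uses one full work_mem, the total equals shared_buffers.

def autoscale_work_mem_kb(shared_buffers_kb: int, max_connections: int) -> int:
    """Per-session work_mem budget, in kB, derived from shared_buffers."""
    return shared_buffers_kb // max_connections

# Example: 1 GB of shared_buffers, 128 connections
shared_buffers_kb = 1024 * 1024
work_mem_kb = autoscale_work_mem_kb(shared_buffers_kb, 128)
print(work_mem_kb)  # 8192 kB per session
print(work_mem_kb * 128 == shared_buffers_kb)  # True: worst case sums to shared_buffers
```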
In my experience the optimal value of work_mem depends on the data and the load, so I prefer to keep work_mem as an independent parameter.
But it still is an independent parameter. I am just changing the default.
The danger with work_mem especially is that setting it too high can crash Postgres or your whole system at some stage down the track, so autotuning it is kinda dangerous, much more dangerous than autotuning shared_buffers.
Is this common to see? I ask because in my experience, having 100 connections all decide to do large sorts simultaneously is going to make the server fall over, regardless of whether it tries to do them in memory (OOM) or whether it does them with tape sorts (stuck spin locks, usually).
The assumption that each connection won't use lots of work_mem is also false, I think, especially in these days of connection poolers.
I don't follow that. Why would using a connection pooler change the multiples of work_mem that a connection would use?
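One reason the per-connection assumption is shaky: work_mem is a per-operation limit, not a per-connection one, so a single query whose plan contains several sort or hash nodes can use several multiples of work_mem at once. A back-of-the-envelope sketch (the specific numbers are illustrative, not from this thread):

```python
# work_mem is a per-operation limit: each sort/hash node in a plan may
# use up to work_mem on its own. Worst-case aggregate memory estimate:

def worst_case_mb(work_mem_mb: int, connections: int, ops_per_query: int) -> int:
    """Upper bound if every connection runs a query whose plan has
    ops_per_query memory-hungry nodes, each consuming a full work_mem."""
    return work_mem_mb * connections * ops_per_query

# 100 connections, work_mem = 64 MB, 3 sort/hash nodes per plan:
print(worst_case_mb(64, 100, 3))  # 19200 MB, far beyond a modest server's RAM
```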