On Thu, 2008-02-07 at 23:59 +0100, Martijn van Oosterhout wrote:
> On Thu, Feb 07, 2008 at 08:22:42PM +0100, Dawid Kuroczko wrote:
> > Noooow, I know work_mem is not "total per process limit", but
> > rather per sort/hash/etc operation. I know the scheme is a bit
> > sketchy, but I think this would allow more memory-greedy
> > operations to use memory, while taking into consideration that
> > they are not the only ones out there. And that these settings
> > would be more like hints than the actual limits.
>
> Given that we don't even control memory usage within a single process
> that accuratly, it seems a bit difficult to do it across the board. You
> just don't know when you start a query how much memory you're going to
> use...
I know of systems that do manage memory well, so I have a different
perspective. It is a problem and we should look for solutions; there are
always many non-solutions out there.
We could, for example, allocate large query workspace out of a shared
memory pool, returning it to the pool when we have finished with it.
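
Just to make the shape of the idea concrete, here is a rough sketch. This
is not actual PostgreSQL code: the names (workspace_acquire/
workspace_release), the slot count/size and the POSIX shared-memory setup
are illustrative assumptions. A fixed set of workspace slots lives in
shared memory, guarded by a process-shared lock; a backend takes a slot
for the duration of a sort/hash step and hands it back afterwards.

#define _GNU_SOURCE
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>
#include <sys/mman.h>

#define POOL_SLOTS  16
#define SLOT_SIZE   (8 * 1024 * 1024)      /* 8 MB of workspace per slot */

typedef struct WorkspacePool
{
    pthread_mutex_t lock;                  /* process-shared mutex */
    bool            in_use[POOL_SLOTS];    /* slot allocation map */
    char            slots[POOL_SLOTS][SLOT_SIZE];
} WorkspacePool;

/* Map the pool as shared anonymous memory, before backends are forked. */
static WorkspacePool *
workspace_pool_create(void)
{
    pthread_mutexattr_t attr;
    WorkspacePool *pool = mmap(NULL, sizeof(WorkspacePool),
                               PROT_READ | PROT_WRITE,
                               MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    if (pool == MAP_FAILED)
        return NULL;

    memset(pool->in_use, 0, sizeof(pool->in_use));
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&pool->lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return pool;
}

/* Grab one slot, or NULL if the pool is exhausted. */
static void *
workspace_acquire(WorkspacePool *pool)
{
    void *result = NULL;

    pthread_mutex_lock(&pool->lock);
    for (int i = 0; i < POOL_SLOTS; i++)
    {
        if (!pool->in_use[i])
        {
            pool->in_use[i] = true;
            result = pool->slots[i];
            break;
        }
    }
    pthread_mutex_unlock(&pool->lock);
    return result;
}

/* Hand the workspace back once the sort/hash has finished with it. */
static void
workspace_release(WorkspacePool *pool, void *ws)
{
    pthread_mutex_lock(&pool->lock);
    for (int i = 0; i < POOL_SLOTS; i++)
    {
        if (pool->slots[i] == ws)
        {
            pool->in_use[i] = false;
            break;
        }
    }
    pthread_mutex_unlock(&pool->lock);
}

A backend that finds the pool exhausted would fall back to today's
behaviour (private work_mem allocation, or spilling to disk), so the pool
acts as an extra, globally accounted tier rather than a hard limit.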
-- 
  Simon Riggs
  2ndQuadrant   http://www.2ndQuadrant.com