On Fri, 31 May 2002, Tom Lane wrote:
> Curt Sampson <cjs@cynic.net> writes:
> > Next I tried to bump the sortmem up to 256 MB, half the physical RAM
> > of the machine. I found out the hard way that the back-end doing the
> > indexing will grow to something over three times the size of sortmem,
> > and proceeded (slowly) to extricate myself from swap hell.
>
> [ scratches head ... ] The sort code tries very hard to keep track
> of how much it's storing. I wonder why it's so far off? Perhaps
> failing to allow for palloc overhead? Remind me again of the exact
> datatype you were indexing, please.
It's just an int. The exact table definition is (int, date, int,
int). The last two columns hold random ints in the range 0-99999,
and they're the slow ones to index.
The backend's working set appears to be most of what it has
allocated, too. I had assumed it might be allocating memory for
two or three sorts at once, perhaps to merge the sort files.
> Looks like the 128MB case was swapping a lot (note the page-fault
> count). That probably explains the difference.
Oh, duh. I should read this stuff a bit more closely.
But I wonder why? The working set was well under the RAM size of
the machine. Perhaps it wasn't touching all the pages in its
working set often enough, and the I/O was driving the backend's
pages out of memory from time to time. It's a known problem with
NetBSD under certain circumstances.
cjs
--
Curt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light. --XTC