On Thu, 30 May 2002, Doug Fields wrote:
> I currently use 1.4 gigs for "shared mem" in my database (out of 2G) - I
> couldn't get PostgreSQL to run with more than that (it might be an OS
> limit, Linux 2.2).
Does it just fail to allocate? There may be a kernel parameter you
have to tweak.
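If the shmget() call is what's failing, the usual culprit on Linux
is the SHMMAX limit. I haven't verified this on a 2.2 kernel, but
something along these lines should show and raise it (the 1.5 GB
figure is just an example):

    # current limit, in bytes
    cat /proc/sys/kernel/shmmax
    # raise it to ~1.5 GB; put this in a boot script to make it stick
    echo 1610612736 > /proc/sys/kernel/shmmax

You may need to bump SHMALL (the total shared memory limit, counted
in pages) as well.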
I'm presuming the idea behind allocating lots of shared memory is
so that postgres can buffer data blocks from the disk. However,
since postgres uses the filesystem, the operating system will also
buffer disk data, using whatever memory is free to do so. Thus,
increasing the shared memory allocated to postgres just reduces the
amount of memory available to the OS for block buffering.
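(For a sense of scale: shared_buffers in postgresql.conf is counted
in disk blocks, 8 KB each in a default build, so a 1.4 GB allocation
like Doug's corresponds to roughly

    shared_buffers = 183500    # 183500 * 8 KB ~= 1.4 GB

leaving only ~0.6 GB of his 2 GB for the kernel's own cache, the
back ends, and everything else.)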
What is the advantage, if any, to having postgres do the buffering
in its shared memory rather than letting the OS do it?
One disadvantage I can think of: when a back end (or several back
ends) allocates a lot of memory for sorting (assuming you let them
do that), you might end up pushing that sort memory out to your
swap disk. If the OS were doing the buffer management, it could
simply cache fewer file blocks for the duration of the sort.
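To put numbers on it: sort memory is set per sort operation by
sort_mem (in kilobytes, if I remember the units right), so with a
setting like

    sort_mem = 65536    # 64 MB per sort operation

ten back ends sorting at once want ~640 MB on top of the 1.4 GB of
shared buffers; on a 2 GB box that's swap territory, whereas an
OS-managed cache would simply have shrunk to make room.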
cjs
--
Curt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light. --XTC