Thread: Docs about buffers and sortmem setting
Hi! I've seen on this list some calculations concerning buffers and sort_mem settings. Could you tell me if there is a document about such a calculation? I'd like to use nearly all of my RAM for postgres. Richard. -- "First they ignore you. Then they laugh at you. Then they fight you. Then you win." - Mohandas Gandhi.
On Thu, Nov 14, 2002 at 10:43:44AM +0100, Ryszard Lach wrote: > Hi! > > I've seen on this list some calculations concerning buffers and sort_mem > settings. Could you tell me if there is a document about such a > calculation? I'd like to use nearly all of my RAM for postgres. That's probably not what you want: if you try, you'll likely cause swapping. The general rule of thumb is to try about 25% of physical memory for your buffer size. Some people like to increase from there, until swapping starts, and then back off; but there are arguments against doing this, given the efficiency of modern filesystem buffering. In practice, a setting in the tens of thousands of buffers is probably adequate. We have actually discovered long-term negative performance effects when the buffers are set too large. A -- ---- Andrew Sullivan 204-4141 Yonge Street Liberty RMS Toronto, Ontario Canada <andrew@libertyrms.info> M2P 2A8 +1 416 646 3304 x110
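As a worked example of the 25% rule (a sketch only: the numbers are illustrative, and it assumes the stock 8 kB block size and that shared_buffers is specified as a count of 8 kB buffers, so check your own build before copying anything):

    # Rough shared_buffers sizing for a machine with 2 GB of RAM
    RAM_MB=2048
    TARGET_MB=$((RAM_MB / 4))           # ~25% of physical memory = 512 MB
    BUFFERS=$((TARGET_MB * 1024 / 8))   # buffers are 8 kB each
    echo "shared_buffers = $BUFFERS"    # prints: shared_buffers = 65536

The resulting figure (65536 buffers, i.e. 512 MB) lands squarely in the "tens of thousands" range mentioned above.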
> The general rule of thumb is to try about 25% of physical memory for > your buffer size. Some people like to increase from there, until > swapping starts, and then back off; but there are arguments against > doing this, given the efficiency of modern filesystem buffering. How about 32-bit Linux machines with more than 1 GB RAM? We have a 2 GB RAM machine running, and I gave 800 MB to postgres shared buffers. AFAIK Linux user space can handle only 1 GB and the rest is for kernel buffer and cache. Regards, Bjoern
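One practical wrinkle with a buffer setting that large: the shared-memory segment also has to fit under the kernel's System V shared-memory ceiling, or the postmaster will refuse to start. A hedged illustration for Linux (the /proc path is the standard sysctl interface; adjust for your system, and note the echo requires root):

    # Check the current SysV shared-memory limit, in bytes
    cat /proc/sys/kernel/shmmax
    # Raise it to 1 GB so an ~800 MB segment can be allocated
    echo 1073741824 > /proc/sys/kernel/shmmax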
On Thu, Nov 14, 2002 at 01:19:57PM +0100, Bjoern Metzdorf wrote: > How about 32-bit Linux machines with more than 1 GB RAM? We have a 2 GB RAM > machine running, and I gave 800 MB to postgres shared buffers. AFAIK Linux > user space can handle only 1 GB and the rest is for kernel buffer and > cache. How big is your data set? If it's smaller than 800 MB, you're wasting the buffers anyway. The thing is that the OS will buffer what you read anyway, so depending on how large your buffers are and how much memory your filesystem is able to use for its buffers, you may actually be storing everything in shared memory twice: once in the shared area, and again in the filesystem buffer. On our 16 G Solaris (Ultra SPARC) boxes, we found that using a gig for shared buffers was actually worse than a slightly lower amount, under Sol 7. The filesystem buffering is too good, so even though the system call to the "filesystem" (which turns out to be just to memory, because of the buffer) has a measurable cost, the implementation of the shared-buffer handling is bad enough that it costs _more_ to manage large buffers. Smaller buffers seem not to face the difficulty. I haven't a clue why. A
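For the "how big is your data set" question, one coarse way to estimate it is the on-disk size of the cluster (a rough measure only: it includes indexes and dead tuples, and assumes $PGDATA points at your data directory):

    # Approximate size of all databases in the cluster
    du -sh $PGDATA/base
    # Or broken down per database (one subdirectory per database OID)
    du -sh $PGDATA/base/*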
Andrew Sullivan <andrew@libertyrms.info> writes: > On our 16 G Solaris (Ultra SPARC) boxes, we found that using a gig > for shared buffers was actually worse than a slightly lower amount, > under Sol 7. The filesystem buffering is too good, so even though > the system call to the "filesystem" (which turns out to be just to > memory, because of the buffer) has a measurable cost, the > implementation of the shared-buffer handling is bad enough that it > costs _more_ to manage large buffers. Smaller buffers seem not to > face the difficulty. I haven't a clue why. Well, part of the reason is that a lot of the data in shared_buffers has to be effectively duplicated in the kernel's I/O caches, because it's frequently accessed. So while I'd think the cost of fetching a page from the buffer pool is lower than from the OS' cache, increasing the size of the Postgres buffer pool effectively decreases the total amount of RAM available for caching. Cheers, Neil -- Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC
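To make that accounting concrete (illustrative numbers only, assuming the worst case where the kernel's cache holds the same pages as the shared buffers):

    # 2 GB machine with 800 MB of shared buffers:
    #   800 MB in shared_buffers, plus up to 800 MB of the same
    #   pages in the kernel cache, leaves little for anything else
    echo $((2048 - 800 - 800))   # prints 448 (MB left, worst case)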
On Thu, Nov 14, 2002 at 12:20:49PM -0500, Neil Conway wrote: > Well, part of the reason is that a lot of the data in shared_buffers > has to be effectively duplicated in the kernel's I/O caches, because > it's frequently accessed. So while I'd think the cost of fetching a > page from the buffer pool is lower than from the OS' cache, increasing > the size of the Postgres buffer pool effectively decreases the total > amount of RAM available for caching. Well, yes, but on a machine with 16 G and a data set < 16 G, that's not the issue. A 1G shared buffer is too big anyway, according to our experience: it's fast at the beginning, but performance degrades. I don't know why. A
On Thu, 14 Nov 2002, Bjoern Metzdorf wrote: > > The general rule of thumb is to try about 25% of physical memory for > > your buffer size. Some people like to increase from there, until > > swapping starts, and then back off; but there are arguments against > > doing this, given the efficiency of modern filesystem buffering. > > How about 32-bit Linux machines with more than 1 GB RAM? We have a 2 GB RAM > machine running, and I gave 800 MB to postgres shared buffers. AFAIK Linux > user space can handle only 1 GB and the rest is for kernel buffer and > cache. Actually, I think the limit is 2 or 3 gig, depending on how your kernel was compiled, but testing by folks on the list seems to show a maximum of just under 2 gig. I'm a little fuzzy on it; you might want to search the archives. I'm not sure whether that was a Linux or a PostgreSQL problem, and it was reported several months back. Memory slowly fading.... :-)
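For anyone wanting to check what their own kernel was built for, the HIGHMEM options govern how much physical RAM a 32-bit kernel can address (this assumes your distribution installs the kernel config file; its name and location vary):

    # CONFIG_NOHIGHMEM caps usable RAM around 1 GB,
    # CONFIG_HIGHMEM4G allows up to 4 GB, CONFIG_HIGHMEM64G uses PAE
    grep HIGHMEM /boot/config-$(uname -r)

The per-process user/kernel address-space split (the 2 vs. 3 gig question above) is set separately at compile time, so the archives are still the best bet there.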