Thread: Starting PostgreSQL 8.0.4 with more memory [FreeBSD 6.0]
Hi, I'm looking for some help with letting PostgreSQL use more memory. It fails to start with this message:

    shmat(id=65536) failed: Cannot allocate shared buffers

The maximum number of buffers I can start it with is 115200. The server has 4 GB of RAM, and I've adjusted MAXDSIZ to 2.5 GB. Here are the other kernel settings:

    kern.ipc.shmall: 7000000
    kern.ipc.shmseg: 8192
    kern.ipc.shmmni: 8291
    kern.ipc.shmmax: 2000000000
    kern.ipc.semaem: 100000
    kern.ipc.semvmx: 32767
    kern.ipc.semusz: 332
    kern.ipc.semume: 384
    kern.ipc.semopm: 300
    kern.ipc.semmsl: 300
    kern.ipc.semmnu: 384
    kern.ipc.semmns: 384
    kern.ipc.semmni: 384
    kern.ipc.semmap: 384

postgresql.conf:

    shared_buffers = 152000        # min 16, at least max_connections*2, 8KB each
    work_mem = 50000               # min 64, size in KB
    maintenance_work_mem = 40000   # min 1024, size in KB
    max_stack_depth = 6048         # min 100, size in KB
    max_fsm_pages = 2000000        # min max_fsm_relations*16, 6 bytes each
    max_fsm_relations = 28192      # min 100, ~50 bytes each

There are no limits on the pgsql user. Any help / ideas will be appreciated.

-- Vlad
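[Editorial note: a quick back-of-envelope check of the numbers above — my arithmetic, not part of the original exchange. shared_buffers is counted in 8 KB pages, so the segment requested is roughly buffers * 8192 bytes (PostgreSQL's fixed overhead is ignored here), which suggests kern.ipc.shmmax was not the binding limit:]

```shell
# Rough size of the shared memory segment the postgresql.conf above
# will request, compared against kern.ipc.shmmax from the same post.
shared_buffers=152000
shmmax=2000000000
need=$(( shared_buffers * 8192 ))
echo "requested: ~${need} bytes; kern.ipc.shmmax: ${shmmax}"
if [ "$need" -le "$shmmax" ]; then
    echo "fits within shmmax, so the limit is elsewhere (e.g. MAXDSIZ)"
fi
```

Running this prints a requested size of roughly 1.2 GB, comfortably under the 2 GB shmmax, which matches what the thread goes on to discover about MAXDSIZ.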
Vlad <marchenko@gmail.com> writes:
> I'm looking for some help with letting PostgreSQL use more memory.

8.0 can't go past 2Gb of shared memory, and there is really no reason to try, because its performance will get worse, not better, with more than about 50000 shared buffers.

8.1 will relax the 2Gb limit, but it's still far from clear that there's any point in it. The conventional wisdom is that you should leave most of memory free for kernel disk cache, not try to eat it all in shared buffers. I haven't seen any evidence that that's changed in 8.1. It might possibly make sense to use several Gb of shared buffers in a machine with 16Gb or more of RAM, but not in one with only 4Gb of RAM.

BTW, where did you get the idea that it was sensible to set work_mem higher than maintenance_work_mem? That's just nuts.

See the pgsql-performance archives for past discussions of this topic.

			regards, tom lane
Tom, I understood your point on memory usage. Out of curiosity: 115200 buffers is a little less than 1 GB (I assume 1 buffer = 8 KB), so I couldn't have gotten any closer to 2 GB anyway. Is it practical experience that more than 50000 buffers actually hurts PostgreSQL performance? Any ideas why? What about really big databases?

> BTW, where did you get the idea that it was sensible to set work_mem
> higher than maintenance_work_mem? That's just nuts.

I was just playing with different settings to see if there was one I could adjust to get it started.

> See the pgsql-performance archives for past discussions of this topic.

OK, thanks.

-- Vlad
On Sun, 2005-10-30 at 23:08 -0500, Tom Lane wrote:
> Vlad <marchenko@gmail.com> writes:
> > I'm looking for some help with letting PostgreSQL use more memory.
>
> 8.0 can't go past 2Gb of shared memory, and there is really no reason
> to try because its performance will get worse not better with more than
> about 50000 shared buffers.

Unless you turn off the bgwriter, in which case going higher can still have benefit given the right circumstances.

> 8.1 will relax the 2Gb limit, but it's still far from clear that there's
> any point in it. The conventional wisdom is that you should leave most
> of memory free for kernel disk cache, not try to eat it all in shared
> buffers. I haven't seen any evidence that that's changed in 8.1. It
> might possibly make sense to use several Gb of shared buffers in a
> machine with 16Gb or more of RAM, but not in one with only 4Gb RAM.

I'm not sure we have any good tests of that either way, do we? I'm not certain why we would trust the OS cache any more than we could trust the shared buffers. But setting it too high would probably overuse backend memory for most variable query workloads.

> BTW, where did you get the idea that it was sensible to set work_mem
> higher than maintenance_work_mem? That's just nuts.

Surely if you choose to favour, say, query sort performance over vacuum performance, that is a reasonable design choice in some specific circumstances? Not the general case, agreed. There are no assumptions in the code that work_mem is always smaller. Tasks are assigned to use maintenance_work_mem when they are considered to be "maintenance" tasks.

Best Regards, Simon Riggs
On Mon, Oct 31, 2005 at 12:16:59PM +0000, Simon Riggs wrote:
> > 8.0 can't go past 2Gb of shared memory, and there is really no reason
> > to try because its performance will get worse not better with more than
> > about 50000 shared buffers.
>
> Unless you turn off the bgwriter, in which case going higher can still
> have benefit given the right circumstances.

Is there any particular reason to turn that off? You want dirty pages written out. Doing them asynchronously beforehand means you don't have to wait for them at commit time. It also allows the OS to schedule the blocks into a better write order.

Anyway, the original poster didn't specify an architecture. If it is a 32-bit one, it is entirely possible that the memory map simply has no large contiguous space to map the shared memory.

> > 8.1 will relax the 2Gb limit, but it's still far from clear that there's
> > any point in it. The conventional wisdom is that you should leave most
> > of memory free for kernel disk cache, not try to eat it all in shared
> > buffers. I haven't seen any evidence that that's changed in 8.1. It
> > might possibly make sense to use several Gb of shared buffers in a
> > machine with 16Gb or more of RAM, but not in one with only 4Gb RAM.
>
> I'm not sure we have any good tests of that either way, do we? I'm not
> certain why we would trust OS cache any more than we could trust the
> shared buffers. But setting it too high would probably overuse backend
> memory for most variable query workloads.

Well, it comes down to a thought experiment. Any disk blocks you have in the shared buffers will also be in the system cache. If you give 4GB to shared buffers, then there will be 4GB of data in the system cache which is not directly useful. So it seems shared buffers should be large enough to hold all the info PostgreSQL needs at any particular moment; anything else is just wasteful.

Getting data out of the system cache is not terribly expensive; I timed it at 50 microseconds per page on my oldish laptop.

Secondly, you're assuming that PostgreSQL's caching is at least as efficient as the OS caching, which is more of an assertion than anything else.

Have a nice day,
-- 
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.
On Mon, 2005-10-31 at 14:14 +0100, Martijn van Oosterhout wrote:
> On Mon, Oct 31, 2005 at 12:16:59PM +0000, Simon Riggs wrote:
> > > 8.0 can't go past 2Gb of shared memory, and there is really no reason
> > > to try because its performance will get worse not better with more than
> > > about 50000 shared buffers.
> >
> > Unless you turn off the bgwriter, in which case going higher can still
> > have benefit given the right circumstances.
>
> Is there any particular reason to turn that off?

Well, yes. If things work faster without it, then off it goes - or at least the parameter settings get vastly altered.

> You want dirty pages written out. Doing them asynchronously beforehand
> means you don't have to wait for it at commit time. It also allows the
> OS to schedule the blocks into a better write order.

Only assuming you have a constant heavy write workload.

> > > 8.1 will relax the 2Gb limit, but it's still far from clear that there's
> > > any point in it. The conventional wisdom is that you should leave most
> > > of memory free for kernel disk cache, not try to eat it all in shared
> > > buffers. I haven't seen any evidence that that's changed in 8.1. It
> > > might possibly make sense to use several Gb of shared buffers in a
> > > machine with 16Gb or more of RAM, but not in one with only 4Gb RAM.
> >
> > I'm not sure we have any good tests of that either way, do we? I'm not
> > certain why we would trust OS cache any more than we could trust the
> > shared buffers. But setting it too high would probably overuse backend
> > memory for most variable query workloads.
>
> Well, it comes down to a thought experiment. Any disk blocks you have in
> the shared buffers will also be in the system cache.

Each has different and independent cache replacement...

> If you give 4GB to shared buffers, then there will be 4GB of data in the
> system cache which is not directly useful. So it seems shared buffers
> should be large enough to hold all the info PostgreSQL needs at any
> particular moment, anything else is just wasteful. Getting data out of
> the system cache is not terribly expensive, I timed it at 50 microseconds
> per page on my oldish laptop.
>
> Secondly, you're assuming that PostgreSQLs caching is at least as
> efficient as the OS caching, which is more of an assertion than
> anything else.

Do you doubt that? Why would shared_buffers be variable otherwise?

Best Regards, Simon Riggs
> Anyway, the original poster didn't specify an architecture. If it is a
> 32-bit one, it is entirely possible that the memory map simply has no
> large contiguous space to map the shared memory.

It's 32-bit. The actual problem of giving more buffers to PostgreSQL was solved with the help of the following post:

http://docs.freebsd.org/cgi/getmsg.cgi?fetch=83003+0+archive/2002/freebsd-hackers/20020804.freebsd-hackers

It looks like, despite the comment in /usr/src/sys/i386/include/vmparam.h:

    #ifndef MAXDSIZ
    #define MAXDSIZ         (512UL*1024*1024)       /* max data size */
    #endif

on FreeBSD, MAXDSIZ actually tells the kernel where to start allocating memory, not the maximum allowable size. As soon as I lowered this value from 2500UL*1024*1024 (what I set when I was setting up the server) to 1024UL*1024*1025, I was able to further increase shared buffers in postgresql.conf.

Also, while I can agree with the point that "maybe the OS file caching algorithm is more efficient than PostgreSQL's", that still doesn't give us a definitive answer, because:

1) for PostgreSQL, fetching data from OS buffers should imply some overhead compared to accessing data cached in shared buffers;

2) there is no guarantee that the OS dedicates all the rest of available RAM to file caching. In fact, if there are other processes running on the server, perhaps I want to make sure that that much memory is dedicated solely to PostgreSQL data caching, and the only way to do that is increasing shared buffers.

Later today I will do some performance testing with shared buffers set to 50k as Tom suggested, and then with, let's say, 200k, and post the results here.

-- Vlad
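[Editorial note: a related configuration sketch, not from the thread itself. On later FreeBSD releases these limits can reportedly be set as boot-time tunables and runtime sysctls rather than by editing vmparam.h and recompiling; the tunable names and values below are illustrative and should be verified against your release's loader(8) and sysctl.conf(5) documentation:]

```
# /boot/loader.conf -- boot-time tunable (illustrative value):
kern.maxdsiz="1073741824"       # 1 GB max data segment size

# /etc/sysctl.conf -- runtime sysctls, as quoted earlier in the thread:
kern.ipc.shmmax=2000000000
kern.ipc.shmall=7000000
```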
Simon Riggs <simon@2ndquadrant.com> writes:
> On Mon, 2005-10-31 at 14:14 +0100, Martijn van Oosterhout wrote:
>> On Mon, Oct 31, 2005 at 12:16:59PM +0000, Simon Riggs wrote:
>>> I'm not sure we have any good tests of that either way, do we? I'm not
>>> certain why we would trust OS cache any more than we could trust the
>>> shared buffers. But setting it too high would probably overuse backend
>>> memory for most variable query workloads.
>>
>> Well, it comes down to a thought experiment. Any disk blocks you have in
>> the shared buffers will also be in the system cache.

> Each has different and independent cache replacement...

The real point is that RAM dedicated to shared buffers can't be used for anything else [1], whereas letting the kernel manage it gives you some flexibility (for instance, to deal with transient large memory demands by individual backends, or from stuff unrelated to Postgres). A system configured to give most of RAM to shared buffers might look good on sufficiently narrow test cases, but its performance will be horribly brittle: it will go into swap thrashing on any small provocation. The extra 50usec or whatever to get stuff from a kernel disk buffer instead of our own shared buffer is a good tradeoff to get flexibility in the amount of stuff actually buffered at any one instant.

[1] Unless you are on a platform where the kernel doesn't think SysV shared memory should be locked in RAM. In that case, what you have is a large arena that is subject to being swapped out ... and a disk buffer that's been swapped to disk is demonstrably worse than no buffer at all. (Hint: count the I/Os involved, especially when the page is dirty.)

			regards, tom lane
On Mon, Oct 31, 2005 at 01:34:12PM +0000, Simon Riggs wrote:
> > Secondly, you're assuming that PostgreSQLs caching is at least as
> > efficient as the OS caching, which is more of an assertion than
> > anything else.
>
> Do you doubt that? Why would shared_buffers be variable otherwise?

Because the optimum hasn't been found and is probably different for each machine.

There have been tests that demonstrate that you can raise the buffers to a certain point which is optimal, and after that it just doesn't help [1]. They peg the optimal size at 5-10% of memory.

Also, as Tom pointed out, any memory assigned to shared buffers can't be used for sorts, temporary tables, plain old disk caching, trigger queues, or anything else that isn't shared between backends. There are far more useful uses of memory than just buffering disk blocks.

Have a nice day,

[1] http://archives.postgresql.org/pgsql-performance/2004-10/msg00110.php

-- 
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.
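[Editorial note: to put that 5-10% rule of thumb in concrete terms for the 4 GB server that started this thread — my arithmetic, not from the cited post; shared_buffers counts 8 KB pages:]

```shell
# Translate the ~5-10%-of-RAM rule of thumb into shared_buffers pages
# for a 4 GB machine (8192 bytes per buffer).
ram=$(( 4 * 1024 * 1024 * 1024 ))
low=$((  ram / 20 / 8192 ))     # 5%  of RAM
high=$(( ram / 10 / 8192 ))     # 10% of RAM
echo "shared_buffers in the ${low}..${high} range"
```

The top of the resulting range works out to roughly 52000 pages, which is close to the ~50000 figure mentioned earlier in the thread.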
Martijn van Oosterhout <kleptog@svana.org> writes:
> There have been tests that demonstrate that you can raise the buffers
> to a certain point which is optimal and after that it just doesn't
> help [1]. They peg optimal size at 5-10% of memory.
> [1] http://archives.postgresql.org/pgsql-performance/2004-10/msg00110.php

Note however that it's reasonable to think that 8.1 may do better than 8.0 did at performing well with large values of shared_buffers, primarily because we got rid of the StrategyDirtyBufferList overhead:

http://archives.postgresql.org/pgsql-performance/2004-10/msg00112.php

It'd be interesting to repeat the above-mentioned tests with 8.1.

			regards, tom lane
On Mon, Oct 31, 2005 at 09:54:39AM -0500, Tom Lane wrote:
> Note however that it's reasonable to think that 8.1 may do better than
> 8.0 did at performing well with large values of shared_buffers,
> primarily because we got rid of the StrategyDirtyBufferList overhead:
> http://archives.postgresql.org/pgsql-performance/2004-10/msg00112.php
>
> It'd be interesting to repeat the above-mentioned tests with 8.1.

Well, OSDL has run tests on PostgreSQL as recently as 20050908, but the host with the results isn't responding to me, so no idea what the tests were. Also, they use various tests involving PostgreSQL to test the scalability of the Linux kernel, so you can see how Postgres runs with various different kernel patches.

http://www.osdl.org/lab_activities/kernel_testing/stp/search.lnk/search_test_requests

Enter "postgresql" in the software field, or select one of the pgsql tests. Maybe someone else will have more luck than me getting the results...

Have a nice day,
-- 
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.
On Mon, 2005-10-31 at 15:44 +0100, Martijn van Oosterhout wrote:
> On Mon, Oct 31, 2005 at 01:34:12PM +0000, Simon Riggs wrote:
> > > Secondly, you're assuming that PostgreSQLs caching is at least as
> > > efficient as the OS caching, which is more of an assertion than
> > > anything else.
> >
> > Do you doubt that? Why would shared_buffers be variable otherwise?
>
> Because the optimal hasn't been found and is probably different for
> each machine.
>
> There have been tests that demonstrate that you can raise the buffers
> to a certain point which is optimal and after that it just doesn't
> help [1]. They peg optimal size at 5-10% of memory.

Please read the rest of that thread. Those results and their conclusions were refuted in some detail, which led to a number of optimizations in 8.0 and 8.1, mostly written by Tom.

> Also, as Tom pointed out, any memory assigned to shared buffers can't
> be used for sorts, temporary tables, plain old disk caching, trigger
> queues or anything else that isn't shared between backends. There are
> far more useful uses of memory than just buffering disk blocks.

Your point was about cache efficiency as an argument for not increasing shared_buffers. Politely, I don't accept that argument. Clearly, there are some other considerations (with which I agree completely), but those don't prevent you increasing shared_buffers; they just place limits on your overall memory budget, which could affect shared_buffers of course.

Best Regards, Simon Riggs
On Mon, 2005-10-31 at 09:35 -0500, Tom Lane wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
> > On Mon, 2005-10-31 at 14:14 +0100, Martijn van Oosterhout wrote:
> >> Well, it comes down to a thought experiment. Any disk blocks you have in
> >> the shared buffers will also be in the system cache.
> > Each has different and independent cache replacement...
>
> The real point is that RAM dedicated to shared buffers can't be used for
> anything else [1], whereas letting the kernel manage it gives you some
> flexibility (for instance, to deal with transient large memory demands
> by individual backends, or from stuff unrelated to Postgres). A system
> configured to give most of RAM to shared buffers might look good on
> sufficiently narrow test cases, but its performance will be horribly
> brittle: it will go into swap thrashing on any small provocation. The
> extra 50usec or whatever to get stuff from a kernel disk buffer instead
> of our own shared buffer is a good tradeoff to get flexibility in the
> amount of stuff actually buffered at any one instant.

Agreed. But that is an argument in favour of more easily controllable server memory management, not a definitive argument against setting shared_buffers higher.

> [1] unless you are on a platform where the kernel doesn't think SysV
> shared memory should be locked in RAM. In that case, what you have is a
> large arena that is subject to being swapped out ... and a disk buffer
> that's been swapped to disk is demonstrably worse than no buffer at all.
> (Hint: count the I/Os involved, especially when the page is dirty.)

This is a disaster for any database, not just PostgreSQL. But most other DBMSs do something about this; for example, on Linux, Solaris, HP-UX and AIX a certain orange DBMS provides additional support for making shared memory non-swappable.

Have other people used lock_sga = true in Oracle? Or do we think this is a benchmark gimmick that should never be used in production?

We would need to issue a shmctl() with SHM_LOCK, which requires enabling the CAP_IPC_LOCK capability.

Best Regards, Simon Riggs
Simon Riggs <simon@2ndquadrant.com> writes:
> On Mon, 2005-10-31 at 09:35 -0500, Tom Lane wrote:
>> The real point is that RAM dedicated to shared buffers can't be used for
>> anything else [1], whereas letting the kernel manage it gives you some
>> flexibility (for instance, to deal with transient large memory demands
>> by individual backends, or from stuff unrelated to Postgres).

> Agreed. But that is an argument in favour of more easily controllable
> server memory management, not a definitive argument against setting
> shared_buffers higher.

Well, as long as shared_buffers is a fixed parameter, it's an argument against setting shared_buffers higher ;-). But the larger point here is that Postgres does not have the knowledge needed to make the same kinds of memory tradeoffs that the kernel does. I think trying to usurp this kernel functionality would be exactly the wrong design direction for us to take.

>> [1] unless you are on a platform where the kernel doesn't think SysV
>> shared memory should be locked in RAM.

> This is a disaster for any database, not just PostgreSQL. But most other
> DBMS do something about this, for example on Linux, Solaris, HP/UX, AIX
> a certain orange DBMS provides additional support for making shared
> memory non-swappable.

Yeah, and we should do that too, on platforms where it can be done reasonably (ie, without root privs).

			regards, tom lane
On Mon, 2005-10-31 at 10:58, Simon Riggs wrote:
> On Mon, 2005-10-31 at 15:44 +0100, Martijn van Oosterhout wrote:
> > On Mon, Oct 31, 2005 at 01:34:12PM +0000, Simon Riggs wrote:
> > > Do you doubt that? Why would shared_buffers be variable otherwise?
> >
> > Because the optimal hasn't been found and is probably different for
> > each machine.
> >
> > There have been tests that demonstrate that you can raise the buffers
> > to a certain point which is optimal and after that it just doesn't
> > help [1]. They peg optimal size at 5-10% of memory.
>
> Please read the rest of that thread. Those results and their conclusions
> were refuted in some detail, which lead to a number of optimizations in
> 8.0 and 8.1, mostly written by Tom.
>
> > Also, as Tom pointed out, any memory assigned to shared buffers can't
> > be used for sorts, temporary tables, plain old disk caching, trigger
> > queues or anything else that isn't shared between backends. There are
> > far more useful uses of memory than just buffering disk blocks.
>
> Your point was about cache efficiency as an argument for not increasing
> shared_buffers. Politely, I don't accept that argument. Clearly, there
> are some other considerations (for which I agree completely) but those
> don't prevent you increasing shared_buffers, they just place limits on
> your overall memory budget which could affect shared_buffers of course.

As I understand it, when the last backend referencing a collection of data stops referencing it, the buffers holding that data are released, and if, a second later, another backend wants the data, it has to go to the kernel for it again.

Is this still the case in 8.1?
On Mon, 2005-10-31 at 14:50 -0600, Scott Marlowe wrote:
> As I understand it, when the last backend referencing a collection of
> data stops referencing it, the buffers holding that data are released,
> and if, a second later, another backend wants the data, then it has to
> go to the kernel for it again.

Unreferenced data is not immediately released to the kernel. When a backend requests a data block that is not in shared_buffers, it will select an unreferenced buffer, write it out if required (hopefully not required, thanks to the bgwriter), then overwrite that slot in the shared_buffers cache with the data block it is trying to read from "disk". All reads and writes go through the OS cache, which does pretty much the same thing but with a different algorithm, so "disk" might just mean OS cache.

There's zero *requirement* for the OS cache to be bigger than shared_buffers. Martijn and Tom discuss that there are a number of advantages to not overallocating shared_buffers, which is the reason why the usual recommendation is not to do that.

Best Regards, Simon Riggs
On Mon, Oct 31, 2005 at 02:50:31PM -0600, Scott Marlowe wrote:
> > Your point was about cache efficiency as an argument for not increasing
> > shared_buffers. Politely, I don't accept that argument. Clearly, there
> > are some other considerations (for which I agree completely) but those
> > don't prevent you increasing shared_buffers, they just place limits on
> > your overall memory budget which could affect shared_buffers of course.
>
> As I understand it, when the last backend referencing a collection of
> data stops referencing it, the buffers holding that data are released,
> and if, a second later, another backend wants the data, then it has to
> go to the kernel for it again.
>
> Is this still the case in 8.1?

Depends what you mean. What one backend uses stays in the shared buffers when it's done; it's only removed to make room for other blocks that have been requested. Whether it's still there after a second kind of depends on how much other data you read in the meantime and whether the caching algorithm decided the data was old enough that you weren't likely to need it soon.

It's kind of like the kernel cache: once you've been running for a while it's always full of blocks of data. There's no point forgetting perfectly good data. The only time you don't need to throw away blocks is if your database is smaller than your memory.

You mentioned something about those OSDL tests; where can we download the results? I just get told khack.osdl.org is unreachable...

-- 
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.
On Mon, 2005-10-31 at 15:44, Simon Riggs wrote:
> On Mon, 2005-10-31 at 14:50 -0600, Scott Marlowe wrote:
> > As I understand it, when the last backend referencing a collection of
> > data stops referencing it, the buffers holding that data are released,
> > and if, a second later, another backend wants the data, then it has to
> > go to the kernel for it again.
>
> Unreferenced data is not immediately released to the kernel. When a
> backend requests a datablock that is not in shared_buffers it will
> select an unreferenced buffer, write it if required (hopefully not
> required because of the bgwriter), then overwrite the shared_buffer
> cache with the datablock it is trying to read from "disk". All reads and
> writes go through the OS cache, which does pretty much the same thing
> but with a different algorithm. So "disk" might just mean OS cache.

Hence the reason I carefully hedged my reference as "getting it from the kernel." I wasn't really wanting to discuss how the kernel manages to make it magically appear, as it's the kernel's job to do that and keep track of it.

The point behind my post was that the kernel caches AND buffers, while PostgreSQL technically only really seems to buffer, with a little incidental caching thrown in if you catch it at the right time.

I was mainly wondering if that behaviour had changed; if, when the data are released, they are still held in shared memory until forced out by newer / more popular data, that would make the buffer pool a real cache. As long as PostgreSQL releases its hold on all those buffers when they're not needed, I would think it was a buffer, not a real cache, and it shouldn't normally be tuned as a cache.
Scott Marlowe <smarlowe@g2switchworks.com> writes:
> I was mainly wondering if that behaviour had changed, if, when the data
> are released, they are still held in shared memory until forced out by
> newer / more popular data. Which would make the buffer pool a real
> cache.

Huh? It's always done that.

			regards, tom lane
On Mon, 2005-10-31 at 16:12, Tom Lane wrote:
> Scott Marlowe <smarlowe@g2switchworks.com> writes:
> > I was mainly wondering if that behaviour had changed, if, when the data
> > are released, they are still held in shared memory until forced out by
> > newer / more popular data. Which would make the buffer pool a real
> > cache.

Oh, sorry. I was under the impression that once it wasn't needed, the buffers just dropped the data completely.

Thanks.
On Mon, 2005-10-31 at 14:48 -0800, Chris Travers wrote:
> Simon Riggs wrote:
> > Your point was about cache efficiency as an argument for not increasing
> > shared_buffers. Politely, I don't accept that argument. Clearly, there
> > are some other considerations (for which I agree completely) but those
> > don't prevent you increasing shared_buffers, they just place limits on
> > your overall memory budget which could affect shared_buffers of course.
>
> I can see some circumstances where it might make some sense to have high
> shared buffer arrangements.
>
> However, I think that Tom and others are speaking to typical cases, and
> I think you seem to be speaking to the case where you have a database
> where you have many reads and only a few writes, and where a few tables
> are far more often used than the rest. So it strikes me as an argument
> against making such the general recommendation. Of course, if your
> database benefits from turning off bgwriter and increasing shared
> buffers, you might find that useful. Just be aware that it is likely to
> be applicable only to a small subset of the PostgreSQL deployments.

This all depends upon what you see as typical. I see more than one "typical" deployment - I see three, maybe more:

- OLTP / current-state data management
- data warehouse
- log archiver

Each is fairly different in many respects, so I see few "general recommendations" that really do apply to everybody. That's why I didn't attempt to make a general recommendation myself, just pointed out that you can if you want, and there's nothing physically stopping you from setting shared_buffers high (in 8.1).

Best Regards, Simon Riggs