Thread: stone-age maintenance procedures ;-)
Hi,

lately I've been accused of "stone-age maintenance procedures". Hopefully I
will find some help here to develop from the stone age to the Roman Empire. ;-)

To optimize performance and to minimize maintenance needs I would like to ask
you guys about some values in my postgresql.conf.

select version();
 PostgreSQL 7.3.4 on i386-redhat-linux-gnu, compiled by GCC i386-redhat-linux-gcc (GCC) 3.2.2 20030222 (Red Hat Linux 3.2.2-5)
(1 row)

max_connections = 32
superuser_reserved_connections = 2
shared_buffers = 30000
max_fsm_relations = 10000
max_fsm_pages = 100000
max_locks_per_transaction = 128
wal_buffers = 32
sort_mem = 1024
vacuum_mem = 8192
checkpoint_segments = 16       # in logfile segments, min 1, 16MB each
fsync = false
open_datasync
effective_cache_size = 20000   # typically 8KB each

The total database size on disk is ~6GB. Some tables get changed *very*
frequently. A nightly "vacuum full analyze" frees about 250,000 rows on each
of three tables.

And yes, I will change to 7.4 this week.

TIA

Ulrich
--
------------------------------------------------------------
Relevant Traffic AB, Riddargatan 10, 11435 Stockholm, Sweden
Tel. +46-8-6789750  http://www.relevanttraffic.se
A long time ago, in a galaxy far, far away, ulrich.wisser@relevanttraffic.se (Ulrich Wisser) wrote:
> select version();
>
> PostgreSQL 7.3.4 on i386-redhat-linux-gnu, compiled by GCC
> i386-redhat-linux-gcc (GCC) 3.2.2 20030222 (Red Hat Linux 3.2.2-5)
> (1 row)

Happily, that's not _scary_ obsolete. There are still a few 7.1 instances
around, which is "scary."

> shared_buffers = 30000
> effective_cache_size = 20000 # typically 8KB each
> fsync = false

These three look interesting.

1. By turning fsync off, you're leaving yourself vulnerable to any kind of
   hardware problem that stops the machine _destroying_ the database.

2. There is little evidence that having shared_buffers higher than 10000 is
   ever of benefit. You might want to drop that.

3. Are you sure that you have _less_ usable cache than you have shared
   buffers? That would be surprising, and having more is quite likely to
   affect some query plans.

> Some tables get changed *very* frequently. A nightly "vacuum full
> analyze" frees about 250,000 rows on each of three tables.

If you have the time to schedule the outage, I guess it's nice to be able to
do this. Can you give some more stats on those three tables?

In one of the applications I support, we have a "customer balance" table that
gets lots of dead tuples, but we keep it in shape by vacuuming it every five
minutes. That runs lickety-split and keeps the dead/wasted tuples from
piling up.

There is definite merit to vacuuming important tables _very frequently_ to
prevent growth, rather than relying on VACUUM FULL. With the changes between
7.2 and 7.4, many of the reasons to need VACUUM FULL or REINDEX have gone
away. For the apps I support, we used to have to take the system down about
every two months to REINDEX/VACUUM FULL a bunch of tables; that's no longer
necessary, and I'd expect 8.0 to be a little better still.
--
output = ("cbbrowne" "@" "ntlug.org")
http://www.ntlug.org/~cbbrowne/linux.html
Trying to be happy is like trying to build a machine for which the only
specification is that it should run noiselessly.
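(A minimal sketch of the "vacuum the hot tables very frequently" approach
described above; the database and table names here are hypothetical. A plain,
non-FULL VACUUM takes a much lighter lock than VACUUM FULL, so it can be
driven from cron while the application keeps running:

    # reclaim dead tuples in one frequently-updated table every five minutes
    */5 * * * *   psql -d mydb -c "VACUUM ANALYZE customer_balance"

    # VACUUM cannot run inside a transaction block, so issue one statement
    # per psql call rather than chaining several with semicolons
    */5 * * * *   psql -d mydb -c "VACUUM ANALYZE hot_table_2"

How often to run it depends on the actual update rate; the goal is simply to
reclaim dead tuples faster than they accumulate.)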
Ulrich Wisser <ulrich.wisser@relevanttraffic.se> writes:
> max_fsm_relations = 10000
> max_fsm_pages = 100000

> The total database size on disk is ~6GB.

Most likely, you need larger max_fsm_pages. 6GB would work out to about 750K
pages (of 8K each). With max_fsm_pages of 100K you are saying that you don't
expect more than one page in eight to have interesting amounts of free space.
That would be plenty for a low-turnover database ... but ...

> Some tables get changed *very* frequently. A nightly "vacuum full
> analyze" frees about 250,000 rows on each of three tables.

... that does not sound like a low-turnover database.

> And yes I will change to 7.4 this week.

Once you are on 7.4, you can do a database-wide VACUUM VERBOSE to get some
stats about how loaded or overloaded the FSM is. Look at the last few lines
of the (very voluminous) output. If you have several active databases, do a
database-wide VACUUM in each of them and then look at the VERBOSE result for
the last. You want "total pages needed" to be comfortably less than
max_fsm_pages.

			regards, tom lane
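(A sketch of that check, once the server is on 7.4; "mydb" is a placeholder
database name. VACUUM VERBOSE reports its INFO lines on psql's stderr, so
redirect it before trimming to the summary at the end:

    psql -d mydb -c "VACUUM VERBOSE" 2>&1 | tail -n 10

Compare the "total pages needed" figure in those last lines against
max_fsm_pages, and the relation count against max_fsm_relations. If the need
exceeds the allocation, raise the settings in postgresql.conf; both take
effect only after a postmaster restart.)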