Thread: Some vacuum & tuning help
I've been trying to search through the archives, but it hasn't been successful.

We recently upgraded from pg 7.0.2 to 7.3.4 and things were happy. I'm trying to fine-tune things to get it running a bit better, and I'm trying to figure out how vacuum output correlates to tuning parameters.

Here's the most recent vacuum for the "active" table. It gets a few hundred updates/inserts a minute constantly throughout the day.

INFO: Pages 27781: Changed 0, Empty 0; Tup 2451648: Vac 0, Keep 0, UnUsed 1003361.
Total CPU 2.18s/0.61u sec elapsed 2.78 sec.

I see UnUsed is quite high. This morning I bumped max_fsm_pages to 500000. If I'm thinking right, you want UnUsed and max_fsm to be closish, right? (Yesterday it was down around.. oh.. 600k?)

I'm thinking vacuum fulls may be in order. Which stinks, because I was hoping to do away with the db being essentially down for 10 minutes (includes all the dbs on that machine) while it vacuums.

The upside is: it is performing great. During the vacuum analyze I do get a few multi-second pauses while something occurs. I figured it was a checkpoint, so I bumped checkpoint_timeout to 30 seconds and wal_buffers to 128. (I'm just guessing on wal_buffers.)

Machine is weenucks 2.2.17 on a dual P3 800, 2GB RAM, 18GB drive (mirrored). If you guys need other info (shared_buffers, etc.) I'll be happy to furnish it, but the issue isn't query slowness; I just want to get this thing oiled.

thanks

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/
On 5 Aug 2003 at 8:09, Jeff wrote:

> We recently upgraded from pg 7.0.2 to 7.3.4 and things were happy. I'm
> trying to fine-tune things to get it running a bit better and I'm trying
> to figure out how vacuum output correlates to tuning parameters.
>
> Here's the most recent vacuum for the "active" table. It gets a few
> hundred updates/inserts a minute constantly throughout the day.

I would suggest the autovacuum daemon, which is in CVS contrib and works for 7.3.x as well. Or schedule a vacuum analyze every 15 minutes or so.

> INFO: Pages 27781: Changed 0, Empty 0; Tup 2451648: Vac 0, Keep 0, UnUsed 1003361.
> Total CPU 2.18s/0.61u sec elapsed 2.78 sec.
>
> I see UnUsed is quite high. This morning I bumped max_fsm_pages to 500000.
> If I'm thinking right, you want UnUsed and max_fsm to be closish, right?
> (Yesterday it was down around.. oh.. 600k?)
>
> I'm thinking vacuum fulls may be in order. Which stinks, because I was
> hoping to do away with the db being essentially down for 10 minutes
> (includes all the dbs on that machine) while it vacuums.

I think vacuum full is required.

> The upside is: it is performing great. During the vacuum analyze I do get
> a few multi-second pauses while something occurs. I figured it was a
> checkpoint, so I bumped checkpoint_timeout to 30 seconds and wal_buffers
> to 128. (I'm just guessing on wal_buffers.)

If it is a couple of tables that are that heavily churned, I would suggest a pg_dump, drop table, and reload of those tables. That should take less time. Your downtime might not be 10 minutes, but more like 15, say. That's a rough estimate..

> Machine is weenucks 2.2.17 on a dual P3 800, 2GB RAM, 18GB drive (mirrored).

You mean Linux? I guess you have needed a kernel upgrade for a long time. How about 2.4.21?

> If you guys need other info (shared_buffers, etc.) I'll be happy to
> furnish it, but the issue isn't query slowness; I just want to get this
> thing oiled.

See if this helps..

Bye
Shridhar

--
QOTD: "I thought I saw a unicorn on the way over, but it was just a horse with one of the horns broken off."
On Tue, 5 Aug 2003, Shridhar Daithankar wrote:

> On 5 Aug 2003 at 8:09, Jeff wrote:
>
> I would suggest the autovacuum daemon, which is in CVS contrib and works
> for 7.3.x as well. Or schedule a vacuum analyze every 15 minutes or so.

I've just got autovacuum up, and since we have had a lot of talk about it recently I thought some feedback might be useful. It seems to work quite well, but it can be rather zealous with its analyzing for the first few hours. Currently it's analyzing static (i.e. nothing's changed) tables every 10 minutes. Vacuums seem to be about right. I think that many vacuums may be slowing down my database....

Peter Childs
On Tue, 5 Aug 2003, Shridhar Daithankar wrote:

> I would suggest the autovacuum daemon, which is in CVS contrib and works
> for 7.3.x as well. Or schedule a vacuum analyze every 15 minutes or so.

Good call. I'll give that a whirl and let you know.

> I think vacuum full is required.

D'oh. Would this be a regular thing? I suppose we could do it weekly.

As for the pg_dumping of it: I suppose it would work on this table, as it is only a couple million rows and not terribly big data-wise. The other tables in this db are rather big, and a load is not fast. (It is about 8GB.)

thanks

> You mean Linux? I guess you have needed a kernel upgrade for a long time.
> How about 2.4.21?

Yeah, Linux. We're planning on upgrading when we relocate datacenters at the end of August. This machine has actually been up for 486 days (we're hoping to reach Linux's uptime wraparound of 496 days :) and the only reason it went down then was because the power supply failed. (That can be read: pg 7.0.2 had over a year of uptime. Let's hope 7.3 works as well :)

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/
On 5 Aug 2003 at 9:18, Jeff wrote:

> As for the pg_dumping of it: I suppose it would work on this table, as it
> is only a couple million rows and not terribly big data-wise. The other
> tables in this db are rather big, and a load is not fast. (It is about 8GB.)

You need to dump only those tables which have unusually high unused stats. If that is a small table, a dump/reload of it would be far faster than vacuum. For the others, vacuum analyze should do.

> Yeah, Linux. We're planning on upgrading when we relocate datacenters at
> the end of August. This machine has actually been up for 486 days (we're
> hoping to reach Linux's uptime wraparound of 496 days :) and the only
> reason it went down then was because the power supply failed. (That can
> be read: pg 7.0.2 had over a year of uptime. Let's hope 7.3 works as well :)

Good to know that. AFAIK, the 496-day wraparound is fixed in 2.6, so that won't be a complaint any longer..

Bye
Shridhar

--
Gravity: What you get when you eat too much and too fast.
On 5 Aug 2003 at 14:15, Peter Childs wrote:

> I've just got autovacuum up, and since we have had a lot of talk about it
> recently I thought some feedback might be useful. It seems to work quite
> well, but it can be rather zealous with its analyzing for the first few
> hours. Currently it's analyzing static (i.e. nothing's changed) tables
> every 10 minutes. Vacuums seem to be about right. I think that many
> vacuums may be slowing down my database....

IIRC there is a per-operation threshold. If the update threshold is 5% and a table is 2% updated, then it should not look at it at all.

It's worth mentioning that you should start the autovacuum daemon on a clean database, i.e. one with no pending vacuum. It is not supposed to start with a database which has lots of cleanup pending. The essence of the autovacuum daemon is to maintain a clean database in a clean state..

I agree, specifying per-table thresholds would be good in autovacuum..

Bye
Shridhar

--
WYSIWYG: What You See Is What You Get.
On Tue, 5 Aug 2003, Shridhar Daithankar wrote:

> > I've just got autovacuum up, and since we have had a lot of talk about
> > it recently I thought some feedback might be useful. It seems to work
> > quite well, but it can be rather zealous with its analyzing for the
> > first few hours. Currently it's analyzing static (i.e. nothing's
> > changed) tables every 10 minutes. Vacuums seem to be about right.
> > I think that many vacuums may be slowing down my database....

Sorry, big typo: that should read "I think that much analyzing may be slowing down my database."

> IIRC there is a per-operation threshold. If the update threshold is 5%
> and a table is 2% updated, then it should not look at it at all.

I left it with debug on overnight and it improved to that after 5 hours. I switched the debug down (to 1) this morning and it has not settled down yet.

> It's worth mentioning that you should start the autovacuum daemon on a
> clean database, i.e. one with no pending vacuum. It is not supposed to
> start with a database which has lots of cleanup pending. The essence of
> the autovacuum daemon is to maintain a clean database in a clean state..
>
> I agree, specifying per-table thresholds would be good in autovacuum..

Peter Childs
Shridhar Daithankar wrote:
> I agree, specifying per-table thresholds would be good in autovacuum..

Which begs the question of what the future direction is for pg_autovacuum.

There would be some merit to having pg_autovacuum throw in some tables in which to store persistent information, and at that point, it would make sense to add some flags to support the respective notions that:

-> Some tables should _never_ be touched;
-> Some tables might get "reset" to indicate that they should be considered as having been recently vacuumed, or perhaps that they badly need vacuuming;
-> As you suggest, per-table thresholds;
-> pg_autovacuum would know when tables were last vacuumed by it...
-> You could record vacuum times to tell pg_autovacuum that you vacuumed something "behind its back."
-> If the system queued up proposed vacuums by having a "queue" table, you could request that pg_autovacuum do a vacuum on a particular table at the next opportunity.

All well and interesting stuff that could be worth implementing.

But the usual talk has been about ultimately integrating the functionality into the backend, making it fairly futile to enhance pg_autovacuum terribly much.

Unfortunately, the "integrate into the backend" thing has long seemed "just around the corner." I think we should either:
a) Decide to enhance pg_autovacuum, or
b) Not.

In view of how long the "better answers" seem to be taking to emerge, I think it makes sense to add functionality to pg_autovacuum.

--
output = reverse("ofni.smrytrebil" "@" "enworbbc")
<http://dev6.int.libertyrms.com/>
Christopher Browne
(416) 646 3304 x124 (land)
Jeff <threshar@torgo.978.org> writes:
> Here's the most recent vacuum for the "active" table. It gets a few
> hundred updates/inserts a minute constantly throughout the day.
> INFO: Pages 27781: Changed 0, Empty 0; Tup 2451648: Vac 0, Keep 0, UnUsed 1003361.
> Total CPU 2.18s/0.61u sec elapsed 2.78 sec.
> I see UnUsed is quite high. This morning I bumped max_fsm_pages to 500000.
> If I'm thinking right, you want UnUsed and max_fsm to be closish, right?

No, they're unrelated. UnUsed is the number of currently-unused tuple pointers in page headers, whereas the FSM parameters are measured in pages. 30000 FSM slots would be more than enough for this table.

The above numbers don't seem terribly unreasonable to me, although probably UnUsed would be smaller if you'd been vacuuming more often. If you see UnUsed continuing to increase, then you definitely ought to shorten the inter-vacuum time.

VACUUM FULL does not reclaim unused tuple pointers AFAIR, except where it is able to release entire pages at the end of the relation. So if you really wanted to get back down to nil UnUsed, you'd need to do a dump and reload of the table (or a near equivalent, such as CLUSTER). Not sure it's worth the trouble.

regards, tom lane
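The pages-versus-pointers distinction Tom draws can be made concrete with a little arithmetic, using the numbers from Jeff's vacuum output (a sketch only; the 10% headroom factor is an arbitrary assumption for illustration, not anything pg_autovacuum or the FSM actually uses):

```python
# Sketch: interpret the VACUUM output numbers from this thread.
# max_fsm_pages is sized in *pages*, while UnUsed counts *tuple pointers*,
# so the two numbers should not be expected to match.

pages = 27781              # "Pages 27781" from the VACUUM output
unused_pointers = 1003361  # "UnUsed 1003361": dead line pointers, not pages

# The FSM needs at most one slot per page of the table; a bit of headroom
# (factor chosen arbitrarily here) covers growth and other tables.
fsm_slots_needed = int(pages * 1.1)

print(fsm_slots_needed)          # 30559: in line with Tom's "30000 FSM slots"
print(unused_pointers // pages)  # 36: roughly 36 unused tuple pointers per page
```

Viewed this way, Jeff's max_fsm_pages of 500000 is over an order of magnitude more than this table needs, and the UnUsed figure says nothing about FSM sizing at all.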
On 5 Aug 2003 at 10:29, Christopher Browne wrote:

> Shridhar Daithankar wrote:
> > I agree, specifying per-table thresholds would be good in autovacuum..
>
> Which begs the question of what the future direction is for pg_autovacuum.
>
> There would be some merit to having pg_autovacuum throw in some tables
> in which to store persistent information, and at that point, it would
> make sense to add some flags to support the respective notions that:

Well, the C++ version I wrote quite a while back, which resides on gborg, unmaintained, did that. It was considered table pollution. However, whenever the autovacuum stuff goes into the backend as such, it is going to need a catalogue.

> -> Some tables should _never_ be touched;

That can be determined at runtime from stats. Not required as a special feature, IMHO..

> -> Some tables might get "reset" to indicate that they should be
> considered as having been recently vacuumed, or perhaps that they
> badly need vacuuming;

Well, the stats collector takes care of that. The autovacuum daemon reads those statistics and maintains a periodic snapshot of them to determine whether or not it needs to vacuum.

Why it crawls on a dirty database is as follows. The autovacuum daemon starts, reads statistics, sets that as the base level, and lets a cycle pass, which is typically a few minutes. When it goes again, it finds that lots of things have been modified and need vacuum, and so it triggers vacuum. Now vacuum goes on cleaning an entire table, which might be a days-long job continuously postponed for one reason or another. Oops.. your database is on its knees..

> -> As you suggest, per-table thresholds;

I would rather put it in terms of pages. If any table wastes 100 pages, it deserves a vacuum..

> -> pg_autovacuum would know when tables were last vacuumed by it...

If you maintain a table in the database, there are lots of things you can maintain. And you need to connect to the database anyway to fire vacuum..

> -> You could record vacuum times to tell pg_autovacuum that you
> vacuumed something "behind its back."

It should notice..

> -> If the system queued up proposed vacuums by having a "queue"
> table, you could request that pg_autovacuum do a vacuum on a
> particular table at the next opportunity.

That won't ever happen if autovacuum is constantly running..

> Unfortunately, the "integrate into the backend" thing has long seemed
> "just around the corner." I think we should either:
> a) Decide to enhance pg_autovacuum, or
> b) Not.

In fact, I would say that after we have autovacuum, we should not integrate it. It is a very handy tool for tightening up a database. Other databases go the other way round: they develop maintenance functionality built in and then create tools on top of it. Here we have it already done. It's just that it should be triggered by default. That would rock..

Bye
Shridhar

--
Bubble Memory, n.: A derogatory term, usually referring to a person's intelligence. See also "vacuum tube".
From: "Christopher Browne" <cbbrowne@libertyrms.info>
> Shridhar Daithankar wrote:
> > I agree, specifying per-table thresholds would be good in autovacuum..
>
> Which begs the question of what the future direction is for pg_autovacuum.

This is a good question.

> There would be some merit to having pg_autovacuum throw in some tables
> in which to store persistent information

As long as pg_autovacuum is either a contrib module, or not integrated into the backend, we can't do this. I don't think we should require that tables be added to your database in order to run pg_autovacuum. I have thought that a "helper table" could be used; this table, if found by pg_autovacuum, would be used for per-table defaults, exclusion lists, etc. That way pg_autovacuum can run without a polluted database, or can be tuned. If pg_autovacuum is made official, moves out of contrib, and becomes a core tool, then we can either add columns to some system catalogs to track this information or add a new system table.

> All well and interesting stuff that could be worth implementing.
>
> But the usual talk has been about ultimately integrating the
> functionality into the backend, making it fairly futile to enhance
> pg_autovacuum terribly much.
>
> Unfortunately, the "integrate into the backend" thing has long seemed
> "just around the corner." I think we should either:
> a) Decide to enhance pg_autovacuum, or
> b) Not.

I have been talking about "integrating it into the backend" for a while, and I used to think it was "just around the corner"; unfortunately, my work schedule and my C skills have prevented me from getting anything useful working. If you would like to work on it, I would help as much as possible.

I chose to leave pg_autovacuum simple and not add too many features because the core team has said that it needs to be integrated into the backend before it can be considered a core tool.

ps, please cc me as I'm not subscribed to the list.
From: "Shridhar Daithankar" <shridhar_daithankar@persistent.co.in>
> On 5 Aug 2003 at 10:29, Christopher Browne wrote:
> > There would be some merit to having pg_autovacuum throw in some tables
> > in which to store persistent information,
>
> Well, the C++ version I wrote quite a while back, which resides on gborg,
> unmaintained, did that. It was considered table pollution. However,
> whenever the autovacuum stuff goes into the backend as such, it is going
> to need a catalogue.

Right. I think there is a distinction between adding a system catalogue needed for core functionality, and requiring a user to put a table in the user's namespace of their database just to run a utility. As I mentioned in my other email, I do think that a non-required helper table could be a good idea; the question is whether we should do this, considering autovacuum should be integrated into the backend, at which point pg_autovacuum will be scrapped.

> > -> Some tables should _never_ be touched;
>
> That can be determined at runtime from stats. Not required as a special
> feature, IMHO..
>
> > -> Some tables might get "reset" to indicate that they should be
> > considered as having been recently vacuumed, or perhaps that they
> > badly need vacuuming;
>
> Well, the stats collector takes care of that. The autovacuum daemon reads
> those statistics and maintains a periodic snapshot of them to determine
> whether or not it needs to vacuum.

Actually, I don't think pg_autovacuum will notice. The stats that it watches know nothing about when a table is vacuumed outside of pg_autovacuum. I agree this is a deficiency, but I don't know how to get that information without being part of the backend.

> Why it crawls on a dirty database is as follows. The autovacuum daemon
> starts, reads statistics, sets that as the base level, and lets a cycle
> pass, which is typically a few minutes. When it goes again, it finds that
> lots of things have been modified and need vacuum, and so it triggers
> vacuum.
>
> Now vacuum goes on cleaning an entire table, which might be a days-long
> job continuously postponed for one reason or another. Oops.. your
> database is on its knees..

If one table takes days (or even hours) to vacuum, then most probably it requires *a lot* of activity before pg_autovacuum will try to vacuum the table. The thresholds are based on two factors: a base value (default = 1,000) and a multiplier (default = 2) of the total number of tuples. So using the default pg_autovacuum settings, a table with 1,000,000 rows will not be vacuumed until the number of rows updated or deleted = 2,001,000. So a table shouldn't be vacuumed until it really needs it. This setup also works well for a small table of, say, 100 rows, which will be vacuumed after every 1,200 updates or deletes.

> > -> As you suggest, per-table thresholds;
>
> I would rather put it in terms of pages. If any table wastes 100 pages,
> it deserves a vacuum..

Unfortunately, I don't know of an efficient method of looking at how many pages have free space without running vacuum, or without using the pgstattuple contrib module, which in my testing took about 90% as long to run as vacuum.

> > -> pg_autovacuum would know when tables were last vacuumed by it...

pg_autovacuum already does this, but the data does not persist through pg_autovacuum restarts.

> If you maintain a table in the database, there are lots of things you can
> maintain. And you need to connect to the database anyway to fire vacuum..
>
> > -> You could record vacuum times to tell pg_autovacuum that you
> > vacuumed something "behind its back."
>
> It should notice..

I don't think it does.

> > -> If the system queued up proposed vacuums by having a "queue"
> > table, you could request that pg_autovacuum do a vacuum on a
> > particular table at the next opportunity.

That would be a design change, as right now pg_autovacuum doesn't keep a list of tables to vacuum at all; it just decides to vacuum or not vacuum a table as it loops through the database.
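The threshold arithmetic described above can be sketched in a few lines (a sketch based only on the defaults quoted in this message; pg_autovacuum's actual bookkeeping differs in the details):

```python
# Sketch of pg_autovacuum's vacuum threshold, per the defaults quoted in
# this thread: threshold = base + multiplier * number_of_tuples.
# A table is considered for vacuum once updates + deletes exceed it.

BASE_THRESHOLD = 1000  # default base value
MULTIPLIER = 2         # default multiplier on the table's tuple count

def vacuum_threshold(reltuples: int) -> int:
    """Number of updated/deleted rows that triggers a vacuum."""
    return BASE_THRESHOLD + MULTIPLIER * reltuples

print(vacuum_threshold(1_000_000))  # 2001000, the large-table example above
print(vacuum_threshold(100))        # 1200, the small-table example above
```

Because the threshold scales with table size, big tables accumulate proportionally more dead rows before being vacuumed, while small tables get revisited after a roughly fixed amount of churn.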
> > Unfortunately, the "integrate into the backend" thing has long seemed
> > "just around the corner." I think we should either:
> > a) Decide to enhance pg_autovacuum, or
> > b) Not.

I have been of the opinion not to enhance pg_autovacuum, because it needs to be integrated, and enhancing it will only put that off. Also, I think many of the real enhancements can only come from being integrated (using the FSM to make decisions, keeping track of external vacuums, modifying system catalogs to keep autovacuum information, etc.).

> In fact, I would say that after we have autovacuum, we should not
> integrate it. It is a very handy tool for tightening up a database.
> Other databases go the other way round: they develop maintenance
> functionality built in and then create tools on top of it. Here we have
> it already done.

I'm not sure I understand your point.

> It's just that it should be triggered by default. That would rock..

I agree that if pg_autovacuum becomes a core tool (not contrib and not integrated into the backend), then pg_ctl should fire it up and kill it automatically.
"Matthew T. O'Connor" <matthew@zeut.net> writes:
> I chose to leave pg_autovacuum simple and not add too many features
> because the core team has said that it needs to be integrated into the
> backend before it can be considered a core tool.

I think actually it makes plenty of sense to enhance pg_autovacuum while it's still contrib stuff. My guess is it'll be much less painful to whack it around in minor or major ways while it's standalone code. Once it's integrated in the backend, making significant changes will be harder and more ticklish. So, now is precisely the time to be experimenting to find out what works well and what features are needed.

regards, tom lane
From: "Tom Lane" <tgl@sss.pgh.pa.us>
> I think actually it makes plenty of sense to enhance pg_autovacuum while
> it's still contrib stuff. My guess is it'll be much less painful to
> whack it around in minor or major ways while it's standalone code.
> Once it's integrated in the backend, making significant changes will be
> harder and more ticklish. So, now is precisely the time to be
> experimenting to find out what works well and what features are needed.

Fair point. My only concern is that a backend-integrated pg_autovacuum would be radically different from the current libpq-based client application. When integrated into the backend, you have access to a lot of information that you don't have access to as a client. I know one goal I have for the backend version is to base it on the FSM and not require the stats collector, since the collector has a measurable negative effect on performance. But in the more general sense of learning what features people want (exclusion lists, per-table defaults, etc.), I agree the current version is a sufficient testing ground.
From: "Tom Lane" <tgl@sss.pgh.pa.us>
> So, now is precisely the time to be experimenting to find out what works
> well and what features are needed.

Another quick question while I have your attention :-) Since pg_autovacuum is a contrib module, does that mean I can make functional changes that will be included in a point release of 7.4?
"Matthew T. O'Connor" <matthew@zeut.net> writes:
> Since pg_autovacuum is a contrib module, does that mean I can make
> functional changes that will be included in a point release of 7.4?

Well, the bar is lower for contrib stuff than for core, but you'd better get such changes in PDQ, I'd say ...

regards, tom lane
Tom Lane wrote:
> "Matthew T. O'Connor" <matthew@zeut.net> writes:
> > Since pg_autovacuum is a contrib module, does that mean I can make
> > functional changes that will be included in a point release of 7.4?
>
> Well, the bar is lower for contrib stuff than for core, but you'd better
> get such changes in PDQ, I'd say ...

The contrib stuff is usually at the control of the author, so you can make changes relatively late. However, the later the changes, the less testing they get; but the decision is mostly yours, rather than core's.

--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
Matthew T. O'Connor wrote:
> Fair point. My only concern is that a backend-integrated pg_autovacuum
> would be radically different from the current libpq-based client
> application.

Unfortunately, a "configurable-via-tables" pg_autovacuum is also going to be quite different from the current "unconfigurable" version.

If we were to make it configurable, I would suggest doing so via specifying a database and schema into which it would then insert a set of tables to provide whatever information was considered worth 'fiddling' with. But at that point, it makes sense to add in quite a bit of "configurable" behaviour, such as:

-> Specifying that certain tables should _never_ be automatically vacuumed.
-> Establishing a "queue" of tables that pg_autovacuum plans to vacuum, so that users could add in desired vacuums ("after the other stuff being handled, force in a vacuum of app_table_foo"). That way, vacuums can be 'forced in' without introducing the possibility that multiple vacuums might be done at once...
-> Making information about what vacuums have been done/planned persistent across runs of pg_autovacuum, and even across shutdowns of the DBMS.

This changes behaviour enough that I'm not sure it's the same "program" as the unconfigurable version. Almost every option would be substantially affected by the logic:

  if (CONFIG_DATA_IN_DB) {
      /* Logic path that uses data in Vacuum Schema */
  } else {
      /* More banal logic */
  }

If I can store configuration in the database, then I'd like to also make up a view or two, and possibly even base the logic used on views that combine configuration tables with system views. In effect, that makes for a _third_ radically different option.

--
output = reverse("ofni.smrytrebil" "@" "enworbbc")
<http://dev6.int.libertyrms.com/>
Christopher Browne
(416) 646 3304 x124 (land)
Try the pg_autovacuum daemon in the CVS contrib dir. It works fine with 7.3.

Chris

----- Original Message -----
From: "Jeff" <threshar@torgo.978.org>
To: <pgsql-performance@postgresql.org>
Sent: Tuesday, August 05, 2003 8:09 PM
Subject: [PERFORM] Some vacuum & tuning help

> We recently upgraded from pg 7.0.2 to 7.3.4 and things were happy. I'm
> trying to fine-tune things to get it running a bit better and I'm trying
> to figure out how vacuum output correlates to tuning parameters.
> [...]
On Tue, 2003-08-05 at 12:49, Bruce Momjian wrote:
> The contrib stuff is usually at the control of the author, so you can
> make changes relatively late. However, the later the changes, the less
> testing they get; but the decision is mostly yours, rather than core's.

Well, I don't have anything in the hopper right now, so there is little chance anything would be ready before the release. My real question was: can I make large changes to a contrib module in a point release? Meaning, 7.4.0 will have what is in beta, but 7.4.1 would have a much improved version. Does that sound possible, or too radical for a point release?
Matthew T. O'Connor wrote:
> Well, I don't have anything in the hopper right now, so there is little
> chance anything would be ready before the release. My real question was:
> can I make large changes to a contrib module in a point release? Meaning,
> 7.4.0 will have what is in beta, but 7.4.1 would have a much improved
> version. Does that sound possible, or too radical for a point release?

Yes, that is possible, but you should try to get lots of testers, because there is little testing in minor releases.

--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
On Tue, 2003-08-05 at 17:40, Christopher Browne wrote:
> Unfortunately, a "configurable-via-tables" pg_autovacuum is also going
> to be quite different from the current "unconfigurable" version.

True; however, I would like to preserve the "unconfigured" functionality so that it can be run against a totally unmodified database cluster. If it finds configuration information on the server, then it uses it; otherwise it just acts as it does now.

> But at that point, it makes sense to add in quite a bit of
> "configurable" behaviour, such as:
>
> -> Specifying that certain tables should _never_ be automatically
> vacuumed.

agreed

> -> Establishing a "queue" of tables that pg_autovacuum plans to
> vacuum, so that users could add in desired vacuums ("after the
> other stuff being handled, force in a vacuum of app_table_foo").
> That way, vacuums can be 'forced in' without introducing the
> possibility that multiple vacuums might be done at once...

makes sense.

> -> Making information about what vacuums have been done/planned
> persistent across runs of pg_autovacuum, and even across
> shutdowns of the DBMS.

good.

> This changes behaviour enough that I'm not sure it's the same
> "program" as the unconfigurable version. Almost every option would be
> substantially affected by the logic:
>
>   if (CONFIG_DATA_IN_DB) {
>       /* Logic path that uses data in Vacuum Schema */
>   } else {
>       /* More banal logic */
>   }

I'm not so sure it's that different. In either case we are going to have a threshold and decide to vacuum based on that threshold. The change is only that the data would be persistent, and could be customized on a per-table basis. The logic only really changes if running unconfigured uses different data than the configured version, which I don't see as being proposed.

> If I can store configuration in the database, then I'd like to also
> make up a view or two, and possibly even base the logic used on views
> that combine configuration tables with system views. In effect, that
> makes for a _third_ radically different option.

Not sure I see what all you are implying here. Please expand on this if you deem it worthy.

I guess I'll start coding again. I'll send an email to the hackers list tomorrow evening with as much info / design as I can think of.

Matthew
> On Tue, 2003-08-05 at 17:40, Christopher Browne wrote:
> > Unfortunately, a "configurable-via-tables" pg_autovacuum is also going
> > to be quite different from the current "unconfigurable" version.

You don't need to create actual tables - just use 'virtual' tables, like
the pg_settings one.  That's all based off a set-returning function.
You can use updates and inserts to manipulate internal data structures
or something...

Chris
"Matthew T. O'Connor" <matthew@zeut.net> writes:
> ... My real question
> was: can I make large changes to a contrib module in a point release,
> meaning 7.4.0 will have what is in beta, but 7.4.1 would have a much
> improved version?  Does that sound possible?

For core code, the answer would be a big NYET.  We do not do feature
additions in point releases, only bug fixes.  While contrib code is more
under the author's control than the core committee's control, I'd still
say that you'd be making a big mistake not to follow that basic
guideline.  People expect release x.y.z+1 to be the same as x.y.z except
for bug fixes.  Introducing any new bugs into x.y.z+1 would cause a
large loss in your credibility.

(speaking as one who's introduced new bugs into a point release
recently, and is still embarrassed about it, even though the intent
was only to fix older bugs...)

			regards, tom lane
On Wed, Aug 06, 2003 at 12:45:34AM -0400, Tom Lane wrote:
> For core code, the answer would be a big NYET.  We do not do feature
> additions in point releases, only bug fixes.  While contrib code is more
> under the author's control than the core committee's control, I'd still
> say that you'd be making a big mistake not to follow that basic
> guideline.  People expect release x.y.z+1 to be the same as x.y.z except
> for bug fixes.  Introducing any new bugs into x.y.z+1 would cause a
> large loss in your credibility.

... and since contrib packages are distributed along with PG, it would
also be a loss to PG's credibility.  IMHO, core should disallow feature
additions in point releases for contrib modules as well as the core
code, except in very unusual situations.  If contrib authors don't like
this facet of our release engineering process, they can always
distribute their code via some other outlet (gborg, SF, etc.).

-Neil
On Wed, 2003-08-06 at 00:45, Tom Lane wrote:
> For core code, the answer would be a big NYET.  We do not do feature
> additions in point releases, only bug fixes.  While contrib code is more
> under the author's control than the core committee's control, I'd still
> say that you'd be making a big mistake not to follow that basic
> guideline.  People expect release x.y.z+1 to be the same as x.y.z except
> for bug fixes.  Introducing any new bugs into x.y.z+1 would cause a
> large loss in your credibility.
>
> (speaking as one who's introduced new bugs into a point release
> recently, and is still embarrassed about it, even though the intent
> was only to fix older bugs...)

Right, OK, that is basically the answer I was expecting, but thought I
would ask.

Matthew