Thread: AutoVacuum Behaviour Question
Hi All,

I have enabled autovacuum in our PostgreSQL cluster of databases. What I have noticed is that the autovacuum process keeps selecting the same database to perform autovacuums on and does not select any of the others within the cluster. Is this normal behaviour, or do I need to do something more elaborate with my settings?

Our main concern is the "blueface-service" database. The sipaccounts table has some high traffic, mainly updates. At the end of an average day's run without autovacuum this table, which is normally around 20MB, gets bloated to around 2.2GB (now, imagine a busy day), at which point our nightly CLUSTER cleans it up. However, we would like the autovacuum to be more stringent with this particular table and keep the bloat to a minimum.

Our setup is as follows:

OS version: Solaris 10 Update 3
DB version: PostgreSQL 8.2.4

I have checked the pg_catalog.pg_stat_all_tables view in each database, and the last_autovacuum/last_autoanalyze fields are null for all our databases except the blueface-crm one. The autovacuum does appear to be running, but only selecting one database each time.

------------------------------ Log Excerpt ------------------------------
DEBUG: autovacuum: processing database "blueface-crm"
DEBUG: autovacuum: processing database "blueface-crm"
DEBUG: autovacuum: processing database "blueface-crm"
DEBUG: autovacuum: processing database "blueface-crm"
DEBUG: autovacuum: processing database "blueface-crm"
DEBUG: autovacuum: processing database "blueface-crm"

--------------------------------------------
Auto Vacuum Settings:
--------------------------------------------
autovacuum = on
autovacuum_naptime = 1min
autovacuum_vacuum_threshold = 500
autovacuum_analyze_threshold = 250
autovacuum_vacuum_scale_factor = 0.2
autovacuum_analyze_scale_factor = 0.1
autovacuum_freeze_max_age = 200000000
autovacuum_vacuum_cost_delay = -1
autovacuum_vacuum_cost_limit = -1
stats_command_string = on
update_process_title = on
stats_start_collector = on
stats_block_level = on
stats_row_level = on
stats_reset_on_server_start = off
vacuum_cost_delay = 0
vacuum_cost_limit = 200
log_min_messages = debug1

If you require any additional info I'd be happy to pass it along.

Thanks
Bruce
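[For the per-table tightening asked about above, 8.2 has no per-table autovacuum GUCs; per-table thresholds live in the pg_autovacuum system catalog instead. A minimal sketch, connected to blueface-service; the threshold numbers are illustrative assumptions, not recommendations from the thread:

-- Column order per the 8.2 pg_autovacuum catalog:
-- (vacrelid, enabled, vac_base_thresh, vac_scale_factor,
--  anl_base_thresh, anl_scale_factor, vac_cost_delay, vac_cost_limit,
--  freeze_min_age, freeze_max_age)
INSERT INTO pg_autovacuum
VALUES ('sipaccounts'::regclass,  -- the high-churn table
        true,                     -- keep autovacuum enabled for it
        200, 0.01,                -- vacuum after 200 + 1% of rows change
        100, 0.005,               -- analyze thresholds
        -1, -1,                   -- inherit cost delay/limit from postgresql.conf
        -1, -1);                  -- inherit freeze ages from postgresql.conf

A negative value in any column means "use the global setting", so only the two scale factors in this sketch actually differ from the posted configuration.]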
Bruce McAlister wrote:
> Hi All,
>
> I have enabled autovacuum in our PostgreSQL cluster of databases. What I
> have noticed is that the autovacuum process keeps selecting the same
> database to perform autovacuums on and does not select any of the others
> within the cluster. Is this normal behaviour or do I need to do
> something more elaborate with my settings?

There are two reasons autovacuum would keep picking the same database:

1. The other databases do not have pgstat entries.

2. This database is in danger of Xid wraparound and the vacuum run fails to complete for some reason.

> Our main concern is the "blueface-service" database. The sipaccounts
> table has some high traffic, mainly updates.

Are there non-null values in the pg_stat views for tables in the blueface-service database? If there are, then you can discard (1) as the problem. If all values are null for all tables, then you have the stats collector disabled for that database or something similar (maybe by ALTER DATABASE ... SET). In this case, re-enable it and issue a manual VACUUM so that pgstat is populated. (I think the easiest way to check is SELECT datname, datconfig FROM pg_database.)

Regarding (2), you would need to check whether the autovacuum run dies with an ERROR. I'd advise setting a log_line_prefix that includes the PID (%p) so that you can check whether the process goes away cleanly or dies early. It is not impossible that a corrupt index or table is causing autovacuum to die, but it should certainly show up in the logs.

--
Alvaro Herrera Valdivia, Chile ICBM: S 39º 49' 18.1", W 73º 13' 56.4"
"He who lives for the future is a dreamer, and he who lives for the past,
a fool" (Luis Adler, "Los tripulantes de la noche")
Alvaro Herrera wrote:
> Bruce McAlister wrote:
>> Hi All,
>>
>> I have enabled autovacuum in our PostgreSQL cluster of databases. What I
>> have noticed is that the autovacuum process keeps selecting the same
>> database to perform autovacuums on and does not select any of the others
>> within the cluster. Is this normal behaviour or do I need to do
>> something more elaborate with my settings?
>
> There are two reasons autovacuum would keep picking the same database:
>
> 1. The other databases do not have pgstat entries.
>
> 2. This database is in danger of Xid wraparound and the vacuum run
>    fails to complete for some reason.
>
>> Our main concern is the "blueface-service" database. The sipaccounts
>> table has some high traffic, mainly updates.
>
> Are there non-null values in the pg_stat views for tables in the
> blueface-service database? If there are, then you can discard (1) as
> the problem. If all values are null for all tables, then you have the
> stats collector disabled for that database or something similar (maybe
> by ALTER DATABASE ... SET). In this case, re-enable it and issue a
> manual VACUUM so that pgstat is populated. (I think the easiest way to
> check is SELECT datname, datconfig FROM pg_database.)

I have just checked pg_stat_all_tables in the pg_catalog schema and I can see the index scans etc. incrementing, so the data in those views is updating. Just an FYI: I've enabled manual vacuum analyze runs on the blueface-service database until we've found what's going wrong here. The output from the select query you suggested is as follows:

         datname         | datconfig
-------------------------+-----------
 postgres                |
 blueface-webmail        |
 blueface-billingreports |
 blueface-service        |
 blueface-cards          |
 template1               |
 template0               |
 blueface-crmsupport     |
 blueface-qualmon        |
 asterisk-cdrgw          |
 hylafax                 |
 thelab-sipswitch        |
 whitelabel-ibb          |
 whitelabel-pleasant     |
 whitelabel-rapid        |
 whitelabel-test         |
 whitelabel-worlddest    |
 blueface-crm            |
 blueface-billedcalls    |
 asterisk-cdr            |
 mysipswitch             |
 whitelabel-ice          |

> Regarding (2), you would need to check whether the autovacuum run dies
> with an ERROR. I'd advise setting a log_line_prefix that includes the
> PID (%p) so that you can check whether the process goes away cleanly or
> dies early. It is not impossible that a corrupt index or table is
> causing autovacuum to die, but it should certainly show up in the logs.

I enabled the log_line_prefix option with %p in the value, and I only get the following output in the logs:

Jun 26 23:46:07 bfiedb01 postgres[2836]: [ID 748848 local0.debug] [5-1] 2836 DEBUG: autovacuum: processing database "blueface-crm"
Jun 26 23:47:32 bfiedb01 postgres[2863]: [ID 748848 local0.debug] [5-1] 2863 DEBUG: autovacuum: processing database "blueface-crm"

I can't really tell from these logs if the process is dying early or not. I have also just run a 'REINDEX DATABASE "blueface-crm";' to ensure that the indexes are sane, and a manual 'VACUUM FULL ANALYZE VERBOSE;' to ensure it goes through manually.

What does the DEBUG1 output of a normal autovacuum run look like in the log file? Any other suggestions would be greatly appreciated.

Thanks
Bruce McAlister wrote:
> I have just checked pg_stat_all_tables in the pg_catalog schema and I
> can see the index scans etc. incrementing, so the data in those views
> is updating. Just an FYI: I've enabled manual vacuum analyze runs on
> the blueface-service database until we've found what's going wrong
> here. The output from the select query you suggested is as follows:
>
>          datname         | datconfig
> -------------------------+-----------
>  postgres                |
>  blueface-webmail        |
> [etc]

Ok, so it's not that you manually disabled autovacuum. And pgstat is working on those databases. And all databases share the postgresql.conf autovacuum configuration.

> I enabled the log_line_prefix option with %p in the value, and I only
> get the following output in the logs:
>
> Jun 26 23:46:07 bfiedb01 postgres[2836]: [ID 748848 local0.debug] [5-1] 2836 DEBUG: autovacuum: processing database "blueface-crm"
> Jun 26 23:47:32 bfiedb01 postgres[2863]: [ID 748848 local0.debug] [5-1] 2863 DEBUG: autovacuum: processing database "blueface-crm"

I assume that there is no other line for process 2836 before the line for process 2863. Can you recheck that?

What does

select datname, age(datfrozenxid) from pg_database;

show?

> I've also just run ... a manual 'VACUUM FULL ANALYZE VERBOSE;' to
> ensure it goes through manually.

And it does finish successfully?

> What does the DEBUG1 output of a normal autovacuum run look like in the
> log file?

Nothing interesting shows up:

LOG: autovacuum: processing database "test1"
LOG: autovacuum: processing database "test2"

If you try with debug2, it looks a bit more interesting:

LOG: autovacuum: processing database "test2"
DEBUG: autovac: will VACUUM foo
DEBUG: vacuuming "public.foo"
DEBUG: "foo": removed 10000 row versions in 55 pages
DETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.
DEBUG: "foo": found 10000 removable, 0 nonremovable row versions in 55 pages
DETAIL: 0 dead row versions cannot be removed yet.
        There were 0 unused item pointers.
        0 pages are entirely empty.
        CPU 0.00s/0.00u sec elapsed 0.00 sec.
DEBUG: "foo": truncated 55 to 0 pages
DETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.

The only thing I can think of right now is that pgstats does not have entries for the other databases for some reason. How that can happen escapes me.

--
Alvaro Herrera  Developer, http://www.PostgreSQL.org/
"Doing what he did amounts to sticking his fingers under the hood of the
implementation; if he gets his fingers burnt, it's his problem."  (Tom Lane)
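[To reproduce that level of detail, the logging knobs already mentioned in the thread just need one more notch; a sketch of the relevant postgresql.conf lines (the data directory path is an assumption):

log_min_messages = debug2   # was debug1 in the original settings
log_line_prefix = '%p '     # stamp each line with the backend PID

# pick up the change without a restart:
# pg_ctl -D /path/to/data reload
]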
Alvaro Herrera wrote:
> Bruce McAlister wrote:
>
>> I have just checked pg_stat_all_tables in the pg_catalog schema and I
>> can see the index scans etc. incrementing, so the data in those views
>> is updating. Just an FYI: I've enabled manual vacuum analyze runs on
>> the blueface-service database until we've found what's going wrong
>> here. The output from the select query you suggested is as follows:
>>
>>          datname         | datconfig
>> -------------------------+-----------
>>  postgres                |
>>  blueface-webmail        |
> [etc]
>
> Ok, so it's not that you manually disabled autovacuum. And pgstat is
> working on those databases. And all databases share the postgresql.conf
> autovacuum configuration.

Yes, that's correct, all those databases fall under a single postgresql.conf configuration file.

>> I enabled the log_line_prefix option with %p in the value, and I only
>> get the following output in the logs:
>>
>> Jun 26 23:46:07 bfiedb01 postgres[2836]: [ID 748848 local0.debug] [5-1] 2836 DEBUG: autovacuum: processing database "blueface-crm"
>> Jun 26 23:47:32 bfiedb01 postgres[2863]: [ID 748848 local0.debug] [5-1] 2863 DEBUG: autovacuum: processing database "blueface-crm"
>
> I assume that there is no other line for process 2836 before the line
> for process 2863. Can you recheck that?

No, there are no other lines between the two autovacuum runs. This is a more recent snippet, straight from the log file, no modifications:

Jun 27 08:01:03 bfiedb01 postgres[15801]: [ID 748848 local0.debug] [5-1] 15801 DEBUG: autovacuum: processing database "blueface-crm"
Jun 27 08:02:57 bfiedb01 postgres[15826]: [ID 748848 local0.debug] [5-1] 15826 DEBUG: autovacuum: processing database "blueface-crm"
Jun 27 08:04:55 bfiedb01 postgres[15871]: [ID 748848 local0.debug] [5-1] 15871 DEBUG: autovacuum: processing database "blueface-crm"

> What does
>
> select datname, age(datfrozenxid) from pg_database;
>
> show?

select datname, age(datfrozenxid) from pg_database;
         datname         |    age
-------------------------+-----------
 postgres                | 103837746
 blueface-webmail        | 103851569
 blueface-billingreports | 103943960
 blueface-service        | 100002166
 blueface-cards          | 103948279
 template1               | 103831712
 template0               | 387945736
 blueface-crmsupport     | 103933017
 blueface-qualmon        | 103881267
 asterisk-cdrgw          | 103959639
 hylafax                 | 103847354
 thelab-sipswitch        | 103827152
 whitelabel-ibb          | 103813843
 whitelabel-pleasant     | 103796261
 whitelabel-rapid        | 103791708
 whitelabel-test         | 103787680
 whitelabel-worlddest    | 103782784
 blueface-crm            | 441746613
 blueface-billedcalls    | 100127483
 asterisk-cdr            | 100004575
 mysipswitch             | 103842683
 whitelabel-ice          | 103805834

>> I've also just run a manual 'VACUUM FULL ANALYZE VERBOSE;' to ensure
>> it goes through manually.
>
> And it does finish successfully?

Yes, the full vacuum completed successfully.

>> What does the DEBUG1 output of a normal autovacuum run look like in
>> the log file?
>
> Nothing interesting shows up:
>
> LOG: autovacuum: processing database "test1"
> LOG: autovacuum: processing database "test2"
>
> If you try with debug2, it looks a bit more interesting:
>
> LOG: autovacuum: processing database "test2"
> DEBUG: autovac: will VACUUM foo
> DEBUG: vacuuming "public.foo"
> DEBUG: "foo": removed 10000 row versions in 55 pages
> DETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.
> DEBUG: "foo": found 10000 removable, 0 nonremovable row versions in 55 pages
> DETAIL: 0 dead row versions cannot be removed yet.
>         There were 0 unused item pointers.
>         0 pages are entirely empty.
>         CPU 0.00s/0.00u sec elapsed 0.00 sec.
> DEBUG: "foo": truncated 55 to 0 pages
> DETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.

I will run with DEBUG2 for a while and see if my output looks anything like this :)

> The only thing I can think of right now is that pgstats does not have
> entries for the other databases for some reason. How that can happen
> escapes me.

If you need any information to try and get to the bottom of it all, then please let me know. It would be nice to return to autovacuum runs :)
Bruce McAlister wrote:

Ok, now this is interesting:

> select datname, age(datfrozenxid) from pg_database;
>          datname         |    age
> -------------------------+-----------
>  blueface-crm            | 441746613

Note this value is 440 million, and you said in your original report that

> autovacuum_freeze_max_age = 200000000

is 200 million. So this database is being selected each time because of this.

However, what should happen is that the age of the database decreases after the vacuuming. What's your vacuum_freeze_min_age setting?

--
Alvaro Herrera http://www.flickr.com/photos/alvherre/
"Everything that I think about is more fascinating than the crap in your
head." (Dogbert's interpretation of blogger philosophy)
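[A quick way to run that comparison for every database at once, sketched here; current_setting() returns text, hence the cast:

select datname,
       age(datfrozenxid) as xid_age,
       current_setting('autovacuum_freeze_max_age')::int as freeze_max_age,
       age(datfrozenxid) > current_setting('autovacuum_freeze_max_age')::int
           as forces_autovacuum
from pg_database
order by age(datfrozenxid) desc;

Any row where forces_autovacuum is true will be picked first by autovacuum on every cycle, exactly as described above.]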
Alvaro Herrera wrote:
> Bruce McAlister wrote:
>
> Ok, now this is interesting:
>
>> select datname, age(datfrozenxid) from pg_database;
>>          datname         |    age
>> -------------------------+-----------
>>  blueface-crm            | 441746613
>
> Note this value is 440 million, and you said in your original report that
>
>> autovacuum_freeze_max_age = 200000000
>
> is 200 million. So this database is being selected each time because of
> this.

Ahhh okay, I didn't know how to extract the age for a database. Learnt something new here.

> However, what should happen is that the age of the database decreases
> after the vacuuming. What's your vacuum_freeze_min_age setting?

My *_freeze_* values are:

autovacuum_freeze_max_age = 200000000
#vacuum_freeze_min_age = 100000000

The vacuum_freeze_min_age is the default at 100 million (I assume). How much is the age decremented by on a vacuum run then?
Bruce McAlister wrote:
> Alvaro Herrera wrote:
>> Bruce McAlister wrote:
>>
>> Ok, now this is interesting:
>>
>>> select datname, age(datfrozenxid) from pg_database;
>>>          datname         |    age
>>> -------------------------+-----------
>>>  blueface-crm            | 441746613
>>
>> Note this value is 440 million, and you said in your original report that
>>
>>> autovacuum_freeze_max_age = 200000000
>>
>> is 200 million. So this database is being selected each time because of
>> this.
>
> Ahhh okay, I didn't know how to extract the age for a database. Learnt
> something new here.
>
>> However, what should happen is that the age of the database decreases
>> after the vacuuming. What's your vacuum_freeze_min_age setting?
>
> My *_freeze_* values are:
>
> autovacuum_freeze_max_age = 200000000
> #vacuum_freeze_min_age = 100000000
>
> The vacuum_freeze_min_age is the default at 100 million (I assume).

What do you get from a SHOW vacuum_freeze_min_age? That would tell you the actual value in use. Most likely it's those 100 million, but note that if you change it, reload, then comment it back out in the file and reload again, the value in use will be the one to which you first changed it.

> How much is the age decremented by on a vacuum run then?

It should be decremented to the vacuum_freeze_min_age. However, I'm running some experiments with your settings and apparently it's not working as it should.

--
Alvaro Herrera http://www.PlanetPostgreSQL.org/
"One must remember that existence in the cosmos, and particularly the
development of civilizations within it, are unfortunately not at all
idyllic" (Ijon Tichy)
Alvaro Herrera wrote:
>> How much is the age decremented by on a vacuum run then?
>
> It should be decremented to the vacuum_freeze_min_age. However, I'm
> running some experiments with your settings and apparently it's not
> working as it should.

Nah, false alarm, it's working as expected for me. And I see the age of databases being correctly decreased to the freeze min age (plus however many transactions it took to do the vacuuming work). So I'm still at a loss as to why it is failing to advance the datfrozenxid of your database.

Please let me have a look at this query result while connected to that database:

select relname, relfrozenxid from pg_class where relkind in ('r', 't');

You can change the relname to oid if showing the table names is problematic for you.

--
Alvaro Herrera http://www.amazon.com/gp/registry/5ZYLFMCVHXC
"In the beginning there was UNIX, and UNIX spoke and said: "Hello world\n".
It did not say "Hello New Jersey\n", nor "Hello USA\n"."
Alvaro Herrera wrote:
> Bruce McAlister wrote:
>> Alvaro Herrera wrote:
>>> [the blueface-crm age of 441 million vs. autovacuum_freeze_max_age of
>>> 200 million, quoted above]
>>
>> Ahhh okay, I didn't know how to extract the age for a database. Learnt
>> something new here.
>>
>>> However, what should happen is that the age of the database decreases
>>> after the vacuuming. What's your vacuum_freeze_min_age setting?
>>
>> My *_freeze_* values are:
>>
>> autovacuum_freeze_max_age = 200000000
>> #vacuum_freeze_min_age = 100000000
>>
>> The vacuum_freeze_min_age is the default at 100 million (I assume).
>
> What do you get from a SHOW vacuum_freeze_min_age? That would tell you
> the actual value in use. Most likely it's those 100 million, but note
> that if you change it, reload, then comment it back out in the file and
> reload again, the value in use will be the one to which you first
> changed it.

Before Change
~~~~~~~~~~~~~
blueface-crm=# SHOW vacuum_freeze_min_age ;
 vacuum_freeze_min_age
-----------------------
 100000000
(1 row)

Now I edited postgresql.conf, changed vacuum_freeze_min_age to 150 million, and reloaded ("pg_ctl -D `pwd` reload"):

Change to 150 million
~~~~~~~~~~~~~~~~~~~~~
blueface-crm=# SHOW vacuum_freeze_min_age ;
 vacuum_freeze_min_age
-----------------------
 150000000
(1 row)

Now I commented out the vacuum_freeze_min_age value and reloaded:

Commented Out
~~~~~~~~~~~~~
blueface-crm=# SHOW vacuum_freeze_min_age ;
 vacuum_freeze_min_age
-----------------------
 150000000
(1 row)

Now I changed postgresql.conf back to the original value and reloaded:

Back to original
~~~~~~~~~~~~~~~~
blueface-crm=# SHOW vacuum_freeze_min_age ;
 vacuum_freeze_min_age
-----------------------
 100000000
(1 row)

>> How much is the age decremented by on a vacuum run then?
>
> It should be decremented to the vacuum_freeze_min_age. However, I'm
> running some experiments with your settings and apparently it's not
> working as it should.
Alvaro Herrera wrote:
> Alvaro Herrera wrote:
>>> How much is the age decremented by on a vacuum run then?
>>
>> It should be decremented to the vacuum_freeze_min_age. However, I'm
>> running some experiments with your settings and apparently it's not
>> working as it should.

Okay, if it's decremented by the vacuum runs and the decrement is to vacuum_freeze_min_age, how is the age incremented? Is it a reflection of the number of transactions performed on the database?

> Nah, false alarm, it's working as expected for me. And I see the age of
> databases being correctly decreased to the freeze min age (plus however
> many transactions it took to do the vacuuming work). So I'm still at a
> loss as to why it is failing to advance the datfrozenxid of your
> database.
>
> Please let me have a look at this query result while connected to that
> database:
>
> select relname, relfrozenxid from pg_class where relkind in ('r', 't');

In the interest of safety, we used the oid value :)

blueface-crm=# select oid, relfrozenxid from pg_class where relkind in ('r', 't');
   oid   | relfrozenxid
---------+--------------
   10762 | 3291686808
   10769 | 3291686812
   10767 | 3291686811
   10739 | 3291686815
   10737 | 3291686814
   10744 | 3291686961
    2830 | 3291686990
    2832 | 3291686993
    2834 | 3291687121
    2836 | 3291686983
    2838 | 3291687029
    2840 | 3291687676
    2842 | 3291686964
    2844 | 3291687147
    2846 | 3291687144
 2570051 | 2947120794
 2580908 | 3291686819
 1950204 | 3291686821
 1950243 | 3291686825
 1950245 | 3291686827
 1950102 | 3291686829
 1950157 | 3291686836
 1950168 | 3291686896
 1950177 | 3291686900
 1950179 | 3291686902
 1950181 | 3291686904
 1950183 | 3291686906
 1950185 | 3291686908
 1950193 | 3291686910
 1950195 | 3291686912
 1950198 | 3291686914
 1950096 | 3291686916
 1950099 | 3291686918
 2449300 | 3291686920
 1950128 | 3291686922
 1950141 | 3291686925
 1950143 | 3291686927
 1950145 | 3291686933
 1950025 | 3291686935
 1950031 | 3291686944
 1950036 | 3291686946
 2410339 | 3291686948
 2443096 | 3291686951
 1950112 | 3291686958
   10742 | 3291686960
   10749 | 3291686967
    1260 | 3291686963
   10747 | 3291686966
   10754 | 3291686970
   10752 | 3291686969
   10759 | 3291686973
   10757 | 3291686972
   10764 | 3291686809
    2619 | 3291687675
    1247 | 3291686978
    1249 | 3291686980
    1255 | 3291686982
    1259 | 3291686985
    1248 | 3291686987
    2604 | 3291686989
    2606 | 3291686992
    2611 | 3291686995
 2449290 | 3291686998
 2449288 | 3291686997
 1950022 | 3291687215
 1950201 | 3291687008
    2610 | 3291687010
    2617 | 3291687012
    2616 | 3291687014
    2601 | 3291687016
    2602 | 3291687018
    2603 | 3291687020
    2612 | 3291687022
    2613 | 3291687024
    2600 | 3291687026
    2618 | 3291687028
    2620 | 3291687031
    2614 | 3291687118
    2609 | 3291687120
    2605 | 3291687125
    2615 | 3291687127
    2607 | 3291687129
    2608 | 3291687133
    1213 | 3291687137
    1136 | 3291687139
    1214 | 3291687141
    2396 | 3291687143
    1262 | 3291687146
    1261 | 3291687149
 1950028 | 3291686942
 1950212 | 3291687151
 1950046 | 3291687170
 1950214 | 3291687153
 1950057 | 3291687230
 1950217 | 3291687155
 1950062 | 3291687237
 1950222 | 3291687158
 1950076 | 3291687221
 1950237 | 3291687161
 1950093 | 3291687224
 1950240 | 3291687163
 2449510 | 3291687175
 1950109 | 3291686830
 1950118 | 3291687251
 1950148 | 3291687165
 1950152 | 3291687167
 2410336 | 3291687200
 1950138 | 3291686923
 1950042 | 3291687169
 2449508 | 3291687174
 2410341 | 3291686949
 2449775 | 3291687195
 1950160 | 3291686848
 1950049 | 3291687197
 1950165 | 3291687218
 2410334 | 3291687199
 1950174 | 3291686898
 1950079 | 3291687202
 1950083 | 3291687206
 1950007 | 3291687208
 1950012 | 3291687210
 1950015 | 3291687212
 1950018 | 3291687214
 1950163 | 3291687217
 1950219 | 3291687156
 1950234 | 3291687159
 2608570 | 3291687227
 2443093 | 3291687248
 1950069 | 3291687220
 2443098 | 3291686952
 1950086 | 3291687223
 2608567 | 3291687226
 1950054 | 3291687229
 1950060 | 3291687236
 1950065 | 3291687239
 1950038 | 3291687245
 2443091 | 3291687247
 1950114 | 3291687250
(138 rows)

> You can change the relname to oid if showing the table names is
> problematic for you.
Bruce McAlister wrote:
> I will run with DEBUG2 for a while and see if my output looks anything
> like this :)

I've been running in DEBUG2 mode for a couple of days now and I can see the extra information being logged into the log file, but it looks like the autovacuum is not actually starting; it does not look anything like the output you showed me, i.e., what it is supposed to look like. Here's an excerpt of our log for the last 15-20 minutes:

Jun 28 07:56:01 bfiedb01 postgres[17003]: [ID 748848 local0.debug] [45371-1] 17003 DEBUG: autovacuum: processing database "blueface-crm"
Jun 28 07:57:01 bfiedb01 postgres[17025]: [ID 748848 local0.debug] [45371-1] 17025 DEBUG: autovacuum: processing database "blueface-crm"
Jun 28 07:58:01 bfiedb01 postgres[17047]: [ID 748848 local0.debug] [45371-1] 17047 DEBUG: autovacuum: processing database "blueface-crm"
Jun 28 07:58:41 bfiedb01 postgres[29224]: [ID 748848 local0.debug] [5390-1] 29224 DEBUG: checkpoint starting
Jun 28 07:58:41 bfiedb01 postgres[29224]: [ID 748848 local0.debug] [5391-1] 29224 DEBUG: checkpoint complete; 0 transaction log file(s) added, 0 removed, 0 recycled
Jun 28 07:59:01 bfiedb01 postgres[17069]: [ID 748848 local0.debug] [45371-1] 17069 DEBUG: autovacuum: processing database "blueface-crm"
Jun 28 08:00:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug] [45371-1] 29121 DEBUG: forked new backend, pid=17098 socket=9
Jun 28 08:00:01 bfiedb01 postgres[17099]: [ID 748848 local0.debug] [45372-1] 17099 DEBUG: autovacuum: processing database "blueface-crm"
Jun 28 08:00:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug] [45372-1] 29121 DEBUG: server process (PID 17098) exited with exit code 0
Jun 28 08:00:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug] [45373-1] 29121 DEBUG: forked new backend, pid=17100 socket=9
Jun 28 08:00:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug] [45374-1] 29121 DEBUG: server process (PID 17100) exited with exit code 0
Jun 28 08:01:01 bfiedb01 postgres[17122]: [ID 748848 local0.debug] [45375-1] 17122 DEBUG: autovacuum: processing database "blueface-crm"
Jun 28 08:02:01 bfiedb01 postgres[17144]: [ID 748848 local0.debug] [45375-1] 17144 DEBUG: autovacuum: processing database "blueface-crm"
Jun 28 08:03:01 bfiedb01 postgres[17166]: [ID 748848 local0.debug] [45375-1] 17166 DEBUG: autovacuum: processing database "blueface-crm"
Jun 28 08:03:41 bfiedb01 postgres[29224]: [ID 748848 local0.debug] [5392-1] 29224 DEBUG: checkpoint starting
Jun 28 08:03:41 bfiedb01 postgres[29224]: [ID 748848 local0.debug] [5393-1] 29224 DEBUG: recycled transaction log file "000000010000028800000072"
Jun 28 08:03:41 bfiedb01 postgres[29224]: [ID 748848 local0.debug] [5394-1] 29224 DEBUG: checkpoint complete; 0 transaction log file(s) added, 0 removed, 1 recycled
Jun 28 08:04:01 bfiedb01 postgres[17188]: [ID 748848 local0.debug] [45375-1] 17188 DEBUG: autovacuum: processing database "blueface-crm"
Jun 28 08:05:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug] [45375-1] 29121 DEBUG: forked new backend, pid=17216 socket=9
Jun 28 08:05:01 bfiedb01 postgres[17217]: [ID 748848 local0.debug] [45376-1] 17217 DEBUG: autovacuum: processing database "blueface-crm"
Jun 28 08:05:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug] [45376-1] 29121 DEBUG: server process (PID 17216) exited with exit code 0
Jun 28 08:05:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug] [45377-1] 29121 DEBUG: forked new backend, pid=17218 socket=9
Jun 28 08:05:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug] [45378-1] 29121 DEBUG: server process (PID 17218) exited with exit code 0
Jun 28 08:06:01 bfiedb01 postgres[17240]: [ID 748848 local0.debug] [45379-1] 17240 DEBUG: autovacuum: processing database "blueface-crm"
Jun 28 08:07:01 bfiedb01 postgres[17262]: [ID 748848 local0.debug] [45379-1] 17262 DEBUG: autovacuum: processing database "blueface-crm"
Jun 28 08:08:01 bfiedb01 postgres[17286]: [ID 748848 local0.debug] [45379-1] 17286 DEBUG: autovacuum: processing database "blueface-crm"
Jun 28 08:08:41 bfiedb01 postgres[29224]: [ID 748848 local0.debug] [5395-1] 29224 DEBUG: checkpoint starting
Jun 28 08:08:41 bfiedb01 postgres[29224]: [ID 748848 local0.debug] [5396-1] 29224 DEBUG: checkpoint complete; 0 transaction log file(s) added, 0 removed, 0 recycled
Jun 28 08:09:01 bfiedb01 postgres[17308]: [ID 748848 local0.debug] [45379-1] 17308 DEBUG: autovacuum: processing database "blueface-crm"
Jun 28 08:10:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug] [45379-1] 29121 DEBUG: forked new backend, pid=17337 socket=9
Jun 28 08:10:01 bfiedb01 postgres[17338]: [ID 748848 local0.debug] [45380-1] 17338 DEBUG: autovacuum: processing database "blueface-crm"
Jun 28 08:10:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug] [45380-1] 29121 DEBUG: server process (PID 17337) exited with exit code 0
Jun 28 08:10:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug] [45381-1] 29121 DEBUG: forked new backend, pid=17339 socket=9
Jun 28 08:10:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug] [45382-1] 29121 DEBUG: server process (PID 17339) exited with exit code 0
Jun 28 08:11:01 bfiedb01 postgres[17362]: [ID 748848 local0.debug] [45383-1] 17362 DEBUG: autovacuum: processing database "blueface-crm"
Jun 28 08:11:11 bfiedb01 postgres[29121]: [ID 748848 local0.debug] [45383-1] 29121 DEBUG: server process (PID 16521) exited with exit code 0
All the values here look OK, except one:

On Thu, Jun 28, 2007 at 07:50:36AM +0100, Bruce McAlister wrote:
> blueface-crm=# select oid, relfrozenxid from pg_class where relkind in
> ('r', 't');
>    oid   | relfrozenxid
> ---------+--------------
>  2570051 | 2947120794

Whatever this table is, the freeze XID isn't getting updated for some reason...

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
"From each according to his ability. To each according to his ability to
litigate."
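[Rather than eyeballing 138 raw relfrozenxid values, the outlier is easier to spot by sorting on age; a sketch of the same check (oid::regclass prints table names, so keep plain oid if names are sensitive):

select oid::regclass as table_name,
       age(relfrozenxid) as xid_age
from pg_class
where relkind in ('r', 't')
order by age(relfrozenxid) desc
limit 5;

The database's datfrozenxid can only be as new as the oldest relfrozenxid in it, so the top row is what pins the whole database.]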
Martijn van Oosterhout wrote:
> All the values here look OK, except one:
>
> On Thu, Jun 28, 2007 at 07:50:36AM +0100, Bruce McAlister wrote:
>> blueface-crm=# select oid, relfrozenxid from pg_class where relkind in
>> ('r', 't');
>>    oid   | relfrozenxid
>> ---------+--------------
>>  2570051 | 2947120794
>
> Whatever this table is, the freeze XID isn't getting updated for some
> reason...
>
> Have a nice day,

This looks like a temporary relation:

 temp4295 | 2947120794

Is there a way we can manually force these to update?

Which brings me onto a possibly related question. I've noticed that there are temporary tables created in this particular database. I'm not 100% sure how/why these temporary tables are being created, but I assume it must be by some sort of SQL query that runs against the database. How does PostgreSQL handle these temporary tables? I mean, if a temporary table is created by some SQL query, is it up to the user performing the query to remove the temporary table afterwards, or does PostgreSQL automagically remove the temporary table when the query completes?
Bruce McAlister wrote: > Which brings me onto a possibly related question. I've noticed that in > this particular database, that there are temporary tables that are > created. I'm not 100% sure how/why these temporary tables are being > created, but I do assume that it must be by some sort of SQL query that > runs against the database. How does postgresql handle these temporary > tables, i mean, if a temporary table is created by some sql query, is it > up to the user performing the query to remove the temporary table > afterwards, or does postgresql automagically remove the temporary table > when the query completes? That would defeat the purpose of temporary tables. You usually create them to perform queries on a data set from another query from the same session. AFAIK temporary tables are dropped when the session in which they were created terminates, or optionally on commit if specified that way (ON COMMIT DROP). -- Alban Hertroys alban@magproductions.nl magproductions b.v. T: ++31(0)534346874 F: ++31(0)534346876 M: I: www.magproductions.nl A: Postbus 416 7500 AK Enschede // Integrate Your World //
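[A minimal sketch of the lifecycle described above, with a hypothetical table name:

BEGIN;
CREATE TEMP TABLE session_scratch (id int) ON COMMIT DROP;
INSERT INTO session_scratch VALUES (1);
SELECT * FROM session_scratch;  -- usable for the rest of this transaction
COMMIT;                         -- the table is dropped automatically here

-- Without ON COMMIT DROP the table would survive the commit and live
-- until this session disconnects.
]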
Alban Hertroys wrote:
> Bruce McAlister wrote:
>> Which brings me onto a possibly related question. I've noticed that
>> there are temporary tables created in this particular database. [...]
>> is it up to the user performing the query to remove the temporary
>> table afterwards, or does PostgreSQL automagically remove the
>> temporary table when the query completes?
>
> That would defeat the purpose of temporary tables. You usually create
> them to perform queries on a data set from another query from the same
> session.

I just want to verify that I understand you correctly here: do you mean that the temporary table is created by specific SQL (for example, CREATE TEMP TABLE), some actions are performed on that temp table, and then either you remove the temp table yourself, or the postmaster cleans it up when you close the session/connection? What happens if you're using connection pools? I mean, are those sessions deemed "closed" after the queries complete, when the pool connections are persistent?

Excuse my PGSQL ignorance, I'm new to PostgreSQL, and waiting for these PGSQL books to become available:

http://www.network-theory.co.uk/newtitles.html

So at the moment I'm working on principles of databases as opposed to actual intimate knowledge of PGSQL itself.

> AFAIK temporary tables are dropped when the session in which they were
> created terminates, or optionally on commit if specified that way (ON
> COMMIT DROP).
Bruce McAlister wrote: > Excuse my PGSQL ignorance, I'm new to PostgreSQL, and waiting for these > PGSQL books to become available: > > http://www.network-theory.co.uk/newtitles.html I'm pretty sure you'll find those are just bound copies of http://www.postgresql.org/docs/8.2/interactive/index.html Those are the only docs/books that have been produced by the PostgreSQL Global Development Group. Regards, Dave
On Thu, Jun 28, 2007 at 11:12:19AM +0100, Bruce McAlister wrote:
> I just want to verify that I understand you correctly here: do you mean
> that the temporary table is created by specific SQL (for example,
> CREATE TEMP TABLE), some actions are performed on that temp table, and
> then either you remove the temp table yourself, or the postmaster
> cleans it up when you close the session/connection? What happens if
> you're using connection pools? I mean, are those sessions deemed
> "closed" after the queries complete, when the pool connections are
> persistent?

Yes, the temp table is private to the session and will be removed once the session closes, if not sooner. As for connection pools, IIRC there is a RESET SESSION command which should also get rid of the temporary tables.

That's what's weird about your case. I can believe that autovacuum ignores temporary tables, and somehow you've got a temporary table that's been alive for hundreds of millions of transactions...

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
"From each according to his ability. To each according to his ability to
litigate."
Bruce McAlister wrote:
> Martijn van Oosterhout wrote:
>> All the values here look OK, except one:
>>
>> On Thu, Jun 28, 2007 at 07:50:36AM +0100, Bruce McAlister wrote:
>>> blueface-crm=# select oid, relfrozenxid from pg_class where relkind in
>>> ('r', 't');
>>>    oid   | relfrozenxid
>>> ---------+--------------
>>>  2570051 | 2947120794
>>
>> Whatever this table is, the freeze XID isn't getting updated for some
>> reason...

Doh.

> This looks like a temporary relation:
>
>  temp4295 | 2947120794
>
> Is there a way we can manually force these to update?

No. Only the session that created the temp table can vacuum it. Autovacuum skips temp tables. I guess the only thing you can do here is close that session.

I'm thinking that maybe we should make vac_update_datfrozenxid ignore temp tables. But this doesn't really work, because if we were to truncate pg_clog there would be tuples on the temp table marked with XIDs that are nowhere to be found. Maybe we could make some noise about it, though.

This is a problem only in recent releases (8.2), because we started allowing the max freeze age to be configurable.

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Alvaro Herrera wrote:
> Bruce McAlister wrote:
>> Martijn van Oosterhout wrote:
>>> All the values here look OK, except one:
>>>
>>> On Thu, Jun 28, 2007 at 07:50:36AM +0100, Bruce McAlister wrote:
>>>> blueface-crm=# select oid, relfrozenxid from pg_class where relkind
>>>> in ('r', 't');
>>>>    oid   | relfrozenxid
>>>> ---------+--------------
>>>>  2570051 | 2947120794
>>> Whatever this table is, the freeze XID isn't getting updated for some
>>> reason...
>
> Doh.
>
>> This looks like a temporary relation:
>>
>>  temp4295 | 2947120794
>>
>> Is there a way we can manually force these to update?
>
> No. Only the session that created the temp table can vacuum it.
> Autovacuum skips temp tables. I guess the only thing you can do here is
> close that session.

How could I go about finding out which session created the temporary table?

So this could be a potential issue for autovacuum then, if, for example, our environment uses connection pooling and these connections stay persistent to the database. From Martijn's comments, I would assume that the connection pooling driver (ODBC/Npgsql etc.) should issue the "RESET SESSION" DDL after each transaction/query so that we don't have these temporary tables lying about indefinitely?

> I'm thinking that maybe we should make vac_update_datfrozenxid ignore
> temp tables. But this doesn't really work, because if we were to
> truncate pg_clog there would be tuples on the temp table marked with
> XIDs that are nowhere to be found. Maybe we could make some noise about
> it, though.
>
> This is a problem only in recent releases (8.2), because we started
> allowing the max freeze age to be configurable.

I think the max/min freeze age parameters we are using here are the default ones; I think I just uncommented them.
Alvaro Herrera wrote:
> Bruce McAlister wrote:
>> Alvaro Herrera wrote:
>>> Bruce McAlister wrote:
>>>> Martijn van Oosterhout wrote:
>>>>> Whatever this table is, the freeze XID isn't getting updated for
>>>>> some reason...
>>>
>>> Doh.
>>>
>>>> This looks like a temporary relation:
>>>>
>>>>  temp4295 | 2947120794
>>>>
>>>> Is there a way we can manually force these to update?
>>>
>>> No. Only the session that created the temp table can vacuum it.
>>> Autovacuum skips temp tables. I guess the only thing you can do here
>>> is close that session.
>>
>> How could I go about finding out which session created the temporary
>> table?
>
> Do this:
>
> select relname, nspname
> from pg_class join pg_namespace on (relnamespace = pg_namespace.oid)
> where pg_is_other_temp_schema(relnamespace);
>
> It returns something like
>
>  relname | nspname
> ---------+-----------
>  foo     | pg_temp_1
> (1 fila)
>
> So it is session with ID 1. You can then find out the PID with
>
> alvherre=# select pg_stat_get_backend_pid(1);
>  pg_stat_get_backend_pid
> -------------------------
>                    13416
> (1 fila)

okidoki, I tried this:

blueface-crm=# select relname, nspname from pg_class join pg_namespace on (relnamespace = pg_namespace.oid) where pg_is_other_temp_schema(relnamespace);
 relname  |  nspname
----------+------------
 temp4295 | pg_temp_63
(1 row)

blueface-crm=# select pg_stat_get_backend_pid(63);
 pg_stat_get_backend_pid
-------------------------
                    6661
(1 row)

blueface-crm=# select datname, client_addr, client_port from pg_stat_activity where procpid = '6661';
    datname     | client_addr | client_port
----------------+-------------+-------------
 whitelabel-ibb | 10.6.0.181  |        1587
(1 row)

Is that correct? If it is then I'm really confused: how can a connection to the whitelabel-ibb database create temporary tables in the blueface-crm database?

>> So this could be a potential issue for autovacuum then, if, for
>> example, our environment uses connection pooling and these connections
>> stay persistent to the database. From Martijn's comments, I would
>> assume that the connection pooling driver (ODBC/Npgsql etc.) should
>> issue the "RESET SESSION" DDL after each transaction/query so that we
>> don't have these temporary tables lying about indefinitely?
>
> Right -- but sadly RESET SESSION (actually called DISCARD because RESET
> was getting too overcrowded) is only available in 8.3.
>
> But you are right, temp tables and connection pools are a rather
> serious issue, it seems.

Is there a way we can actually work around this in the 8.2 release? Is there a way or a command that we could run manually over the connection that would clean up the session environment? Is it possible to do it programmatically?

>>> I'm thinking that maybe we should make vac_update_datfrozenxid ignore
>>> temp tables. But this doesn't really work, because if we were to
>>> truncate pg_clog there would be tuples on the temp table marked with
>>> XIDs that are nowhere to be found. Maybe we could make some noise
>>> about it, though.
>>>
>>> This is a problem only in recent releases (8.2), because we started
>>> allowing the max freeze age to be configurable.
>>
>> I think the max/min freeze age parameters we are using here are the
>> default ones; I think I just uncommented them.
>
> Humm, I would like to think that the default freeze age is 2 billion ...
> [checks the code] but no, you are right, the default is 200 million.

Is 2 billion a better value to set it to?
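[One 8.2-era workaround for the pooler question above, sketched as an assumption rather than anything the thread settled on: have the application, or the pool's return-to-pool hook, clean up explicitly, since DISCARD does not exist before 8.3. The table name here is hypothetical:

ABORT;                              -- end any transaction left open
DROP TABLE IF EXISTS temp_scratch;  -- repeat for each temp table the app creates
RESET ALL;                          -- put all settable parameters back to session defaults

This only works if the temp table names are known in advance, which is exactly the limitation that DISCARD ALL later removes.]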
Bruce, please make sure to keep the list copied on replies. I think there is an important bug here and I don't want it to get lost just because I lose track of it. I'm also crossposting to pgsql-hackers.

Bruce McAlister wrote:
> okidoki, I tried this:
>
> blueface-crm=# select relname, nspname from pg_class join pg_namespace
> on (relnamespace = pg_namespace.oid) where
> pg_is_other_temp_schema(relnamespace);
>  relname  |  nspname
> ----------+------------
>  temp4295 | pg_temp_63
> (1 row)
>
> blueface-crm=# select pg_stat_get_backend_pid(63);
>  pg_stat_get_backend_pid
> -------------------------
>                     6661
> (1 row)
>
> blueface-crm=# select datname, client_addr, client_port from
> pg_stat_activity where procpid = '6661';
>     datname     | client_addr | client_port
> ----------------+-------------+-------------
>  whitelabel-ibb | 10.6.0.181  |        1587
> (1 row)
>
> Is that correct? If it is then I'm really confused: how can a connection
> to the whitelabel-ibb database create temporary tables in the
> blueface-crm database?

Well, it certainly seems like this shouldn't be happening. Maybe the table belonged to a session that crashed, but the pg_class entry has not been cleaned up -- possibly because that backend has not connected to that particular database.

Maybe autovacuum itself could do something about cleaning up this kind of stuff on sight (--> dropping temp tables belonging to sessions that crash). I'm not sure though.

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
Alvaro Herrera <alvherre@commandprompt.com> writes:
> Well, it certainly seems like this shouldn't be happening. Maybe the
> table belonged to a session that crashed, but the pg_class entry has not
> been cleaned up -- possibly because that backend has not connected to
> that particular database.

Hm --- a crash would mean that the temp table would remain until some other session (a) connected to the same database, (b) using the same BackendId (sinval slot number), and (c) decided to create some temp tables of its own. So indeed it's not implausible that the table could hang around for a long time, especially if you were unlucky enough that the original creator had been using a very high BackendId slot. (Which pg_temp schema is this table attached to, anyway?)

> Maybe autovacuum itself could do something about cleaning up this kind
> of stuff on sight (--> dropping temp tables belonging to sessions that
> crash). I'm not sure though.

Yeah, we had better investigate some way to clean them up. It was never obvious before that it mattered to get rid of orphan temp tables, but I guess it does.

Another possibility is just to ignore temp tables while computing datfrozenxid. A temp table that survives for > 2G transactions is going to be trouble, but I'm not sure there's anything we can usefully do about it anyway --- certainly autovacuum has no power to fix it.

regards, tom lane
Tom Lane wrote:
> Alvaro Herrera <alvherre@commandprompt.com> writes:
>> Well, it certainly seems like this shouldn't be happening. Maybe the
>> table belonged to a session that crashed, but the pg_class entry has
>> not been cleaned up -- possibly because that backend has not connected
>> to that particular database.
>
> Hm --- a crash would mean that the temp table would remain until some
> other session (a) connected to the same database, (b) using the same
> BackendId (sinval slot number), and (c) decided to create some temp
> tables of its own. So indeed it's not implausible that the table could
> hang around for a long time, especially if you were unlucky enough that
> the original creator had been using a very high BackendId slot. (Which
> pg_temp schema is this table attached to, anyway?)

It's pg_temp_63. Backend 63 is running in another database. It seems perfectly possible that a backend connects to database A, creates a temp table, crashes, then connects to database B after restart and then keeps running there forever :-(

>> Maybe autovacuum itself could do something about cleaning up this kind
>> of stuff on sight (--> dropping temp tables belonging to sessions that
>> crash). I'm not sure though.
>
> Yeah, we had better investigate some way to clean them up. It was never
> obvious before that it mattered to get rid of orphan temp tables, but I
> guess it does.

Would it be enough to delete the tuple from pg_class? I guess that would leave behind the tuples in pg_attribute etc., but I don't see another way to drop it ... Maybe UPDATE to move it to the local temp schema and then DROP it? Or maybe it works to do DROP TABLE pg_temp_63.temp4295 as superuser ...? I haven't tried.

> Another possibility is just to ignore temp tables while computing
> datfrozenxid. A temp table that survives for > 2G transactions is going
> to be trouble, but I'm not sure there's anything we can usefully do
> about it anyway --- certainly autovacuum has no power to fix it.

Yes, I was going to suggest that, though it doesn't seem right.

--
Alvaro Herrera Valdivia, Chile ICBM: S 39º 49' 18.1", W 73º 13' 56.4"
"He who admits he is a coward has courage" (Fernandel)
Alvaro Herrera <alvherre@commandprompt.com> writes: > Tom Lane wrote: >> Yeah, we had better investigate some way to clean them up. It was never >> obvious before that it mattered to get rid of orphan temp tables, but I >> guess it does. > Would it be enough to delete the tuple from pg_class? No, you need a full DROP. I don't see that that's harder than removing only the pg_class tuple --- the problem in either case is to be sure it's OK. In particular, how to avoid a race condition against an incoming backend that adopts that BackendId? Worst-case, you could be deleting a temp table he just made. regards, tom lane
Tom Lane wrote:
> Alvaro Herrera <alvherre@commandprompt.com> writes:
>> Tom Lane wrote:
>>> Yeah, we had better investigate some way to clean them up. It was
>>> never obvious before that it mattered to get rid of orphan temp
>>> tables, but I guess it does.
>
>> Would it be enough to delete the tuple from pg_class?
>
> No, you need a full DROP. I don't see that that's harder than removing
> only the pg_class tuple --- the problem in either case is to be sure
> it's OK. In particular, how to avoid a race condition against an
> incoming backend that adopts that BackendId? Worst-case, you could be
> deleting a temp table he just made.

Oh, I was just thinking of a way for Bruce to get out of his current situation.

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
Alvaro Herrera <alvherre@commandprompt.com> writes: > Oh, I was just thinking in way for Bruce to get out of his current > situation. Oh, for that a manual "drop table" as superuser should work fine. regards, tom lane
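[Concretely, for the orphan identified earlier in the thread, that manual cleanup would look something like this -- a sketch; run as a superuser, and only after confirming via pg_stat_activity that no live backend owns the pg_temp_63 schema:

-- connected to blueface-crm:
DROP TABLE pg_temp_63.temp4295;

-- then vacuum so vac_update_datfrozenxid can advance the database's age:
VACUUM;
SELECT datname, age(datfrozenxid) FROM pg_database
WHERE datname = current_database();
]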
Martijn van Oosterhout wrote:
> On Thu, Jun 28, 2007 at 11:12:19AM +0100, Bruce McAlister wrote:
>> I just want to verify that I understand you correctly here: do you mean
>> that the temporary table is created by specific SQL (for example,
>> CREATE TEMP TABLE), some actions are performed on that temp table, and
>> then either you remove the temp table yourself, or the postmaster
>> cleans it up when you close the session/connection? What happens if
>> you're using connection pools? I mean, are those sessions deemed
>> "closed" after the queries complete, when the pool connections are
>> persistent?
>
> Yes, the temp table is private to the session and will be removed once
> the session closes, if not sooner. As for connection pools, IIRC there
> is a RESET SESSION command which should also get rid of the temporary
> tables.

The RESET SESSION command is available only in the 8.2 branch, isn't it? I tried to issue the command on an 8.1 server and the answer was:

ERROR: unrecognized configuration parameter "session"

Is there a patch available for the 8.1 version of PostgreSQL?

Thank you,
Denis
Denis Gasparin wrote:
> Martijn van Oosterhout wrote:
>> On Thu, Jun 28, 2007 at 11:12:19AM +0100, Bruce McAlister wrote:
>>> [temp table lifecycle / connection pool question, quoted in full above]
>>
>> Yes, the temp table is private to the session and will be removed once
>> the session closes, if not sooner. As for connection pools, IIRC there
>> is a RESET SESSION command which should also get rid of the temporary
>> tables.
>
> The RESET SESSION command is available only in the 8.2 branch, isn't it?
> I tried to issue the command on an 8.1 server and the answer was:
>
> ERROR: unrecognized configuration parameter "session"

I had a look on our installation: there is a "session" option that comes up if you type "RESET " and press TAB twice in psql. However, if you do try to run it, it comes back with:

blueface-crm=# RESET session ;
ERROR: unrecognized configuration parameter "session"

We're running PostgreSQL 8.2.4. Alvaro said it was only going to be available in 8.3. I hope they backport it to 8.2 though; 90% of our connections are via connection pooled drivers, so being able to reset the session after each transaction would be a handy way to ensure the environment is sane for the next transaction :)

> Is there a patch available for the 8.1 version of PostgreSQL?
>
> Thank you,
> Denis
Bruce McAlister wrote: > Denis Gasparin wrote: > > RESET SESSION command is available only in 8.2 branch, isn't it? > > I tried to issue the command in a 8.1 server and the answer was: ERROR: > > unrecognized configuration parameter "session" > > I had a look in our configuration and there is a "session" option that > comes up if you type "RESET " and press TAB twice. However, if you do > try to run it, it comes back with: > > blueface-crm=# RESET session ; > ERROR: unrecognized configuration parameter "session" If you continue tab-completing you'll notice that it's actually "reset session authorization" which is a different animal completely. > We're running PostgreSQL 8.2.4. Alvaro said it was only going to be > available in 8.3. I hope they backport it to 8.2 though, No way. > 90% of our connections are via connection pooled drivers, so being > able to reset the session after each transaction would be a handy way > to ensure the environment is sane for the next transaction :) Seems you'll be one of the early adopters of 8.3 solely for this reason ;-) -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support
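[For reference, once 8.3 is out the pooler-side reset wished for above collapses to a single statement; a sketch of what a pool would issue between clients (8.3 syntax, not available on the 8.2.4 installation discussed here):

DISCARD ALL;  -- resets GUCs, drops temp tables, deallocates prepared statements
]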
Is this item closed?

---------------------------------------------------------------------------

Alvaro Herrera wrote:
> Tom Lane wrote:
>> Alvaro Herrera <alvherre@commandprompt.com> writes:
>>> Tom Lane wrote:
>>>> Yeah, we had better investigate some way to clean them up. It was
>>>> never obvious before that it mattered to get rid of orphan temp
>>>> tables, but I guess it does.
>>
>>> Would it be enough to delete the tuple from pg_class?
>>
>> No, you need a full DROP. I don't see that that's harder than removing
>> only the pg_class tuple --- the problem in either case is to be sure
>> it's OK. In particular, how to avoid a race condition against an
>> incoming backend that adopts that BackendId? Worst-case, you could be
>> deleting a temp table he just made.
>
> Oh, I was just thinking of a way for Bruce to get out of his current
> situation.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
Bruce Momjian wrote: > > Is this item closed? No, it isn't. Please add a TODO item about it: * Prevent long-lived temp tables from causing frozen-Xid advancement starvation -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
Alvaro Herrera wrote: > Bruce Momjian wrote: > > > > Is this item closed? > > No, it isn't. Please add a TODO item about it: > * Prevent long-lived temp tables from causing frozen-Xid advancement > starvation Sorry, I don't understand this. Can you give me more text? Thanks. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
On 9/13/07, Bruce Momjian <bruce@momjian.us> wrote:
> Alvaro Herrera wrote:
>> Bruce Momjian wrote:
>>>
>>> Is this item closed?
>>
>> No, it isn't. Please add a TODO item about it:
>> * Prevent long-lived temp tables from causing frozen-Xid advancement
>>   starvation
>
> Sorry, I don't understand this. Can you give me more text? Thanks.

s/long-lived/orphaned/ ? And possibly this means better orphan detection and removal.

Andrew
Andrew Hammond wrote:
> On 9/13/07, Bruce Momjian <bruce@momjian.us> wrote:
>> Alvaro Herrera wrote:
>>> Bruce Momjian wrote:
>>>>
>>>> Is this item closed?
>>>
>>> No, it isn't. Please add a TODO item about it:
>>> * Prevent long-lived temp tables from causing frozen-Xid advancement
>>>   starvation
>>
>> Sorry, I don't understand this. Can you give me more text? Thanks.
>
> s/long-lived/orphaned/ ? And possibly this means better orphan
> detection and removal.

Added:

	o Prevent long-lived temporary tables from causing frozen-xid
	  advancement starvation

	  http://archives.postgresql.org/pgsql-general/2007-06/msg01645.php

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
Alvaro Herrera wrote: > Bruce Momjian wrote: > > > > Is this item closed? > > No, it isn't. Please add a TODO item about it: > * Prevent long-lived temp tables from causing frozen-Xid advancement > starvation Thanks. Added to TODO. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://postgres.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Bruce Momjian wrote:
>> No, it isn't. Please add a TODO item about it:
>> * Prevent long-lived temp tables from causing frozen-Xid advancement
>>   starvation

Can somebody explain this one to me? Because of our auditing technique, we have many LONG lived temp tables (one per pooled connection), so as long as the pool isn't disturbed, these temp tables can exist for a long time (weeks... months?)

Previous threads about our use of temp tables and autovacuum/xid issues:

http://archives.postgresql.org/pgsql-general/2007-01/msg00690.php
http://archives.postgresql.org/pgsql-general/2007-01/msg00691.php
Jeff Amiel wrote:
> Bruce Momjian wrote:
>>> No, it isn't. Please add a TODO item about it:
>>> * Prevent long-lived temp tables from causing frozen-Xid advancement
>>>   starvation
>
> Can somebody explain this one to me? Because of our auditing technique,
> we have many LONG lived temp tables (one per pooled connection), so as
> long as the pool isn't disturbed, these temp tables can exist for a
> long time (weeks... months?)

Hmm. The problem is that the system can't advance the frozen Xid for a database when there are temp tables that live for long periods of time. Autovacuum can't vacuum those tables; if the app vacuums them itself then there's no problem, but you can only vacuum them from the same session that created them.

The problem with a frozen Xid (datfrozenxid) that doesn't advance is Xid wraparound. The system eventually shuts itself down to prevent data loss, so if those temp tables live a really long life, you could be subject to that. (The immediate symptom is that pg_clog segments do not get recycled, which is not serious because it's just wasted disk space, and it's not a lot.)

> (previous threads about our use of temp tables and autovacuum/xid issues)
> http://archives.postgresql.org/pgsql-general/2007-01/msg00690.php
> http://archives.postgresql.org/pgsql-general/2007-01/msg00691.php

Sorry, I'm offline ATM and can't check those.

--
Alvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34J
"When I look at someone, I am more drawn to how they change than to who
they are" (J. Binoche)
Alvaro Herrera <alvherre@commandprompt.com> writes: >>>> No, it isn't. Please add a TODO item about it: >>>> * Prevent long-lived temp tables from causing frozen-Xid advancement >>>> starvation >> > Jeff Amiel wrote: >> Can somebody explain this one to me? because of our auditing technique, we >> have many LONG lived temp tables.....(one per pooled connection)...so as >> long as the pool isn't disturbed, these temp tables can exist for a long >> time (weeks....months?) > Hmm. The problem is that the system can't advance the frozen Xid for a > database when there are temp tables that live for long periods of time. > Autovacuum can't vacuum those tables; if the app vacuums them itself > then there's no problem, but you can only vacuum them in the same > session that creates it. I'm not convinced there's a huge problem here. Surely Jeff's app is going to either vacuum or truncate those temp tables occasionally; otherwise they'll bloat to the point of uselessness. Either action will fix the problem. The real issue is that the app has to remember to do that. Perhaps a better TODO item would be * Find a way to autovacuum temp tables though I admit I have no clue how to do that without giving up most of the performance advantages of temp tables. regards, tom lane
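[What "vacuum or truncate those temp tables occasionally" looks like from inside the owning session, sketched with a hypothetical table name:

-- run periodically by the pooled session that created the table:
TRUNCATE audit_scratch;   -- empties it and resets its relfrozenxid

-- or, if the contents must be kept:
VACUUM audit_scratch;     -- allows its relfrozenxid to advance

Either action keeps the long-lived temp table from pinning the database's datfrozenxid.]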
Tom Lane wrote:
> Alvaro Herrera <alvherre@commandprompt.com> writes:
>>>> No, it isn't. Please add a TODO item about it:
>>>> * Prevent long-lived temp tables from causing frozen-Xid advancement
>>>>   starvation
>
>> Jeff Amiel wrote:
>>> Can somebody explain this one to me? Because of our auditing
>>> technique, we have many LONG lived temp tables (one per pooled
>>> connection), so as long as the pool isn't disturbed, these temp
>>> tables can exist for a long time (weeks... months?)
>
>> Hmm. The problem is that the system can't advance the frozen Xid for a
>> database when there are temp tables that live for long periods of
>> time. Autovacuum can't vacuum those tables; if the app vacuums them
>> itself then there's no problem, but you can only vacuum them from the
>> same session that created them.
>
> I'm not convinced there's a huge problem here. Surely Jeff's app is
> going to either vacuum or truncate those temp tables occasionally;
> otherwise they'll bloat to the point of uselessness. Either action
> will fix the problem.
>
> The real issue is that the app has to remember to do that. Perhaps
> a better TODO item would be
> * Find a way to autovacuum temp tables
> though I admit I have no clue how to do that without giving up most
> of the performance advantages of temp tables.

TODO updated:

	* Prevent long-lived temporary tables from causing frozen-xid
	  advancement starvation

	  The problem is that autovacuum cannot vacuum them to set frozen
	  xids; only the session that created them can do that.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://postgres.enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +