Thread: Temporary tables under hot standby
A key barrier to migrations from trigger-based replication to WAL-based replication is the lack of temporary tables under hot standby. I'd like to close that gap; the changes needed will also reduce the master-side cost of temporary table usage. Here is a high-level design for your advice and comments. Much of this builds on ideas from past postings noted below.

Our temporary tables are cataloged and filled like permanent tables. This has the major advantage of making most code operate on tables with minimal regard for their relpersistence. It also brings disadvantages:

1. High catalog turnover in rapid create/drop workloads. Heavy temporary table users often need to REINDEX relation-oriented catalogs. Hot standby cannot assign OIDs or modify system catalogs at all.
2. Consumption of permanent XIDs for DML on the table. This increases COMMIT cost on the master and is a non-starter under hot standby.
3. autovacuum cannot VACUUM temporary tables, but their relfrozenxid values delay pg_clog truncation and can trigger a wraparound-prevention shutdown.
4. sinval traffic from every CREATE TEMP TABLE et al.
5. CREATE TEMP TABLE records AccessExclusiveLock in the global lock table.
6. We don't automatically drop temporary tables that existed at the point of a crash, because they look much like permanent tables.

To resolve points 2 and 3, let's change the XID values stored in temporary tables from regular TransactionId to LocalTransactionId. This entails an lxid counterpart for clog.c. Functions like GetCurrentTransactionId() and HeapTupleSatisfiesVisibility() will take a Relation argument to identify the XID type. One open question is whether to add conditional logic to functions like HeapTupleSatisfiesMVCC() or to have parallel implementations like HeapTupleSatisfiesMVCCLocal(). I lean toward the latter, perhaps with the help of some code generation.
I don't think a counterpart for pg_subtrans will be necessary; the backend knows its own XID tree, and the TRANSACTION_STATUS_SUB_COMMITTED interlock is superfluous with only one backend as reader and writer. I'm also thinking the local clog can live strictly in memory; a session that retains a temporary table across 2B local transactions can afford 512 MiB of RAM. With this change, VACUUM can ignore relfrozenxid of temporary tables when calculating a new datfrozenxid. This change can form an independent patch.

I do not see a clean behind-the-scenes fix for points 1, 4 and 5. We can resolve those by adding a new variety of temporary table, one coincidentally matching the SQL standard's notion of a temporary table. The developer will declare it once, after which all sessions observe it as an initially-empty table whose contents remain local to the session. Most relation catalog entries, including all OIDs, are readily sharable among sessions. The exceptions are relpages, reltuples, relallvisible, relfrozenxid, and pg_statistic rows. I will handle the pg_class columns by introducing new backend APIs abstracting them. Those APIs will consult the relcache for permanent tables and a local-memory hash for temporary tables. For statistics, add a new catalog pg_temp_statistic, an inheritance child of pg_statistic and itself one of these new-variety temporary tables.

Past discussions have raised the issue of interaction between commands like ALTER TABLE and sessions using the new-variety temporary table. As a first cut, let's keep this simple and have ongoing use of the table block operations requiring AccessExclusiveLock. Note that you can always just make a new temporary table with a different name to deploy a change quickly. Implement this with a heavyweight lock having a novel life cycle. When a session first takes an ordinary relation lock on the table, acquire the longer-term lock and schedule it for release on transaction abort.
On TRUNCATE, schedule a release on transaction commit. Of course, also release the lock at session end.

For new-variety temporary tables, change file names from "relfilenode[_fork]" to "relfilenode[_fork].pid.localnode". During crash recovery, delete all files conforming to that pattern for relfilenodes of known temporary tables. This also lets backends share pg_class.relfilenode. The "localnode" field would distinguish multiple generations of a table across VACUUM FULL, CLUSTER, and TRUNCATE. We could build on this strategy to safely resolve point 6 for the existing kind of temporary table, but that entails enough other details to probably not mix it into the same patch.

A third patch will permit the following commands in read-only transactions, where they will throw an error if the subject is not a temporary table:

  INSERT
  UPDATE
  DELETE
  COPY ... FROM
  TRUNCATE
  ANALYZE
  VACUUM (including VACUUM FULL)
  CLUSTER (without USING clause)
  REINDEX

I considered whether to instead separate the set of commands allowed in a read-only transaction from the set allowed under hot standby. This proposal is closer to the SQL standard, which explicitly allows INSERT, UPDATE and DELETE on temporary tables during read-only transactions. Only the first five commands are essential; support for the rest could wait for follow-on patches.

Concerning everyone's favorite topic, how to name the new type of table, I liked Tom's proposal[1] to make CREATE TEMP TABLE retain current behavior and have CREATE GLOBAL TEMP TABLE and/or CREATE LOCAL TEMP TABLE request the new SQL-standard variety. (I'd vote for using CREATE GLOBAL and retaining CREATE LOCAL for future expansion.) As he mentions, to get there, we'd ideally start by producing a warning instead of silently accepting GLOBAL as a noise word. Should we put such a warning into 9.2?

How important is support for VACUUM on these tables under hot standby? The alternative is to fail when a session retains a temporary table across 2B local transactions.
I do not currently see any challenges sufficient to motivate not supporting VACUUM, but it might be a useful simplification to keep in mind. What about ANALYZE support; how important is the ability to collect statistics on temporary tables? Again, I tentatively expect to support it regardless of the answer.

Key past threads:
http://archives.postgresql.org/message-id/7903.1050417344@sss.pgh.pa.us
http://archives.postgresql.org/message-id/162867790707011431u71e53543x19e64e5bb160b124@mail.gmail.com
http://archives.postgresql.org/message-id/162867790904271344s1ec96d90j6cde295fdcc7806f@mail.gmail.com
http://archives.postgresql.org/message-id/u2o603c8f071004231952i36642ae6u9d6a7eae6eb6ff32@mail.gmail.com
http://archives.postgresql.org/message-id/BANLkTin1Gha0SS77E64jczPfAPn6Oxb8hQ%40mail.gmail.com

Thanks,
nm

[1] http://archives.postgresql.org/message-id/5422.1240936705@sss.pgh.pa.us
On Wed, Apr 25, 2012 at 4:55 AM, Noah Misch <noah@leadboat.com> wrote:

> 1. High catalog turnover in rapid create/drop workloads. Heavy temporary
> table users often need to REINDEX relation-oriented catalogs. Hot standby
> cannot assign OIDs or modify system catalogs at all.
> 4. sinval traffic from every CREATE TEMP TABLE et al.
> 5. CREATE TEMP TABLE records AccessExclusiveLock in the global lock table.

> I do not see a clean behind-the-scenes fix for points 1, 4 and 5. We can
> resolve those by adding a new variety of temporary table, one coincidentally
> matching the SQL standard's notion of a temporary table. The developer will
> declare it once, after which all sessions observe it as an initially-empty
> table whose contents remain local to the session.

The rest of your post is very good and I'm excited. This part doesn't work for me.

I don't see how introducing a new type of temp table solves this problem. How would the developer declare this in HS? How would it then be globally visible without using global OIDs, causing sinval and using global locks? This feels like a suggestion from somewhere else grafted onto your proposal. I'm not against introducing a new type of temp table, I just think it is orthogonal to the OT.

For me, the solutions are

4. Invent a "linval" - an invalidation that always stays local
5. Invent a LocalRelationLock that takes AccessExclusiveLock but never goes to the global lock table.

So those aspects stay completely in local memory.

1. is a little harder. I suggest we have a range of say 16384 OIDs reserved for use by temporary relations. If that range is used up we then move to using real global OIDs, though that is code we'll almost never need, so it would be acceptable to restrict HS sessions to only ever use 16384 temp tables concurrently in one session.

--
Simon Riggs
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Wed, Apr 25, 2012 at 4:55 AM, Noah Misch <noah@leadboat.com> wrote:

> A third patch will permit the following commands in read-only transactions,
> where they will throw an error if the subject is not a temporary table:
...
> VACUUM (including VACUUM FULL)
> CLUSTER (without USING clause)
> REINDEX

Those commands have very low user-visible effect on temp tables, so I suggest we implement them as no-op commands in HS. When everything else is done and dusted, they might be worth returning to, but I'd hope you won't spend much time on those anytime soon.

--
Simon Riggs
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
2012/4/25 Simon Riggs <simon@2ndquadrant.com>:

> On Wed, Apr 25, 2012 at 4:55 AM, Noah Misch <noah@leadboat.com> wrote:
>
>> I do not see a clean behind-the-scenes fix for points 1, 4 and 5. We can
>> resolve those by adding a new variety of temporary table, one coincidentally
>> matching the SQL standard's notion of a temporary table. The developer will
>> declare it once, after which all sessions observe it as an initially-empty
>> table whose contents remain local to the session.

[..]

> I don't see how introducing a new type of temp table solves this
> problem. How would the developer declare this in HS? How would it then
> be globally visible without using global OIDs, causing sinval and
> using global locks?

The declarative creation of a “standard-like” temporary table only happens once (it is part of the schema). Using (e.g. putting stuff in and executing queries on) such tables can happen on the standby without the master having to know. Therefore, I don't see the problem. Just schedule issuing the creation along with any other schema changes on the master.

Nicolas

--
A. Because it breaks the logical sequence of discussion.
Q. Why is top posting bad?
On Wed, Apr 25, 2012 at 9:37 AM, Nicolas Barbier <nicolas.barbier@gmail.com> wrote:

> The declarative creation of a “standard-like” temporary table only
> happens once (it is part of the schema). Using (e.g. putting stuff in
> and executing queries on) such tables can happen on the standby
> without the master having to know.

So you are saying it is OK to not be able to *create* them on HS, just *use* pre-defined tables?

That's almost useless IMHO.

Applications expect to be able to do this all in the same transaction on one session:

CREATE TEMP TABLE x;
...DML commands...
SELECT ... FROM x;

--
Simon Riggs
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
2012/4/25 Simon Riggs <simon@2ndquadrant.com>:

> So you are saying it is OK to not be able to *create* them on HS, just
> *use* pre-defined tables?
>
> That's almost useless IMHO.
>
> Applications expect to be able to do this all in the same transaction
> on one session
> CREATE TEMP TABLE x;
> ...DML commands...
> SELECT ... FROM x;

That’s not how standard-like temporary tables work, they are supposed to be declared beforehand. That makes sense if you consider the schema and the set of database-using applications as one. I assume that wanting to define applications independently from the database schema is the reason of existence for the PG-like temporary transactions.

The way standard-like temporary tables work is exactly why I assume Noah proposes to implement them: because they work nicely with HS.

Nicolas

--
A. Because it breaks the logical sequence of discussion.
Q. Why is top posting bad?
2012/4/25 Nicolas Barbier <nicolas.barbier@gmail.com>:

> is the reason of existence for the PG-like temporary transactions.

s/transactions/tables/

Nicolas

--
A. Because it breaks the logical sequence of discussion.
Q. Why is top posting bad?
On Wed, Apr 25, 2012 at 11:08 AM, Nicolas Barbier <nicolas.barbier@gmail.com> wrote:

> 2012/4/25 Simon Riggs <simon@2ndquadrant.com>:
>
>> So you are saying it is OK to not be able to *create* them on HS, just
>> *use* pre-defined tables?
>>
>> That's almost useless IMHO.
>>
>> Applications expect to be able to do this all in the same transaction
>> on one session
>> CREATE TEMP TABLE x;
>> ...DML commands...
>> SELECT ... FROM x;
>
> That’s not how standard-like temporary tables work, they are supposed
> to be declared beforehand. That makes sense if you consider the schema
> and the set of database-using applications as one. I assume that
> wanting to define applications independently from the database schema
> is the reason of existence for the PG-like temporary transactions.
>
> The way standard-like temporary tables work is exactly why I assume
> Noah proposes to implement them: because they work nicely with HS.

Well, following a standard that no other major DBMS has followed is not great, especially if it leads to a non-useful feature.

Many software products generate CREATE TEMP TABLE statements dynamically. This design would prevent ALL of them from working, as well as preventing all current programs from using temp tables in the currently accepted way, so the whole concept is very regrettably flawed.

I very much support Noah's work to "make temp tables work on hot standby", but we must solve the main problem, not just implement "make a completely new kind of temp table work on hot standby". I have no objection to "make a new kind of temp table", but that does not solve the "make temp tables work on hot standby" problem.

--
Simon Riggs
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Tue, Apr 24, 2012 at 11:55 PM, Noah Misch <noah@leadboat.com> wrote:

> A key barrier to migrations from trigger-based replication to WAL-based
> replication is the lack of temporary tables under hot standby. I'd like to
> close that gap; the changes needed will also reduce the master-side cost of
> temporary table usage. Here is a high-level design for your advice and
> comments. Much of this builds on ideas from past postings noted below.
>
> Our temporary tables are cataloged and filled like permanent tables. This has
> the major advantage of making most code operate on tables with minimal regard
> for their relpersistence. It also brings disadvantages:
>
> 1. High catalog turnover in rapid create/drop workloads. Heavy temporary
> table users often need to REINDEX relation-oriented catalogs. Hot standby
> cannot assign OIDs or modify system catalogs at all.
> 2. Consumption of permanent XIDs for DML on the table. This increases COMMIT
> cost on the master and is a non-starter under hot standby.
> 3. autovacuum cannot VACUUM temporary tables, but their relfrozenxid values
> delay pg_clog truncation and can trigger a wraparound-prevention shutdown.
> 4. sinval traffic from every CREATE TEMP TABLE et al.
> 5. CREATE TEMP TABLE records AccessExclusiveLock in the global lock table.
> 6. We don't automatically drop temporary tables that existed at the point of a
> crash, because they look much like permanent tables.

#6 is already fixed in 9.1. Temporary tables now have names like tBACKENDID_RELFILENODE, and are cleaned up during start-up in exactly the same way you propose to do it further down.

> To resolve points 2 and 3, let's change the XID values stored in temporary
> tables from regular TransactionId to LocalTransactionId. This entails an lxid
> counterpart for clog.c. Functions like GetCurrentTransactionId() and
> HeapTupleSatisfiesVisibility() will take a Relation argument to identify the
> XID type.
> One open question is whether to add conditional logic to functions
> like HeapTupleSatisfiesMVCC() or to have parallel implementations like
> HeapTupleSatisfiesMVCCLocal(). I lean toward the latter, perhaps with the
> help of some code generation. I don't think a counterpart for pg_subtrans
> will be necessary; the backend knows its own XID tree, and the
> TRANSACTION_STATUS_SUB_COMMITTED interlock is superfluous with only one
> backend as reader and writer. I'm also thinking the local clog can live
> strictly in memory; a session that retains a temporary table across 2B local
> transactions can afford 512 MiB of RAM. With this change, VACUUM can ignore
> relfrozenxid of temporary tables when calculating a new datfrozenxid. This
> change can form an independent patch.

Agreed. If you can pull it off, this will be a nice improvement regardless of what happens with the rest of this, and it makes sense to do it as a separate patch. I don't yet have a strong opinion on what to do with the HeapTupleSatisfies* functions, but I suspect you're correct in thinking that separate functions are better. For one thing, those functions are very much performance-critical, so introducing extra branches is something to avoid.

Maybe this is a silly idea, but if you're thinking about creating a local XID space and a global XID space, it might be a good idea to also make allowance for an "unlogged" XID space - that is, an XID space that is global to all backends but need not survive crashes. This would potentially allow unlogged tables to be used in HS mode. I would expect that you probably don't want to go as far as actually trying to make this work as part of your current project, but maybe as you're structuring the code it would be worth considering the possibility that we'll eventually want >2 XID spaces, either for this or other reasons.

> I do not see a clean behind-the-scenes fix for points 1, 4 and 5.
> We can
> resolve those by adding a new variety of temporary table, one coincidentally
> matching the SQL standard's notion of a temporary table. The developer will
> declare it once, after which all sessions observe it as an initially-empty
> table whose contents remain local to the session. Most relation catalog
> entries, including all OIDs, are readily sharable among sessions. The
> exceptions are relpages, reltuples, relallvisible, relfrozenxid, and
> pg_statistic rows. I will handle the pg_class columns by introducing new
> backend APIs abstracting them. Those APIs will consult the relcache for
> permanent tables and a local-memory hash for temporary tables. For
> statistics, add a new catalog pg_temp_statistic, an inheritance child of
> pg_statistic and itself one of these new-variety temporary tables.

With respect to problem #5, I've been wondering if we couldn't just forget about taking AccessExclusiveLock when first creating a table (temporary or permanent). Unless and until the transaction commits, nobody can see the catalog entry anyway, so nobody else is going to attempt to take a conflicting lock. So why bother? Maybe there's some reason here that's not obvious to me. If not, it might be worth doing on general principle independent of this project.

Problem #4 is a little stickier, but I think also solvable. Basically all cross-backend access to temp tables ought to be prohibited anyway, but it currently isn't, because there's at least one cross-backend operation that we categorically need to support: DROP. Separating the global and local XID spaces would help with that, but not really all that much, since if a session manages to exit without cleaning up after itself, we'd like someone to be able to do that later (and maybe more aggressively than what we do now, which is to only clean things up after 2B txns or so).
Nevertheless it's currently possible for backend A to build a relcache entry for backend B's temporary relation, and it might be that if we prohibited that and maybe a few other things we could see our way clear to removing most or all of the sinval traffic here. Still, it may not really help that much without a solution to problem #1.

On that note, I had a thought in the past that it might be possible to solve problem #1 by using global temp tables as system catalogs - that is, for each type of system catalog that relates to table creation, you'd have a permanent catalog and a global temp catalog. So if someone wants to create a temporary table of the existing variety on the standby, you can make all the entries in the global-temp version of pg_class, pg_attribute, etc. However, this seems extremely difficult to manage in general - there's a lot of code churn involved, and also possible temporary -> permanent dependencies; for example, the temporary table might have a pg_attrdef entry that needs to depend on a non-temporary pg_proc entry. That's tricky to solve on the master and even trickier to solve in HS operation. So I'm inclined to agree with you that it makes more sense to just aim to support global temp tables in HS mode, and if we want to beat our head against the brick wall of making regular temp tables work there eventually, that can be a later project.

I would suggest just not worrying about the statistical stuff for the first version of the patch. Assume that it's adequate to have one set of statistics for all copies of the table, both pg_statistic entries and the stats-related stuff in pg_class (relpages, reltuples, relallvisible). It's not unreasonable to think that the table will be used in broadly similar ways across all backends, so in some cases this might actually give better performance than what you're proposing to do. If not, you can solve that problem in a future patch.
This project is hard enough without throwing that problem onto the pile, and I really think that if you do throw it on the pile you're going to be adding a lot of complexity and code churn that isn't really necessary for a v1.

> Past discussions have raised the issue of interaction between commands like
> ALTER TABLE and sessions using the new-variety temporary table. As a first
> cut, let's keep this simple and have ongoing use of the table block operations
> requiring AccessExclusiveLock. Note that you can always just make a new
> temporary table with a different name to deploy a change quickly. Implement
> this with a heavyweight lock having a novel life cycle. When a session first
> takes an ordinary relation lock on the table, acquire the longer-term lock and
> schedule it for release on transaction abort. On TRUNCATE, schedule a release
> on transaction commit. Of course, also release the lock at session end.

I'm not sure I believe this will work, but maybe I'm just not understanding it.

> For new-variety temporary tables, change file names from "relfilenode[_fork]"
> to "relfilenode[_fork].pid.localnode". During crash recovery, delete all files
> conforming to that pattern for relfilenodes of known temporary tables. This
> also lets backends share pg_class.relfilenode. The "localnode" field would
> distinguish multiple generations of a table across VACUUM FULL, CLUSTER, and
> TRUNCATE. We could build on this strategy to safely resolve point 6 for the
> existing kind of temporary table, but that entails enough other details to
> probably not mix it into the same patch.

In lieu of including localnode in the filename, I would suggest that for a global temp table, we set relfilenode = 0, and let each backend assign one (and reassign new ones) whenever it feels like it. The mapping from reloid -> relfilenode can be kept in backend-local memory, or you can keep a <backend id, rel oid> -> relfilenode mapping in a separate relation fork.
The latter would have the additional advantage of simplifying cleanup and might also be helpful in detecting when the table is or is not in use by multiple backends.

> How important is support for VACUUM on these tables under hot standby? The
> alternative is to fail when a session retains a temporary table across 2B
> local transactions. I do not currently see any challenges sufficient to
> motivate not supporting VACUUM, but it might be a useful simplification to
> keep in mind. What about ANALYZE support; how important is the ability to
> collect statistics on temporary tables? Again, I tentatively expect to
> support it regardless of the answer.

I think it's probably pretty important to support VACUUM, because even ignoring wraparound considerations, not vacuuming tends to cause performance to suck. I think ANALYZE is less important for the reasons stated above.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Wed, Apr 25, 2012 at 4:49 PM, Robert Haas <robertmhaas@gmail.com> wrote:

>> How important is support for VACUUM on these tables under hot standby? The
>> alternative is to fail when a session retains a temporary table across 2B
>> local transactions. I do not currently see any challenges sufficient to
>> motivate not supporting VACUUM, but it might be a useful simplification to
>> keep in mind. What about ANALYZE support; how important is the ability to
>> collect statistics on temporary tables? Again, I tentatively expect to
>> support it regardless of the answer.
>
> I think it's probably pretty important to support VACUUM, because even
> ignoring wraparound considerations, not vacuuming tends to cause
> performance to suck. I think ANALYZE is less important for the
> reasons stated above.

ANALYZE is essential for temp tables in many cases... not sure what the "reasons stated above" were, I can't resolve that reference.

I've never seen VACUUM used on a temp table. Perhaps we need it for edge cases, but either way ISTM to be low priority. If people find temp tables restrictive they can just use unlogged tables instead.

--
Simon Riggs
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Wed, Apr 25, 2012 at 12:08 PM, Simon Riggs <simon@2ndquadrant.com> wrote:

> On Wed, Apr 25, 2012 at 4:49 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>
>>> How important is support for VACUUM on these tables under hot standby? The
>>> alternative is to fail when a session retains a temporary table across 2B
>>> local transactions. I do not currently see any challenges sufficient to
>>> motivate not supporting VACUUM, but it might be a useful simplification to
>>> keep in mind. What about ANALYZE support; how important is the ability to
>>> collect statistics on temporary tables? Again, I tentatively expect to
>>> support it regardless of the answer.
>>
>> I think it's probably pretty important to support VACUUM, because even
>> ignoring wraparound considerations, not vacuuming tends to cause
>> performance to suck. I think ANALYZE is less important for the
>> reasons stated above.
>
> ANALYZE is essential for temp tables in many cases... not sure what
> the "reasons stated above" were, I can't resolve that reference.

My theory is that users of a global temp table will have similar-enough usage patterns that a set of statistics that is good enough for one user will be good enough for all of them. That might not be true in all cases, but I think it will simplify things quite a bit to assume it true for purposes of an initial implementation. And as I noted, in some cases it might be a clear improvement: right now, after creating a temp table, you've got to analyze it or you'll just get the default statistics, which figure to be terrible. Inheriting the statistics left over from the last guy's analyze figures to be significantly superior.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Wed, Apr 25, 2012 at 12:18 PM, Robert Haas <robertmhaas@gmail.com> wrote:

> On Wed, Apr 25, 2012 at 12:08 PM, Simon Riggs <simon@2ndquadrant.com> wrote:
>> On Wed, Apr 25, 2012 at 4:49 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>>>> How important is support for VACUUM on these tables under hot standby? The
>>>> alternative is to fail when a session retains a temporary table across 2B
>>>> local transactions. I do not currently see any challenges sufficient to
>>>> motivate not supporting VACUUM, but it might be a useful simplification to
>>>> keep in mind. What about ANALYZE support; how important is the ability to
>>>> collect statistics on temporary tables? Again, I tentatively expect to
>>>> support it regardless of the answer.
>>>
>>> I think it's probably pretty important to support VACUUM, because even
>>> ignoring wraparound considerations, not vacuuming tends to cause
>>> performance to suck. I think ANALYZE is less important for the
>>> reasons stated above.
>>
>> ANALYZE is essential for temp tables in many cases... not sure what
>> the "reasons stated above" were, I can't resolve that reference.
>
> My theory is that users of a global temp table will have
> similar-enough usage patterns that a set of statistics that is good
> enough for one user will be good enough for all of them. That might
> not be true in all cases, but I think it will simplify things quite a
> bit to assume it true for purposes of an initial implementation. And
> as I noted, in some cases it might be a clear improvement: right now,
> after creating a temp table, you've got to analyze it or you'll just
> get the default statistics, which figure to be terrible. Inheriting
> the statistics left over from the last guy's analyze figures to be
> significantly superior.

Oh, we're talking about different things, and I'm slightly confused.
Yes, we need to support ANALYZE; what we might not need to support, at least initially, is every user of a global temp table having their own SEPARATE copy of the table statistics.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Wed, Apr 25, 2012 at 5:19 PM, Robert Haas <robertmhaas@gmail.com> wrote:

> Oh, we're talking about different things, and I'm slightly confused.
>
> Yes, we need to support ANALYZE; what we might not need to support, at
> least initially, is every user of a global temp table having their own
> SEPARATE copy of the table statistics.

Yes, we are. Global Temp Tables won't solve the "Works on HS" problem, so we'd better decide fairly quickly which use case we are addressing, and why. ISTM Global Temp Tables is more an Oracle compatibility issue than a problem PostgreSQL users have.

...I have zero basis for deciding whether what you say about Global Temp Tables is useful or not.

--
Simon Riggs
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Wed, Apr 25, 2012 at 12:30 PM, Simon Riggs <simon@2ndquadrant.com> wrote:

> On Wed, Apr 25, 2012 at 5:19 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>
>> Oh, we're talking about different things, and I'm slightly confused.
>>
>> Yes, we need to support ANALYZE; what we might not need to support, at
>> least initially, is every user of a global temp table having their own
>> SEPARATE copy of the table statistics.
>
> Yes, we are. Global Temp Tables won't solve the "Works on HS" problem,
> so we'd better decide fairly quickly which use case we are addressing,
> and why. ISTM Global Temp Tables is more an Oracle compatibility issue
> than a problem PostgreSQL users have.
>
> ...I have zero basis for deciding whether what you say about Global
> Temp Tables is useful or not.

Well, Noah presented a pretty good outline of how to make global temp tables work under Hot Standby. As Noah already said, making regular temporary tables work under Hot Standby is far more difficult. I think he's right. I'd rather see us get global temp tables working under HS than insist we have to have regular temp tables working under HS and ultimately end up with nothing. Even getting global temp tables working under HS is probably going to require an entire development cycle, maybe two. So raising the bar still higher seems rather self-defeating to me. Half a loaf is better than none.

In the interest of full disclosure, I freely admit that global temporary tables would also be a neat Oracle compatibility feature, and I do work for a company that sells Oracle compatibility products based on PostgreSQL, so there are surely some reasons for me to like that, but AFAICT they aren't all *that* heavily used by most Oracle users either, which is why I haven't been able to justify doing this project before now.
The important point here as I see it is that tables of any flavor require catalog entries, and creating and destroying catalog entries on a standby server does not seem tractable, so if we want to have writable tables of any flavor on Hot Standby sometime in the next year or two, we should pick a design that doesn't require that. What Noah has proposed seems to me to be by far the simplest way of making that happen, so I think his design is spot-on. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On Wed, Apr 25, 2012 at 5:53 PM, Robert Haas <robertmhaas@gmail.com> wrote: >> Yes, we are. Global Temp Tables won't solve the "Works on HS" problem, >> so we'd better decide fairly quickly which use case we are addressing, >> and why. ISTM Global Temp Tables is more an Oracle compatibility issue >> than a problem PostgreSQL users have. >> > Well, Noah presented a pretty good outline of how to make global temp > tables work under Hot Standby. As Noah already said, making regular > temporary tables work under Hot Standby is far more difficult. I > think he's right. I'd rather see us get global temp tables working > under HS than insist we have to have regular temp tables working under > HS and ultimately end up with nothing. Even getting global temp > tables working under HS is probably going to require an entire > development cycle, maybe two. So raising the bar still higher seems > rather self-defeating to me. Half a loaf is better than none. ... > What Noah has proposed seems to me to be by far > the simplest way of making that happen, so I think his design is > spot-on. Noah's design is spot-on for Global Temp tables, I agree. I have no objection at all to an implementation of GTTs. However, it is a fallacy that this is a good solution for using temp tables on HS. I think the wish to enhance Oracle compatibility is making some wishful thinking happen with regard to how useful this is going to be. We need to spend just as much time considering the utility of our work as we do spending time on the quality of the implementation, otherwise it's just well-implemented shelfware. I don't think implementing temp tables on HS is more complex than this proposal, it's just orthogonal. There are some common aspects, such as making local xids work, but that is only needed for a small fraction of the normal temp table case. So it appears to me that GTTs as proposed are actually harder to implement and not a stepping stone in a linear sense. 
So you could equally argue that the requirement to bring GTTs into the picture also risks us getting nothing and that the "half a loaf" idea means GTTs should be excluded. Certainly, trying to do two orthogonal tasks at once puts both at risk. So I don't accept the proposition that "GTTs are a useful implementation route for temp tables on HS", unless we're talking about a "scenic route". -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
On Wed, Apr 25, 2012 at 1:31 PM, Simon Riggs <simon@2ndquadrant.com> wrote: > However, it is a fallacy that this is a good solution for using temp > tables on HS. I think the wish to enhance Oracle compatibility is > making some wishful thinking happen with regard to how useful this is > going to be. We need to spend just as much time considering the > utility of our work as we do spending time on the quality of the > implementation, otherwise it's just well-implemented shelfware. Well, like I say, if you want to use locally-defined temp tables on HS, you have to somehow solve the problem of catalog entries, and nothing in your email looks like a proposal for how to do that. I've come up with one design, which I sketched in my original response, but it relies on creating some new system catalogs that are themselves GTTs, and it's also hideously complicated. If you or anyone else can come up with a better design, great, but so far no one has. It's no skin off my neck if this project gets done in a way that bypasses the need for GTTs; I just don't have a credible proposal for how to do that, and Noah stated that he doesn't either. I do agree that what Noah's proposing to implement is shooting at a pretty narrow target, but I don't think it's so narrow that we wouldn't commit it if he's willing to do the work to implement it. All of the infrastructure that he's proposing to create seems to me to have plausible other uses, so even if the immediate feature doesn't bring a lot of benefit there's every reason to suppose that it will pave the way for further improvements down the line. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On Wed, Apr 25, 2012 at 11:53 AM, Robert Haas <robertmhaas@gmail.com> wrote: > In the interest of full disclosure, I freely admit that global > temporary tables would also be a neat Oracle compatibility feature, > and I do work for a company that sells Oracle compatibility products > based on PostgreSQL, so there are surely some reasons for me to like > that, but AFAICT they aren't all *that* heavily used by most Oracle > users either, which is why I haven't been able to justify doing this > project before now. 
I don't know how GTTs play inside the Oracle stack such that they aren't super popular, but if they work in the standby they will quickly become a killer feature. IMNSHO it's annoying but acceptable to be forced to define them into the permanent schema. Lack of temp tables on the standby is a popular question/complaint on irc and in most cases the proposal would satisfactorily address the problem. merlin
On Wed, Apr 25, 2012 at 5:46 PM, Merlin Moncure <mmoncure@gmail.com> wrote: > I don't know how GTTs play inside the Oracle stack such that they > aren't super popular, but if they work in the standby they will > quickly become a killer feature. IMNSHO it's annoying but acceptable > to be forced to define them into the permanent schema. Lack of temp > tables on the standby is a popular question/complaint on irc and in > most cases the proposal would satisfactorily address the problem. The problem with using GTTs for this is, IMHO, that you need to know beforehand what your temp table will look like. I have seen applications that use the same name (ie: temp1, t1, tt or t_temp) for all or almost all temp tables and, of course, all those have different structures. I have also seen temp tables created dynamically based on a query (which has more or less columns based on some criteria). In any case, this means that to be able to use GTTs on HS for these applications, the apps need to be fixed to ensure all temp tables have different names throughout the app, and you also need to ensure that all queries that create temp tables have a fixed set of columns. Finally, you will need to modify apps to remove all CREATE TEMP TABLE because the tables already exist. And I have not mentioned the problem I will have if I need different behaviour for ON COMMIT (oh! I just did). So yes, you can work around things to make this usable enough to fix the problem of "temp tables in HS", but it is not transparent (unless you come from Oracle; most DBs use local temp tables just as Postgres does) and it is certainly not an ideal solution... FWIW, no one that I know will want to do those "fixes" in their app. -- Jaime Casanova www.2ndQuadrant.com Professional PostgreSQL: Soporte 24x7 y capacitación
On Wed, Apr 25, 2012 at 10:10:31AM +0100, Simon Riggs wrote: > So you are saying it is OK to not be able to *create* them on HS, just > *use* pre-defined tables? I estimated that much to cover a worthy portion of the need, yes. > That's almost useless IMHO. Based on the range of assessments spanning your "almost useless" to Merlin's "killer feature", I gather that its utility is exceptionally site-specific.
On Wed, Apr 25, 2012 at 11:49:23AM -0400, Robert Haas wrote: > On Tue, Apr 24, 2012 at 11:55 PM, Noah Misch <noah@leadboat.com> wrote: > > Our temporary tables are cataloged and filled like permanent tables. This has > > the major advantage of making most code operate on tables with minimal regard > > for their relpersistence. It also brings disadvantages: > > > > 1. High catalog turnover in rapid create/drop workloads. Heavy temporary > > table users often need to REINDEX relation-oriented catalogs. Hot standby > > cannot assign OIDs or modify system catalogs at all. > > 2. Consumption of permanent XIDs for DML on the table. This increases COMMIT > > cost on the master and is a non-starter under hot standby. > > 3. autovacuum cannot VACUUM temporary tables, but their relfrozenxid values > > delay pg_clog truncation and can trigger a wraparound-prevention shutdown. > > 4. sinval traffic from every CREATE TEMP TABLE et al. > > 5. CREATE TEMP TABLE records AccessExclusiveLock in the global lock table. > > 6. We don't automatically drop temporary tables that existed at the point of a > > crash, because they look much like permanent tables. > > #6 is already fixed in 9.1. Temporary tables now have names like > tBACKENDID_RELFILENODE, and are cleaned up during start-up in exactly > the same way you propose to do it further down. 

Ah, so it is. That simplifies things a bit. 

> Maybe this is a silly idea, but if you're thinking about creating a > local XID space and a global XID space, it might be a good idea to > also make allowance for an "unlogged" XID space - that is, an XID > space that is global to all backends but need not survive crashes. > This would potentially allow unlogged tables to be used in HS mode. I > would expect that you probably don't want to go as far as actually > trying to make this work as part of your current project, but maybe as > you're structuring the code it would be worth considering the > possibility that we'll eventually want >2 XID spaces, either for this > or other reasons. 

Agreed. Plenty of the details would change (located in shared memory, locking, persisted on clean shutdown, etc.), so I'm not sure how much actual code could remain in common. If I encounter design decisions where one choice seems to help cover this other use in the future, I'll keep it in mind. 

> With respect to problem #5, I've been wondering if we couldn't just > forget about taking AccessExclusiveLock when first creating a table > (temporary or permanent). Unless and until the transaction commits, > nobody can see the catalog entry anyway, so nobody else is going to > attempt to take a conflicting lock. So why bother? Maybe there's > some reason here that's not obvious to me. If not, it might be worth > doing on general principle independent of this project. 

Sounds safe, offhand. I do suspect the cost of the lock is peanuts compared to the cost of inserting catalog entries, though, so I wouldn't anticipate a measurable improvement from that change in isolation. 

> On that note, I had a thought in the past that it might be possible to > solve problem #1 by using global temp tables as system catalogs - > that is, for each type of system catalog that relates to table > creation, you'd have a permanent catalog and a global temp catalog. > So if someone wants to create a temporary table of the existing > variety on the standby, you can make all the entries in the > global-temp version of pg_class, pg_attribute, etc. However, this > seems extremely difficult to manage in general - there's a lot of code > churn involved, and also possible temporary -> permanent dependencies; > for example, the temporary table might have a pg_attrdef entry that > needs to depend on a non-temporary pg_proc entry. That's tricky to > solve on the master and even trickier to solve in HS operation. So > I'm inclined to agree with you that it makes more sense to just aim to > support global temp tables in HS mode, and if we want to beat our head > against the brick wall of making regular temp tables work there > eventually, that can be a later project. 

Agreed. I hadn't thought of that dependencies problem. Interesting. 

> > Past discussions have raised the issue of interaction between commands like > > ALTER TABLE and sessions using the new-variety temporary table. As a first > > cut, let's keep this simple and have ongoing use of the table block operations > > requiring AccessExclusiveLock. Note that you can always just make a new > > temporary table with a different name to deploy a change quickly. Implement > > this with a heavyweight lock having a novel life cycle. When a session first > > takes an ordinary relation lock on the table, acquire the longer-term lock and > > schedule it for release on transaction abort. On TRUNCATE, schedule a release > > on transaction commit. Of course, also release the lock at session end. > > I'm not sure I believe this will work, but maybe I'm just not understanding it. 

Did you have a specific doubt? I did gloss over all the details, having not worked them out yet. 

> > For new-variety temporary tables, change file names from "relfilenode[_fork]" > > to "relfilenode[_fork].pid.localnode". During crash recovery, delete all files > > conforming to that pattern for relfilenodes of known temporary tables. This > > also lets backends share pg_class.relfilenode. The "localnode" field would > > distinguish multiple generations of a table across VACUUM FULL, CLUSTER, and > > TRUNCATE. We could build on this strategy to safely resolve point 6 for the > > existing kind of temporary table, but that entails enough other details to > > probably not mix it into the same patch. > > In lieu of including localnode in the filename, I would suggest that > for a global temp table, we set relfilenode = 0, and let each backend > assign one (and reassign new ones) whenever it feels like it. 

Good call. Thanks, nm
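The "relfilenode[_fork].pid.localnode" naming quoted above can be sketched as a pair of helpers: one that a backend would use to format a segment name, and one that crash recovery would use to recognize such files for deletion. This is purely illustrative C with invented function names, not PostgreSQL code, and it assumes a plain-decimal rendering of each component:

```c
#include <stdio.h>
#include <string.h>

/* Format a temp-table file name as "relfilenode[_fork].pid.localnode",
 * following the naming sketched in the proposal; by analogy with existing
 * practice, the fork suffix is assumed to be omitted for the main fork. */
void
temp_relation_filename(char *buf, size_t len, unsigned relfilenode,
                       int forknum, int pid, unsigned localnode)
{
    if (forknum == 0)
        snprintf(buf, len, "%u.%d.%u", relfilenode, pid, localnode);
    else
        snprintf(buf, len, "%u_%d.%d.%u", relfilenode, forknum, pid, localnode);
}

/* Crash-recovery side: report whether a directory entry matches the
 * pattern, extracting the relfilenode so the caller can check it against
 * the set of known temporary tables.  Returns 1 on a match, 0 otherwise;
 * plain permanent-table names like "16384" or "16384_2" do not match. */
int
parse_temp_relation_filename(const char *name, unsigned *relfilenode)
{
    int forknum, pid;
    unsigned localnode;
    char trailing;

    if (sscanf(name, "%u_%d.%d.%u%c", relfilenode, &forknum, &pid,
               &localnode, &trailing) == 4)
        return 1;
    if (sscanf(name, "%u.%d.%u%c", relfilenode, &pid, &localnode,
               &trailing) == 3)
        return 1;
    return 0;
}
```

Robert's alternative of storing relfilenode = 0 in the shared catalog row and letting each backend assign its own would shift the "localnode" generation counter into backend-local state, but the recovery-time pattern match would look much the same.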
On Wed, Apr 25, 2012 at 10:04 PM, Noah Misch <noah@leadboat.com> wrote: > Based on the range of assessments spanning your "almost useless" to Merlin's > "killer feature", I gather that its utility is exceptionally site-specific. Well said, sir. It might be worth taking a couple of steps backward, here. Regardless of whether one finds the feature as proposed "almost useless" or a "killer feature", there's no denying that what you're proposing amounts to three major development projects the net payoff of which will be the ability to do an extremely limited amount of writing on the standby server. Why should we draw the line between GTTs and LTTs? What about unlogged tables? What about, for that matter, permanent tables? What about other kinds of SQL objects, such as functions? It could be useful to create any of those things on the standby, and it's going to be extremely difficult if not outright impossible to make that work, because we're building it on top of a replication architecture that is oriented around physical replication, which means that any change that makes the standby anything other than a byte-for-byte copy of the master is going to be difficult and, in some sense, a kludge. I would put this proposal in that category as well, even though I find it a rather elegant and well-thought-out kludge. It is pretty obvious that we could get around all of these problems easily if we instead did logical replication. So why not just install Slony, Bucardo, Londiste, or, if I may presume to toot my employer's horn just slightly, xDB replication server? If you use one of those products, you can not only create temporary tables on your "standby" servers, but also unlogged tables, permanent tables, and any other sort of SQL object you might want. You can also do partial replication, replication between different major versions of PostgreSQL, and replication between PostgreSQL and some other database. 
A given node can be the master for some tables and a slave for other tables, allowing far more deployment flexibility than you can get with streaming replication; and you have neither replication conflicts nor the necessity of replicating bloat (and the exclusive lock that you must take to remove the bloat) between machines. You can even do multi-master replication, with the right product choice. Despite all that, people still love streaming replication, because it's fast, administratively simple, and very reliable. So, I can't help wonder if what we're really missing here is a high-performance, log-based logical replication solution with good core support. I'm sure that there will be opposition to that idea from a variety of people for a variety of reasons, and that is fine. But I think we need to confront the fact that as long as we stick with physical replication, any sort of write capability on slaves is going to be a lot of work and offer only fairly marginal capabilities. We can either live with those limitations, or change our approach. Either way is fine, but I think that hoping the limitations will go away without a fundamental change in the architecture is just wishful thinking. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On Wed, Apr 25, 2012 at 10:16 PM, Noah Misch <noah@leadboat.com> wrote: >> > Past discussions have raised the issue of interaction between commands like >> > ALTER TABLE and sessions using the new-variety temporary table. As a first >> > cut, let's keep this simple and have ongoing use of the table block operations >> > requiring AccessExclusiveLock. Note that you can always just make a new >> > temporary table with a different name to deploy a change quickly. Implement >> > this with a heavyweight lock having a novel life cycle. When a session first >> > takes an ordinary relation lock on the table, acquire the longer-term lock and >> > schedule it for release on transaction abort. On TRUNCATE, schedule a release >> > on transaction commit. Of course, also release the lock at session end. >> >> I'm not sure I believe this will work, but maybe I'm just not understanding it. > > Did you have a specific doubt? I did gloss over all the details, having not > worked them out yet. Not really. I think the basic idea of keeping the lock for the lifetime of the session is probably sound, modulo those details. The only problem I see is that it would prevent user A from clustering the table while user B is selecting from the table, which is not a priori necessary. It might be useful to work out a solution to that problem somehow, maybe just by jiggering the required lock levels for certain operations - perhaps CLUSTER and VACUUM could run with just RowExclusiveLock when run against a GTT, or something like that. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
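The lock life cycle under discussion (taken at first relation lock, surviving plain commits, released on abort, on a commit that follows TRUNCATE, or at session end) can be modeled as a tiny state machine. This is a toy sketch with invented names, not a design for the real lock manager, and it assumes the simplest reading of "schedule for release on transaction abort", namely that any abort drops the long-term lock:

```c
#include <stdbool.h>

/* Toy model of the proposed session-lifetime lock on a global temp table. */
typedef struct SessionTableLock
{
    bool held;              /* long-term lock currently held? */
    bool release_on_commit; /* TRUNCATE scheduled a release at commit */
} SessionTableLock;

/* First ordinary relation lock in a session also takes the long-term lock. */
void
sessionlock_first_access(SessionTableLock *l)
{
    l->held = true;
}

/* TRUNCATE reschedules the long-term lock for release at commit. */
void
sessionlock_truncate(SessionTableLock *l)
{
    l->release_on_commit = true;
}

/* At transaction end: abort always releases; commit releases only when
 * TRUNCATE scheduled it.  Otherwise the lock persists across transactions. */
void
sessionlock_xact_end(SessionTableLock *l, bool commit)
{
    if (l->held && (!commit || l->release_on_commit))
        l->held = false;
    l->release_on_commit = false;
}

/* Session exit unconditionally releases. */
void
sessionlock_session_end(SessionTableLock *l)
{
    l->held = false;
    l->release_on_commit = false;
}
```

The CLUSTER/VACUUM concern above would then amount to choosing which operations this long-held lock actually conflicts with, rather than changing the life cycle itself.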
On 25.04.2012 18:49, Robert Haas wrote: > Maybe this is a silly idea, but if you're thinking about creating a > local XID space and a global XID space, it might be a good idea to > also make allowance for an "unlogged" XID space - that is, an XID > space that is global to all backends but need not survive crashes. > This would potentially allow unlogged tables to be used in HS mode. I > would expect that you probably don't want to go as far as actually > trying to make this work as part of your current project, but maybe as > you're structuring the code it would be worth considering the > possibility that we'll eventually want >2 XID spaces, either for this > or other reasons. It would also be nice to have a separate XID space for each database in the cluster (plus one for global tables). Then one database consuming a lot of XIDs would not force anti-xid wraparound in other databases, and it would take us one step closer to being able to (physically) backup/restore/replicate a single database in a cluster. -- Heikki Linnakangas EnterpriseDB http://www.enterprisedb.com
On Wed, Apr 25, 2012 at 7:34 PM, Robert Haas <robertmhaas@gmail.com> wrote: > On Wed, Apr 25, 2012 at 1:31 PM, Simon Riggs <simon@2ndquadrant.com> wrote: >> However, it is a fallacy that this is a good solution for using temp >> tables on HS. I think the wish to enhance Oracle compatibility is >> making some wishful thinking happen with regard to how useful this is >> going to be. We need to spend just as much time considering the >> utility of our work as we do spending time on the quality of the >> implementation, otherwise it's just well-implemented shelfware. > > Well, like I say, if you want to use locally-defined temp tables on > HS, you have to somehow solve the problem of catalog entries, and > nothing in your email looks like a proposal for how to do that. I've > come up with one design, which I sketched in my original response, but > it relies on creating some new system catalogs that are themselves > GTTs, and it's also hideously complicated. If you or anyone else can > come up with a better design, great, but so far no one has. Previous discussions had Tom proposing ways of extending catalogs to allow exactly this. So designs that address that point are already on record. A full GTT implementation is not required and the design differed from that. I don't think "hideously complicated" is accurate, that's just your way of saying "and I disagree". Either route is pretty complex and not much to choose between them, apart from the usefulness of the end product - GTTs are not that beneficial as a feature in themselves. The current problems of our temp table approach are

1. Catalog bloat
2. Consumption of permanent XIDs for DML on temp tables. This increases COMMIT cost on the master and is a non-starter under hot standby.
3. autovacuum cannot VACUUM temporary tables, but their relfrozenxid values delay pg_clog truncation and can trigger a wraparound-prevention shutdown.
4. sinval traffic from every CREATE TEMP TABLE et al.
5. CREATE TEMP TABLE records AccessExclusiveLock in the global lock table.
6. We don't automatically drop temporary tables that existed at the point of a crash, because they look much like permanent tables.
2. Cross-backend access/security
7. Temp tables don't work on HS
8. No Global Temp tables

Implementing GTTs solves (8) and provides some useful tools to solve other points. Note that GTTs do not themselves solve 1-7 in full, hence my point that GTTs are an endpoint not a way station. The way forwards is not to concentrate on GTTs but to provide a set of facilities that allow all the more basic points 1-6 to be addressed in full, and then solve (7) and (8). If we pretend (8) solves (7) as well, we will make mistakes in implementation that will waste time and deliver reduced value. In passing I note that GTTs are required to allow PostgresXC to support temp tables, since they need a mechanism to make a single temp table definition work on multiple nodes with different data in each. Simply put, I don't think we should be emphasising things that are needed for PostgresXC and EDB AS, but not that important for PostgreSQL users. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
On Thu, Apr 26, 2012 at 5:57 AM, Robert Haas <robertmhaas@gmail.com> wrote: > On Wed, Apr 25, 2012 at 10:04 PM, Noah Misch <noah@leadboat.com> wrote: >> Based on the range of assessments spanning your "almost useless" to Merlin's >> "killer feature", I gather that its utility is exceptionally site-specific. > > Well said, sir. Not well said. This is spurious. This is not a balanced point. 100% of all temp table use cases currently define them dynamically. If we implement a feature that *requires* them to all suddenly be defined statically, then we're requiring those applications to either

1a) rewrite *all* their existing custom code, or
1b) not work with external applications that don't use GTTs (who the hell does?), or
2) have problems if any of their code doesn't allow static definition.

Implementing a feature that *requires* those things is madness and obscuring those crucial points is not balanced or fair. Merlin is right to say that having GTTs would be a step up from what we have now, but then so would a different and actually useful implementation. So GTTs alone are not a killer feature. The killer feature comes from solving the problems of running temp tables on HS. I accept GTTs are a feature, but certainly not a killer one. If we are interested in well-principled software then we must address *both* the utility and the quality of our implementation. We cannot just shrug our way through the boring bit of feature design and then agonise over a well-principled implementation. We must recognise that our wish to get on with the interesting bit gets in the way of spending sensible time balancing what we do to get the most out of our work. Noah will enjoy writing this; I just want to make sure he is enjoying something that will be useful, not just a tick box on the release notes that will cause further frustration when users find out the details. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
> Robert Haas wrote: > Noah Misch wrote: >> Based on the range of assessments spanning your "almost useless" >> to Merlin's "killer feature", I gather that its utility is >> exceptionally site-specific. > > Well said, sir. +1 > I find it a rather elegant and well-thought-out kludge. Global temporary tables as a feature are far more than a kludge; I assume you're talking about that feature as a solution for the "no temporary tables on a hot standby" problem? Even there I would consider GTT as more than a kludge. They have been part of the SQL standard since at least the SQL-92 version, and have some very clean, clear uses. > It is pretty obvious that we could get around all of these problems > easily if we instead did logical replication. > So, I can't help wonder if what we're really missing here is a > high-performance, log-based logical replication solution with good > core support. The GSoC xReader project is intended to be a major step toward that, by providing a way to translate the WAL stream to a series of notifications of logical events to clients which register with xReader. -Kevin
On Thu, Apr 26, 2012 at 3:56 AM, Simon Riggs <simon@2ndquadrant.com> wrote: > Previous discussions had Tom proposing ways of extending catalogs to > allow exactly this. So designs that address that point are already on > record. Link? -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On Thu, Apr 26, 2012 at 12:49 PM, Kevin Grittner <Kevin.Grittner@wicourts.gov> wrote: >> So, I can't help wonder if what we're really missing here is a >> high-performance, log-based logical replication solution with good >> core support. On that, we do agree completely. That is exactly my goal for 9.3. > The GSoC xReader project is intended to be a major step toward that, > by providing a way to translate the WAL stream to a series of > notifications of logical events to clients which register with > xReader. This is already nearly finished in prototype and will be published in May. Andres Freund is working on it, copied here. It looks like there is significant overlap there. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
On Thu, Apr 26, 2012 at 12:49 PM, Kevin Grittner <Kevin.Grittner@wicourts.gov> wrote: >> I find it a rather elegant and well-thought-out kludge. > > Global temporary tables as a feature are far more than a kludge; I > assume you're talking about that feature as a solution for the "no > temporary tables on a hot standby" problem? Even there I would > consider GTT as more than a kludge. They have been part of the SQL > standard since at least the SQL-93 version, and have some very clean, > clear uses. Just to be clear, I haven't described GTTs as a kludge. Again I say: if you want GTTs, build them. But don't build them because they solve other problems as well, because they don't, and the topic title of this thread is regrettably misnamed when we conflate GTTs with $TOPIC. If Noah wishes to solve $TOPIC rather than build GTTs then a different route is required. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
On Thu, Apr 26, 2012 at 9:18 AM, Simon Riggs <simon@2ndquadrant.com> wrote: > Implementing a feature that *requires* those things is madness and > obscuring those crucial points is not balanced or fair. I think this whole discussion started the wrong way around. If the goal of implementing GTTs is to solve a need with replication then yes, it's kind of weird. But GTTs solve lots of problems with our existing implementation of temporary tables. Our existing temporary tables are really normal tables that just get cleaned up automatically but incur almost all the overhead of a real table including needless heavyweight DDL in your OLTP application. It's a bad design and providing GTTs would be providing a nice feature that allows people to implement much better systems. As a side benefit they would be easy to support on a standby as well which would be a pretty nice feature. I do think storing local xids is a bit scary. It might be a dead-end if we ever want to support having persistent non-local objects in the standby database. We'll need some way to generate a separate xid space (along with other feature we don't need for GTTs of course) in that case and then the ability to store local xids and check snapshots against them would be kind of useless once we have that. -- greg
On Thu, Apr 26, 2012 at 9:57 AM, Greg Stark <stark@mit.edu> wrote: > On Thu, Apr 26, 2012 at 9:18 AM, Simon Riggs <simon@2ndquadrant.com> wrote: >> Implementing a feature that *requires* those things is madness and >> obscuring those crucial points is not balanced or fair. > > I think this whole discussion started the wrong way around. If the > goal of implementing GTTs is to solve a need with replication then > yes, it's kind of weird. But GTTs solve lots of problems with our > existing implementation of temporary tables. Our existing temporary > tables are really normal tables that just get cleaned up automatically > but incur almost all the overhead of a real table including needless > heavyweight DDL in your OLTP application. It's a bad design and > providing GTTs would be providing a nice feature that allows people to > implement much better systems. As a side benefit they would be easy to > support on a standby as well which would be a pretty nice feature. +1 to all that. > I do think storing local xids is a bit scary. It might be a dead-end > if we ever want to support having persistent non-local objects in the > standby database. We'll need some way to generate a separate xid space > (along with other feature we don't need for GTTs of course) in that > case and then the ability to store local xids and check snapshots > against them would be kind of useless once we have that. Yeah, I think the separate XID space thing is pretty complicated. On the other hand, there are lots of reasons to think it would be cool if Noah can pull it off, and I think his chances are better than most. Aside from the fact that it would improve temp tables, there's the possibility of doing an XID space per database, as Heikki mentioned, and the possibility of having an unlogged XID space also, as I mentioned. 
Whether any of these things are feasible and which ones actually make sense is not totally clear, but I think it's completely worthwhile for someone to do the legwork to figure it out. I also think that the idea of having persistent non-local objects on a Hot Standby database is likely a complete dead-end. By the time you've managed to engineer that, it's not going to look much like Hot Standby as we know it today any more. Perhaps we will get there eventually, but I think logical replication is a more likely bet. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes: > On Thu, Apr 26, 2012 at 3:56 AM, Simon Riggs <simon@2ndquadrant.com> wrote: >> Previous discussions had Tom proposing ways of extending catalogs to >> allow exactly this. So designs that address that point are already on >> record. > Link? There was a thread a couple years ago ... http://archives.postgresql.org/pgsql-hackers/2010-10/msg01203.php particularly here: http://archives.postgresql.org/pgsql-hackers/2010-10/msg01448.php But I have a feeling that that idea is much older. regards, tom lane
On Thu, Apr 26, 2012 at 10:12 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote: > Robert Haas <robertmhaas@gmail.com> writes: >> On Thu, Apr 26, 2012 at 3:56 AM, Simon Riggs <simon@2ndquadrant.com> wrote: >>> Previous discussions had Tom proposing ways of extending catalogs to >>> allow exactly this. So designs that address that point are already on >>> record. > >> Link? > > There was a thread a couple years ago ... > http://archives.postgresql.org/pgsql-hackers/2010-10/msg01203.php > particularly here: > http://archives.postgresql.org/pgsql-hackers/2010-10/msg01448.php > > But I have a feeling that that idea is much older. Yeah, so, basically this is what I already proposed upthread: put the catalog entries for your temporary tables in system catalogs that are themselves temporary tables. One difference is that you suggested this could be done without GTTs by using LTTs that insert bootstrapping entries describing themselves when they are initialized - so you never need to modify the "real" pg_class, etc. I hadn't thought of that, and it might be feasible. I have my doubts about whether it's the path of least resistance because, as you point out (and as Noah pointed out, and as I pointed out), any kind of change that involves moving some system catalog entries into a separate set of temporary catalogs is likely to be extremely difficult for many reasons. Noah's design is somewhat less powerful, but it avoids getting sucked down that rathole. In addition to the already-mentioned issue of temp->perm dependencies, any such design would break the assumption that every table is identified by a unique OID, which seems likely to break a whole lot of things. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On Tue, Apr 24, 2012 at 11:55:15PM -0400, Noah Misch wrote: > Concerning everyone's favorite topic, how to name the new type of table, I > liked Tom's proposal[1] to make CREATE TEMP TABLE retain current behavior and > have CREATE GLOBAL TEMP TABLE and/or CREATE LOCAL TEMP TABLE request the new > SQL-standard variety. (I'd vote for using CREATE GLOBAL and retaining CREATE > LOCAL for future expansion.) As he mentions, to get there, we'd ideally start > by producing a warning instead of silently accepting GLOBAL as a noise word. > Should we put such a warning into 9.2? Here is the change I'd make. Thanks, nm > [1] http://archives.postgresql.org/message-id/5422.1240936705@sss.pgh.pa.us
Attachment
On 4/25/12 6:16 AM, Simon Riggs wrote: >> The way standard-like temporary tables work is exactly why I assume >> > Noah proposes to implement them: because they work nicely with HS. > Well, following a standard that no other major DBMS has followed is > not great, especially if it leads to a non-useful feature. Actually, Oracle's temp tables are standard-compliant. You must explicitly define them via DDL and they remain in the catalog. -- Jim C. Nasby, Database Architect jim@nasby.net 512.569.9461 (cell) http://jim.nasby.net
On 4/25/12 6:15 PM, Jaime Casanova wrote: > On Wed, Apr 25, 2012 at 5:46 PM, Merlin Moncure<mmoncure@gmail.com> wrote: >> > >> > I don't know how GTT play inside the Oracle stack such that they >> > aren't super popular, but if they work in the standby they will >> > quickly become a killer feature. IMNSHO it's annoying but acceptable >> > to be forced to define them into the permanent schema. Lack of temp >> > tables on the standby is a popular question/complaint on irc and in >> > most cases the proposal would satisfactorily address the problem. >> > > The problem with using GTT for this is, IMHO, that you need to know > what your temp table will look like beforehand. > > I have seen applications that use the same name (ie: temp1, t1, tt or > t_temp) for all or almost all temp tables and, of course, all those > have different structures. Not to pick on Jaime, but this is the 2nd or 3rd comment I've seen about applications using temp tables. How many of these applications are actually trying to function on a read-only slave? ISTM that an *application* would likely not need to modify its use of temp tables (unless we actually rip out support for how they currently work, but I don't think that's on the table). Now, reporting may well be a different story... we can't use streaming replication at work because of the lack of temp tables, but GTTs would not really solve our problem either. That said, in the places in our application where we use temp tables it would actually be easier for us to use GTT's rather than LTT's. -- Jim C. Nasby, Database Architect jim@nasby.net 512.569.9461 (cell) http://jim.nasby.net
On Thu, Apr 26, 2012 at 08:56:40AM +0100, Simon Riggs wrote: > A full GTT implementation is not required and the design differed from > that. I don't think "hideously complicated" is accurate, that's just > your way of saying "and I disagree". Either route is pretty complex > and not much to choose between them, apart from the usefulness of the > end product - GTTs are not that beneficial as a feature in themselves. >
> The current problems of our temp table approach are
> 1. Catalog bloat
> 2. Consumption of permanent XIDs for DML on temp tables. This increases COMMIT cost on the master and is a non-starter under hot standby.
> 3. autovacuum cannot VACUUM temporary tables, but their relfrozenxid values delay pg_clog truncation and can trigger a wraparound-prevention shutdown.
> 4. sinval traffic from every CREATE TEMP TABLE et al.
> 5. CREATE TEMP TABLE records AccessExclusiveLock in the global lock table.
> 6. We don't automatically drop temporary tables that existed at the point of a crash, because they look much like permanent tables.
> 2. Cross-backend access/security
> 7. Temp tables don't work on HS
> 8. No Global Temp tables
> > Implementing GTTs solves (8) and provides some useful tools to solve > other points. Note that GTTs do not themselves solve 1-7 in full, > hence my point that GTTs are an endpoint not a way station. The way > forwards is not to concentrate on GTTs but to provide a set of > facilities that allow all the more basic points 1-6 to be addressed, > in full and then solve (7) and (8). If we pretend (8) solves (7) as > well, we will make mistakes in implementation that will waste time and > deliver reduced value. > > In passing I note that GTTs are required to allow PostgresXC to > support temp tables, since they need a mechanism to make a single > temp table definition work on multiple nodes with different data in > each. 
> > Simply put, I don't think we should be emphasising things that are > needed for PostgresXC and EDB AS, but not that important for > PostgreSQL users. I think if we implement global temporary tables only for hot standby use (#7), they might be of limited usefulness, but the ability to avoid system table churn (#1) means global temporary tables would have a wide usefulness, even without hot standby use. The idea of sharing optimizer statistics also has a lot of merit. FYI, global temp tables have been on the TODO list long before EDB employment or PostgresXC. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
What is the use case for temporary tables on a hot standby server? Perhaps this is a noobie question, but it seems to me that a hot standby server's use by *applications* or *users* should be limited to transactions that don't alter the database in any form. However, I can see where temporary tables might be needed at the system level (if not already available) in order to prepare more efficient plans for some complex read-only queries. -- Mike Nolan
On Wed, May 2, 2012 at 11:39 AM, Michael Nolan <htfoot@gmail.com> wrote: > What is the use case for temporary tables on a hot standby server? Simple... We required a "hot standby" server in order to get improved reliability. But we don't want it to sit there chewing power + money, unused. We want to *use* it to support our reporting applications. And the developers used temporary tables to marshal results used in some of those reports. There are conflicting senses of "read-only" here... - In one strict sense, to generate tuples in temp tables means it's not "read only" access. - But since the users running reports aren't allowed to modify the data in the application tables that they are querying, how is that *not* fairly characterized as "read only" access??? -- When confronted by a difficult problem, solve it by reducing it to the question, "How would the Lone Ranger handle this?"
On Wed, May 2, 2012 at 4:14 PM, Bruce Momjian <bruce@momjian.us> wrote: > I think if implementing global temporary tables only for hot standby > user (#7), it might be of limited usefulness, but the ability to avoid > system table churn (#1) means global temporary tables would have a wide > usefulness, even without hot standby use. Yes, many parts of Noah's proposal would be useful for normal running. And as Jim points out, they are SQL Standard, as well as supported by Oracle and DB2, so again GTTs would hit that TODO item. And Noah's proposal does take us more than 50%, maybe 80% of the way to what I think would be most useful for HS. So overall, I do encourage the proposal. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
Michael, > What is the use case for temporary tables on a hot standby server? > > Perhaps this is a noobie question, but it seems to me that a hot standby > server's use by *applications* or *users* should be limited to transactions > that don't alter the database in any form. A very common use for asynchronous replicas is to offload long-running reporting jobs onto the replica so that they don't bog down the master. However, long-running reporting jobs often require temporary tables, especially if they use some 3rd-party vendor's reporting tool. For example, the average Microstrategy report involves between 1 and 12 temporary tables. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com
On Thu, May 3, 2012 at 1:57 AM, Josh Berkus <josh@agliodbs.com> wrote: > Michael, > >> What is the use case for temporary tables on a hot standby server? >> >> Perhaps this is a noobie question, but it seems to me that a hot standby >> server's use by *applications* or *users* should be limited to transactions >> that don't alter the database in any form. > A very common use for asynchronous replicas is to offload long-running > reporting jobs onto the replica so that they don't bog down the master. > However, long-running reporting jobs often require temporary tables, > especially if they use some 3rd-party vendor's reporting tool. For > example, the average Microstrategy report involves between 1 and 12 > temporary tables. Many tools and applications choose to use temporary tables. Often this isn't necessary at all; for example, in MicroStrategy it is possible to ask it to use derived tables instead and thus avoid using temp tables, so that can still work against Hot Standby. Derived tables mean rewriting the query from

  CREATE TEMP TABLE s1 AS <SELECT1>;
  SELECT ... FROM s1 WHERE ...

into

  SELECT ... FROM (<SELECT1>) AS s1 WHERE ...

Many apps are easily rewritten in this way and so the lack of temp tables isn't a total blocker in the way some people think. If we had Global Temp Tables, users would still need to rewrite their code, just in a different way, like this...

  (on master)
  CREATE GLOBAL TEMP TABLE s1 (....);

  (on standby)
  INSERT INTO s1 <SELECT1>;
  SELECT ... FROM s1 WHERE ...

which seems to me to be actually harder than just rewriting as a derived table and isn't an option on Microstrategy etc, hence my observation that GTTs don't help HS much. What I would like to see, one day, is for temp tables to work without any changes. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
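[Editor's note: to make the rewrite Simon describes concrete, here is a sketch of the three forms; the sales/region/total names are invented for illustration, and CREATE GLOBAL TEMP TABLE is the proposed syntax, not something current PostgreSQL accepts.]

```sql
-- Today (master only): materialize a subquery in a session-local temp table.
CREATE TEMP TABLE s1 AS
  SELECT region, sum(amount) AS total FROM sales GROUP BY region;
SELECT * FROM s1 WHERE total > 1000;

-- Rewritten with a derived table: no DDL, so it runs on a hot standby today.
SELECT * FROM (
  SELECT region, sum(amount) AS total FROM sales GROUP BY region
) AS s1 WHERE total > 1000;

-- Proposed GTT form: declared once, on the master...
CREATE GLOBAL TEMP TABLE s1 (region text, total numeric);
-- ...after which each standby session fills and queries its own
-- initially-empty private copy.
INSERT INTO s1 SELECT region, sum(amount) FROM sales GROUP BY region;
SELECT * FROM s1 WHERE total > 1000;
```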
On Thu, May 3, 2012 at 4:11 AM, Simon Riggs <simon@2ndquadrant.com> wrote: > which seems to me to be actually harder than just rewriting as derived > table and isn't an option on Microstrategy etc, hence my observation > that GTTs don't help HS much. What I would like to see, one day, is > for temp tables to work without any changes. yes. except (global) temp tables can:

*) be updated with data not derived from permanent tables (think: session management, etc)
*) outlive a single transaction
*) be indexed independently of the sourcing tables
*) be referred to from multiple queries (you can simulate this with CTE, but that approach obviously has limits)

of course, temp tables that would 'just work' would be wonderful. merlin
> (on standby) > INSERT INTO s1 <SELECT1>; > SELECT ... FROM s1 WHERE ... > which seems to me to be actually harder than just rewriting as derived > table and isn't an option on Microstrategy etc, hence my observation > that GTTs don't help HS much. What I would like to see, one day, is > for temp tables to work without any changes. 100% agreement. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com
On Tue, Apr 24, 2012 at 10:55 PM, Noah Misch <noah@leadboat.com> wrote: > A key barrier to migrations from trigger-based replication to WAL-based > replication is the lack of temporary tables under hot standby. I'd like to > close that gap; the changes needed will also reduce the master-side cost of > temporary table usage. Here is a high-level design for your advice and > comments. Much of this builds on ideas from past postings noted below. On Mon, May 7, 2012 at 8:52 AM, Michael Nolan <htfoot@gmail.com> wrote: > To cross-pollinate with another thread, if temporary tables (and > insert/delete/update transactions to them) are to be supported on a slave, > will the applications using those temporary tables expect to be able to use > 'nextval' on inserts to temporary tables as well? That's a very good question. I'm moving it to -hackers... how do non-table temporary objects work? Do you have CREATE GLOBAL TEMPORARY SEQUENCE? etc. My understanding of the current proposal is that the sequence (along with the rest of the table) has to be defined in the master as a global temporary table. It seems that it wouldn't be possible or desirable to serialize sequence fetches between the master and standby servers, but I'm not sure about that. merlin
On Mon, May 07, 2012 at 09:04:28AM -0500, Merlin Moncure wrote: > On Mon, May 7, 2012 at 8:52 AM, Michael Nolan <htfoot@gmail.com> wrote: > > To cross-pollinate with another thread, if temporary tables (and > > insert/delete/update transactions to them) are to be supported on a slave, > > will the applications using those temporary tables expect to be able to use > > 'nextval' on inserts to temporary tables as well? > > That's a very good question. I'm moving it -hackers...how do non > table temporary objects work? Do you have CREATE GLOBAL TEMPORARY > SEQUENCE? etc. Probably so, yes. > My understanding of the current proposal is that the sequence (along > with the rest of the table) has to be defined in the master as a > global temporary table. Yes. > It seems that it wouldn't be possible or > desirable to serialize sequence fetches between the master and standby > servers, but I'm not sure about that. Like global temporary tables, each session has an independent copy of each global temporary sequence. "nextval" would have no interaction with other local backends, let alone remote ones.
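[Editor's note: Noah's description suggests semantics along these lines; the syntax and the sequence name are hypothetical, since none of this is implemented.]

```sql
-- Declared once on the master, like a global temp table:
CREATE GLOBAL TEMPORARY SEQUENCE report_seq;

-- On a standby, each session then advances its own private copy:
--   session A: SELECT nextval('report_seq');  -- advances A's counter only
--   session B: SELECT nextval('report_seq');  -- unaffected by session A
```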
On Sun, Apr 29, 2012 at 4:02 PM, Noah Misch <noah@leadboat.com> wrote: > On Tue, Apr 24, 2012 at 11:55:15PM -0400, Noah Misch wrote: >> Concerning everyone's favorite topic, how to name the new type of table, I >> liked Tom's proposal[1] to make CREATE TEMP TABLE retain current behavior and >> have CREATE GLOBAL TEMP TABLE and/or CREATE LOCAL TEMP TABLE request the new >> SQL-standard variety. (I'd vote for using CREATE GLOBAL and retaining CREATE >> LOCAL for future expansion.) As he mentions, to get there, we'd ideally start >> by producing a warning instead of silently accepting GLOBAL as a noise word. >> Should we put such a warning into 9.2? > > Here is the change I'd make. This is listed on the open items list. I haven't ever heard anyone propose to redefine CREATE LOCAL TEMP TABLE to mean anything different than CREATE TEMP TABLE, so I'm disinclined to warn about that. I would be more open to warning people about CREATE GLOBAL TEMP TABLE - frankly, it's pretty wonky that we allow that but treat GLOBAL as a noise word in the first place. But I'm a little disinclined to have the message speculate about what might happen in future versions of PostgreSQL. Such predictions don't have a very good track record of being accurate. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On 8 June 2012 18:26, Robert Haas <robertmhaas@gmail.com> wrote: > I would be more open to warning people about CREATE GLOBAL TEMP TABLE > - frankly, it's pretty wonky that we allow that but treat GLOBAL as a > noise word in the first place. But I'm a little disinclined to have > the message speculate about what might happen in future versions of > PostgreSQL. Such predictions don't have a very good track record of > being accurate. Agreed. We should make use of GLOBAL throw an ERROR: feature not yet implemented, in preparation for what might one day happen. We don't know the future but we do know the present. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
Simon Riggs <simon@2ndQuadrant.com> wrote: > On 8 June 2012 18:26, Robert Haas <robertmhaas@gmail.com> wrote: > >> I would be more open to warning people about CREATE GLOBAL TEMP >> TABLE - frankly, it's pretty wonky that we allow that but treat >> GLOBAL as a noise word in the first place. But I'm a little >> disinclined to have the message speculate about what might happen >> in future versions of PostgreSQL. Such predictions don't have a >> very good track record of being accurate. > > Agreed. > > We should make use of GLOBAL throw an ERROR: feature not yet > implemented, in preparation for what might one day happen. We > don't know the future but we do know the present. +1 It has always bothered me that we support GLOBAL there without coming anywhere near matching the semantics of GTTs. -Kevin
On Fri, Jun 08, 2012 at 01:26:20PM -0400, Robert Haas wrote: > On Sun, Apr 29, 2012 at 4:02 PM, Noah Misch <noah@leadboat.com> wrote: > > On Tue, Apr 24, 2012 at 11:55:15PM -0400, Noah Misch wrote: > >> Concerning everyone's favorite topic, how to name the new type of table, I > >> liked Tom's proposal[1] to make CREATE TEMP TABLE retain current behavior and > >> have CREATE GLOBAL TEMP TABLE and/or CREATE LOCAL TEMP TABLE request the new > >> SQL-standard variety. (I'd vote for using CREATE GLOBAL and retaining CREATE > >> LOCAL for future expansion.) As he mentions, to get there, we'd ideally start > >> by producing a warning instead of silently accepting GLOBAL as a noise word. > >> Should we put such a warning into 9.2? > > > > Here is the change I'd make. > > This is listed on the open items list. > > I haven't ever heard anyone propose to redefine CREATE LOCAL TEMP > TABLE to mean anything different than CREATE TEMP TABLE, so I'm > disinclined to warn about that. From a documentation perspective, it will be awkward to explain (or decline to explain) that both GLOBAL TEMPORARY and LOCAL TEMPORARY are standard syntaxes with non-standard behavior, only one of which emits a warning. That unduly telegraphs a prediction about which one will change first/ever. Maybe that's nonetheless the right pragmatic answer. > I would be more open to warning people about CREATE GLOBAL TEMP TABLE > - frankly, it's pretty wonky that we allow that but treat GLOBAL as a > noise word in the first place. But I'm a little disinclined to have > the message speculate about what might happen in future versions of > PostgreSQL. Such predictions don't have a very good track record of > being accurate. I feel the predictions in question ("This may specify different semantics in future versions of PostgreSQL." and "This usage is deprecated and may specify standard-compliant behavior in the future.") were broad enough to mitigate this concern. 
If we ever do change the interpretation of this syntax, to what could it be other than the standard behavior? We're not likely to introduce a different but still-nonstandard behavior for this standard syntax. I wrote the verbiage that way for the benefit of users encountering the new warning. They might reasonably ask, "Why did the PostgreSQL developers create this work for me?" No objection to removing the errhint, but I think the documentation wording should stay. Concerning whether to make this a WARNING or an ERROR, does anyone still object to WARNING? Thanks, nm
Noah Misch <noah@leadboat.com> writes: > On Fri, Jun 08, 2012 at 01:26:20PM -0400, Robert Haas wrote: >> I haven't ever heard anyone propose to redefine CREATE LOCAL TEMP >> TABLE to mean anything different than CREATE TEMP TABLE, so I'm >> disinclined to warn about that. > From a documentation perspective, it will be awkward to explain (or decline to > explain) that both GLOBAL TEMPORARY and LOCAL TEMPORARY are standard syntaxes > with non-standard behavior, only one of which emits a warning. Yeah. If we're going to touch this at all, I think we should warn about both, because they are both being interpreted in a non-standards-compliant fashion. It's possible that different message texts would be appropriate, though. If we create the infrastructure necessary to make GLOBAL TEMP standards-compliant, it would not be totally unreasonable (IMO) to make LOCAL TEMP act like GLOBAL TEMP. It would still be non-compliant, but closer than it is today. Moreover, if you argue that the whole session is one SQL module, it could actually be seen as compliant, in a subsetty kind of way. (Or so I think; but I've not read the relevant parts of the spec very recently either.) regards, tom lane