Thread: Re: [PERFORM] Big IN() clauses etc : feature proposal
> Something else worth considering is not using the normal catalog methods
> for storing information about temp tables, but hacking that together
> would probably be a rather large task.

But the timings suggest that it cannot be the catalogs in the worst case
he showed.

> 0.101 ms BEGIN
> 1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT NULL,
> c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP

1.4 seconds is not great for create table, is that what we expect ?

> 0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC
> LIMIT 20
> 0.443 ms ANALYZE tmp
> 0.365 ms SELECT * FROM tmp
> 0.310 ms DROP TABLE tmp
> 32.918 ms COMMIT
>
> CREATING the table is OK, but what happens on COMMIT ? I hear the disk
> seeking frantically.

The 32 seconds for commit can hardly be catalog related. It seems the file
is fsynced before it is dropped.

Andreas
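For reference, this kind of per-statement timing can be reproduced from psql with the \timing toggle. A minimal sketch, assuming a "bookmarks" table with an "annonce_id" column and matching column types as in the post above:

\timing
BEGIN;
CREATE TEMPORARY TABLE tmp (
    a INTEGER NOT NULL,
    b INTEGER NOT NULL,
    c TIMESTAMP NOT NULL,
    d INTEGER NOT NULL
) ON COMMIT DROP;
INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;
ANALYZE tmp;
SELECT * FROM tmp;
DROP TABLE tmp;
COMMIT;   -- the step reported above at ~33 ms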
On Thu, May 11, 2006 at 09:55:15AM +0200, Zeugswetter Andreas DCP SD wrote:
> > 0.101 ms BEGIN
> > 1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT NULL,
> > c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP
>
> 1.4 seconds is not great for create table, is that what we expect ?

Hmm, I'm hoping ms means milliseconds...

--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to
> litigate.
On Thu, May 11, 2006 at 09:55:15AM +0200, Zeugswetter Andreas DCP SD wrote:
> > Something else worth considering is not using the normal catalog methods
> > for storing information about temp tables, but hacking that together
> > would probably be a rather large task.
>
> But the timings suggest that it cannot be the catalogs in the worst case
> he showed.
>
> > 0.101 ms BEGIN
> > 1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT NULL,
> > c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP
>
> 1.4 seconds is not great for create table, is that what we expect ?

milliseconds... :) Given the amount of code and locking that it looks like
is involved in creating a table, that might not be unreasonable...

> > 0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC
> > LIMIT 20
> > 0.443 ms ANALYZE tmp
> > 0.365 ms SELECT * FROM tmp
> > 0.310 ms DROP TABLE tmp
> > 32.918 ms COMMIT
> >
> > CREATING the table is OK, but what happens on COMMIT ? I hear the disk
> > seeking frantically.
>
> The 32 seconds for commit can hardly be catalog related. It seems the file
> is fsynced before it is dropped.

I'd hope that wasn't what's happening... is the backend smart enough to
know not to fsync anything involved with the temp table? ISTM that that
transaction shouldn't actually be creating any WAL traffic at all.
Though on the other hand there's no reason that DROP should be in the
transaction at all; maybe that's gumming things up during the commit.

--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
>> > 0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC
>> > LIMIT 20
>> > 0.443 ms ANALYZE tmp
>> > 0.365 ms SELECT * FROM tmp
>> > 0.310 ms DROP TABLE tmp
>> > 32.918 ms COMMIT

>> The 32 seconds for commit can hardly be catalog related. It seems the file
>> is fsynced before it is dropped.

> I'd hope that wasn't what's happening... is the backend smart enough to
> know not to fsync anything involved with the temp table? ISTM that that
> transaction shouldn't actually be creating any WAL traffic at all.
> Though on the other hand there's no reason that DROP should be in the
> transaction at all; maybe that's gumming things up during the commit.

I included the DROP to make it clear that the time was spent in COMMITting,
not in DROPping the table.

Also, you can't use CREATE TEMP TABLE AS SELECT ... and at the same time
make it ON COMMIT DROP. You have to CREATE and then INSERT.

With an ON COMMIT DROP temp table, the global timings are the same whether
or not it is dropped before commit: it is always the COMMIT which takes all
the milliseconds.

I still bet on system catalog updates being the main cause of the time spent
in COMMIT... (because ANALYZE changes this time)
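For contrast, a sketch of the one-step form being ruled out here; whether a
given server version accepts ON COMMIT DROP on this form is exactly the point
in question, and the bookmarks table is assumed as above:

-- Works on its own, but ON COMMIT DROP could not be attached to this form,
-- which is why the timings above use a separate CREATE followed by INSERT.
CREATE TEMPORARY TABLE tmp AS
    SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;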
"Jim C. Nasby" <jnasby@pervasive.com> writes: > I'd hope that wasn't what's happening... is the backend smart enough to > know not to fsync anything involved with the temp table? The catalog entries required for it have to be fsync'd, unless you enjoy putting your entire database at risk (a bad block in pg_class, say, would probably take out more than one table). It's interesting to speculate about keeping such catalog entries in child tables of pg_class etc that are themselves temp tables. Resolving the apparent circularity of this is left as an exercise for the reader. regards, tom lane
On Thu, May 11, 2006 at 06:08:36PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <jnasby@pervasive.com> writes:
> > I'd hope that wasn't what's happening... is the backend smart enough to
> > know not to fsync anything involved with the temp table?
>
> The catalog entries required for it have to be fsync'd, unless you enjoy
> putting your entire database at risk (a bad block in pg_class, say,
> would probably take out more than one table).

Yeah, thought about that after sending... :(

> It's interesting to speculate about keeping such catalog entries in
> child tables of pg_class etc that are themselves temp tables. Resolving
> the apparent circularity of this is left as an exercise for the reader.

Well, since it'd be a system table with a fixed OID there could presumably
be a special case in the recovery code for it, though that's pretty fugly
sounding.

Another alternative would be to support global temp tables... I think that
would handle all the complaints of the OP except for the cost of analyze.
I suspect this would be easier to do than creating a special type of temp
table that used tuplestore instead of the full table framework, and it'd
certainly be more general-purpose.

--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
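For readers unfamiliar with the term, "global temp tables" here means roughly
the SQL-standard / other-vendor notion sketched below; this is only an
illustration of the concept, not existing PostgreSQL syntax or behaviour:

-- Hypothetical: the table definition (and its catalog entries) are created
-- once, while each session sees only its own rows, emptied at commit.
CREATE GLOBAL TEMPORARY TABLE session_tmp (
    a INTEGER NOT NULL,
    b INTEGER NOT NULL,
    c TIMESTAMP NOT NULL,
    d INTEGER NOT NULL
) ON COMMIT DELETE ROWS;

The appeal in this thread is that the per-transaction catalog churn, and the
commit-time fsync of catalog pages that goes with it, would disappear.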
1.451 ms = 1.451 milliseconds
1451.0 ms = 1.451 seconds

... so 32.918 ms for a commit seems perhaps reasonable ?

Greg Williamson
DBA
GlobeXplorer LLC