Re: [PERFORM] Big IN() clauses etc : feature proposal - Mailing list pgsql-hackers

From: PFC
Subject: Re: [PERFORM] Big IN() clauses etc : feature proposal
Date:
Msg-id: op.s9eot5xvcigqcu@apollo13
In response to: Re: [PERFORM] Big IN() clauses etc : feature proposal ("Jim C. Nasby" <jnasby@pervasive.com>)
List: pgsql-hackers

>> > 0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id
>> > DESC LIMIT 20
>> > 0.443 ms ANALYZE tmp
>> > 0.365 ms SELECT * FROM tmp
>> > 0.310 ms DROP TABLE tmp
>> > 32.918 ms COMMIT

>> The 32 ms for commit can hardly be catalog related. It seems the
>> file is fsynced before it is dropped.
>
> I'd hope that wasn't what's happening... is the backend smart enough to
> know not to fsync anything involved with the temp table? ISTM that that
> transaction shouldn't actually be creating any WAL traffic at all.
> Though on the other hand there's no reason that DROP should be in the
> transaction at all; maybe that's gumming things up during the commit.

    I included the DROP to make it clear that the time was spent in
COMMITting, not in DROPping the table.
    Also, you can't combine CREATE TEMP TABLE AS SELECT ... with ON COMMIT
DROP; you have to CREATE the table first and then INSERT into it, as
sketched below.
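    For instance, a minimal sketch of the two-step version (bookmarks and
annonce_id are the table and column from the timings quoted above):

        BEGIN;
        -- ON COMMIT DROP is accepted here, but not on CREATE TABLE AS,
        -- so create the empty table first...
        CREATE TEMP TABLE tmp (LIKE bookmarks) ON COMMIT DROP;
        -- ...then fill it with a separate INSERT
        INSERT INTO tmp
            SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;
        SELECT * FROM tmp;
        COMMIT;  -- the temp table vanishes here, no explicit DROP needed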
    With an ON COMMIT DROP temp table, the overall timings are the same
whether or not it is dropped explicitly before commit: it is always the
COMMIT which takes all the milliseconds.

    I still bet on system catalog updates being the main cause of the time
spent in COMMIT, since running ANALYZE changes that time.
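    A quick way to test that bet (a sketch; \timing is psql's built-in
timer, and the transaction is the one from above): run it once with and
once without the ANALYZE, and compare the COMMIT timings.

        \timing
        BEGIN;
        CREATE TEMP TABLE tmp (LIKE bookmarks) ON COMMIT DROP;
        INSERT INTO tmp
            SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;
        -- ANALYZE writes rows into the pg_statistic system catalog;
        -- comment it out on the second run
        ANALYZE tmp;
        COMMIT;  -- if this is slower with the ANALYZE, catalog updates
                 -- are a likely suspect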
