>
> I didn't investigate your particular issue, but generally speaking, creating a
> table, even a temporary one, is an expensive operation.
>
> Note that it's far from being a separate file on the disk. It affects catalog
> tables, shared buffers, all the corresponding locks, etc. If you have indexes
> on a temporary table, it makes the situation even worse. Sooner or later
> VACUUM will happen for your bloated catalog, and this is not fun under
> heavy load.
Yes, creation is expensive, but note that this test case does not create
the temp table for every transaction. It creates it once, on startup of the
connection. So there will be no catalog bloat, as the connections are generally
long-lived. The part that remains expensive (and which I had not expected)
is the truncate that does happen for every transaction.
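The shape of the test case is roughly the following (a minimal sketch with
made-up table and column names; I'm showing the ON COMMIT DELETE ROWS
variant here, which effectively performs a truncate at every commit):

    -- once, on startup of the connection
    CREATE TEMP TABLE staging (id bigint, payload bytea)
        ON COMMIT DELETE ROWS;

    -- then, for every transaction
    BEGIN;
    COPY staging FROM STDIN (FORMAT binary);
    -- ... merge the staged rows into the target table here ...
    COMMIT;  -- ON COMMIT DELETE ROWS truncates the temp table again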
>
> Is there any particular reason why you don't want to simply change the target
> table directly? If you do it in a transaction you are safe.
>
This is a fair point, which I didn't explain in my earlier message. We find that for
various use cases the temp table approach is a really convenient way of interacting
with Postgres. You can easily binary COPY whatever data you want into it, without
worrying too much about the data size, and handle it from there on the db side.
Doing the same without a temp table, you'd have to resort to passing the data as
query parameters, which has complications such as the limit on the number of bound
parameters per query. That approach is certainly workable as well, but the binary
COPY into a temp table is quite convenient.
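For comparison, the parameter-based alternative would look roughly like the
following (a sketch; "target" is a hypothetical table, and the limit comes
from the Bind message of the wire protocol, which uses a 16-bit parameter
count, i.e. at most 65535 bound parameters per statement):

    -- hypothetical target table, two bound parameters per row
    PREPARE ins (bigint, bytea, bigint, bytea) AS
        INSERT INTO target (id, payload) VALUES ($1, $2), ($3, $4);
    EXECUTE ins (1, '\x00', 2, '\x01');
    -- at 2 parameters per row this tops out at 65535 / 2 = 32767 rows
    -- per statement, so larger batches have to be chunked client-side
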
-Floris