Robert James wrote on 27.10.2013 20:47:
> I'm using Postgres for data analysis (interactive and batch). I need
> to focus the analysis on a subset of one table, and, for both
> performance and simplicity, have a function which loads that subset
> into another table (DELETE FROM another_table; INSERT INTO
> another_table SELECT ...).
>
> Oddly enough, although the SELECT itself is very quick (< 1 s), the
> DELETE and INSERT can take over a minute! I can't figure out why.
> another_table is simple: it has only 7 fields. Two of those fields
> are indexed, using a simple one field standard index. There are no
> triggers on it.
>
> What is the cause of this behavior? What should I do to make this
> faster? Is there a recommended work around?
>
> (I'm hesitant to drop another_table and recreate it each time, since
> many views depend on it.)
DELETE can be quite slow, especially with a large number of rows: every row is deleted and
WAL-logged individually, and the dead rows still have to be cleaned up by VACUUM afterwards.
If you use TRUNCATE instead, this will be *much* quicker, and it keeps the table definition
(and therefore your dependent views) intact, since only the table's storage is discarded.
As an additional benefit, if you INSERT the rows in the same transaction as the TRUNCATE,
the load can require much less I/O, because WAL logging for it can be skipped (this requires
wal_level = minimal; older releases apply the optimization to COPY, newer ones to plain
INSERT as well).