Already done in an earlier post, Kevin - I have included it again below. As
you can see, it's pretty well what you would expect: an index scan plus a
filter.
One note: updates where no rows qualify run appreciably faster than those
that do. That is, the update itself appears to consume a good deal of the
processing time, possibly because of the six indexes on the table.
UPDATE mdx_core.audit_impt
SET source_table = 'mdx_import.'||impt_name
WHERE audit_impt_id >= 319400001 AND audit_impt_id <= 319400010
AND coalesce(source_table, '') = ''
Index Scan using audit_impt_pkey on audit_impt
  (cost=0.00..92.63 rows=1 width=608) (actual time=0.081..0.244 rows=10 loops=1)
  Index Cond: ((audit_impt_id >= 319400001) AND (audit_impt_id <= 319400010))
  Filter: ((COALESCE(source_table, ''::character varying))::text = ''::text)
Total runtime: 372.141 ms
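(For anyone reproducing this: since EXPLAIN ANALYZE actually executes the
UPDATE, the plan above can be captured without keeping the change by wrapping
the statement in a transaction and rolling back - a sketch, re-using the same
range as above:)

BEGIN;
EXPLAIN ANALYZE
UPDATE mdx_core.audit_impt
SET source_table = 'mdx_import.'||impt_name
WHERE audit_impt_id >= 319400001 AND audit_impt_id <= 319400010
  AND coalesce(source_table, '') = '';
ROLLBACK;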
""Kevin Grittner"" <Kevin.Grittner@wicourts.gov> wrote in message
news:4B462563020000250002DFA3@gw.wicourts.gov...
> "Carlo Stonebanks" <stonec.register@sympatico.ca> wrote:
>
>> An interesting idea, if I can confirm that the performance problem
>> is because of the WHERE clause, not the UPDATE.
>
> If you could show EXPLAIN ANALYZE output for one iteration, with
> related queries and maybe more info on the environment, it would
> take most of the guesswork out of things.
>
> -Kevin
>