Re: Massive table (500M rows) update nightmare - Mailing list pgsql-performance

From Carlo Stonebanks
Subject Re: Massive table (500M rows) update nightmare
Date
Msg-id hi6ho6$17nn$1@news.hub.org
In response to Re: Massive table (500M rows) update nightmare  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
Responses Re: Massive table (500M rows) update nightmare
List pgsql-performance
Already done in an earlier post, Kevin - I have included it again below. As
you can see, it's pretty well what you would expect: an index scan plus a
filter.

One note: updates where no rows qualify run appreciably faster than those
that do. That is, the update itself appears to consume a good deal of the
processing time. This may be due to the six indexes on the table.

UPDATE mdx_core.audit_impt
SET source_table = 'mdx_import.'||impt_name
WHERE audit_impt_id >= 319400001 AND audit_impt_id <= 319400010
AND coalesce(source_table, '') = ''

Index Scan using audit_impt_pkey on audit_impt  (cost=0.00..92.63 rows=1 width=608) (actual time=0.081..0.244 rows=10 loops=1)
  Index Cond: ((audit_impt_id >= 319400001) AND (audit_impt_id <= 319400010))
  Filter: ((COALESCE(source_table, ''::character varying))::text = ''::text)
Total runtime: 372.141 ms




"Kevin Grittner" <Kevin.Grittner@wicourts.gov> wrote in message
news:4B462563020000250002DFA3@gw.wicourts.gov...
> "Carlo Stonebanks" <stonec.register@sympatico.ca> wrote:
>
>> An interesting idea, if I can confirm that the performance problem
>> is because of the WHERE clause, not the UPDATE.
>
> If you could show EXPLAIN ANALYZE output for one iteration, with
> related queries and maybe more info on the environment, it would
> take most of the guesswork out of things.
>
> -Kevin
>

