Re: Massive table (500M rows) update nightmare - Mailing list pgsql-performance

From: Carlo Stonebanks
Subject: Re: Massive table (500M rows) update nightmare
Msg-id: hi56qq$1igu$1@news.hub.org
In response to: Re: Massive table (500M rows) update nightmare (Scott Marlowe <scott.marlowe@gmail.com>)
List: pgsql-performance
> Got an explain analyze of the delete query?

UPDATE mdx_core.audit_impt
SET source_table = 'mdx_import.'||impt_name
WHERE audit_impt_id >= 319400001 AND audit_impt_id <= 319400010
AND coalesce(source_table, '') = ''

Index Scan using audit_impt_pkey on audit_impt  (cost=0.00..92.63 rows=1 width=608) (actual time=0.081..0.244 rows=10 loops=1)
  Index Cond: ((audit_impt_id >= 319400001) AND (audit_impt_id <= 319400010))
  Filter: ((COALESCE(source_table, ''::character varying))::text = ''::text)
Total runtime: 372.141 ms

Hard to tell how reliable these numbers are, because the caches are likely
already warm for this WHERE clause - in particular, SELECT queries were run
beforehand to test whether the rows actually qualify for the update.
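
One way to quantify the cache effect (a sketch only - the BUFFERS option
assumes PostgreSQL 9.0 or later, which may be newer than what this thread is
running) is to ask EXPLAIN for buffer statistics: "shared hit" blocks came
from the buffer cache, "read" blocks had to come from the OS or disk.
Wrapping the UPDATE in a rolled-back transaction keeps the test from
actually changing rows:

BEGIN;
-- BUFFERS adds per-node counts of shared buffer hits vs. reads;
-- a fully cached run shows hits and no reads.
EXPLAIN (ANALYZE, BUFFERS)
UPDATE mdx_core.audit_impt
SET source_table = 'mdx_import.'||impt_name
WHERE audit_impt_id >= 319400001 AND audit_impt_id <= 319400010
  AND coalesce(source_table, '') = '';
ROLLBACK;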

The coalesce may be slowing things down slightly, but is a necessary evil.
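
If that filter ever did become the bottleneck, one hypothetical fix (a
sketch, not something tested against this table; the index name is
invented) would be a partial index whose predicate matches the UPDATE's
WHERE clause, so the planner can reach only the rows still awaiting an
update:

-- Hypothetical partial index: indexes only rows whose source_table
-- is still NULL or empty. The predicate must match the query's
-- WHERE clause for the planner to consider the index.
CREATE INDEX audit_impt_pending_idx
ON mdx_core.audit_impt (audit_impt_id)
WHERE coalesce(source_table, '') = '';

Building such an index on a 500M-row table is itself expensive, so it would
only pay off if the batched update makes many passes over the table.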

