
From: marcin mank
Subject: Re: Massive table (500M rows) update nightmare
Date:
Msg-id: b1b9fac61001071305vf182f3ajff6827f92c943c68@mail.gmail.com
In response to: Massive table (500M rows) update nightmare ("Carlo Stonebanks" <stonec.register@sympatico.ca>)
Responses: Re: Massive table (500M rows) update nightmare
List: pgsql-performance
> every update is a UPDATE ... WHERE id >= x AND id < x+10 and a commit
> is performed after every 1000 updates statement, i.e. every 10000 rows.

What is the rationale behind this? How about doing 10k rows in one
UPDATE, and committing after every statement?
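Something like this (only a sketch; the table name, column names and the
id range are made up here, not taken from your schema):

    BEGIN;
    UPDATE big_table                     -- table name is made up
       SET new_column = source_column    -- placeholder expression
     WHERE id >= 5000000                 -- x
       AND id <  5010000;                -- x + 10000
    COMMIT;

One statement per transaction keeps the per-row commit overhead low while
each transaction still stays reasonably short.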

You could try putting the condition on the ctid column instead, so that
the index on id is not needed and the rows are processed in physical
order. First make sure that newly inserted production data gets the
correct value in the new column, and add 'WHERE new_column IS NULL' to
the conditions. But I have never tried this, so use it at your own risk.
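For illustration only (names are made up again, and I am assuming a
server where range comparisons on ctid are supported and planned
sensibly; older releases may not handle this well, or at all):

    UPDATE big_table
       SET new_column = source_column        -- placeholder expression
     WHERE ctid >= '(0,0)'::tid
       AND ctid <  '(10000,0)'::tid          -- roughly the first 10000 heap pages
       AND new_column IS NULL;               -- skip rows that are already done

You would then walk the '(N,0)' boundaries forward in page-sized steps;
the IS NULL condition keeps you from touching rows that already have the
value.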

Greetings
Marcin Mank
