UPDATE many records - Mailing list pgsql-general

From Israel Brewster
Subject UPDATE many records
Msg-id 97518FCA-F286-4A6A-803F-46A109196C71@alaska.edu
Responses Re: UPDATE many records
List pgsql-general
Thanks to a change in historical data, I need to update a large number of records (around 50 million). The update itself is straightforward: I can just issue an "UPDATE table_name SET changed_field=new_value();" (yes, new_value is the result of a stored procedure, if that makes a difference) via psql, and it should work. However, given the large number of records, this command will obviously take a while, and if anything goes wrong during the update (one bad value in row 45 million, a lost connection, etc.), all the work done so far will be lost due to the transactional nature of such commands (unless I am missing something).
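For context, here is a minimal sketch of the kind of batched approach I have been considering, so each chunk commits independently and a failure only loses the current batch. This assumes an integer primary key named "id" (hypothetical, not necessarily my actual schema) and PostgreSQL 11 or later, where transaction control is allowed inside a DO block run outside an explicit transaction:

```sql
-- Hedged sketch: update in key-range batches, committing after each,
-- so a failure at row 45 million does not roll back earlier batches.
-- Assumes integer PK "id" and PostgreSQL 11+ (COMMIT inside DO blocks).
DO $$
DECLARE
    batch_size integer := 100000;   -- rows of id-space per batch
    max_id     bigint;
    start_id   bigint := 0;
BEGIN
    SELECT max(id) INTO max_id FROM table_name;
    WHILE start_id <= max_id LOOP
        UPDATE table_name
           SET changed_field = new_value()
         WHERE id > start_id AND id <= start_id + batch_size;
        COMMIT;   -- work done so far survives a later failure
        start_id := start_id + batch_size;
    END LOOP;
END $$;
```

A bad value in one batch would abort only that batch, and the loop could be restarted from the failing range after fixing the data.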

Given that each row update is completely independent of any other row, I have the following questions:

1) Is there any way to set the command such that each row change is committed as it is calculated?
2) Is there some way to run this command in parallel in order to better utilize multiple processor cores, other than manually breaking the data into chunks and running a separate psql/update process for each chunk? Honestly, manual parallelizing wouldn’t be too bad (there are a number of logical segregations I can apply), I’m just wondering if there is a more automatic option.
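To illustrate question 2, the manual chunking I have in mind looks roughly like the following, assuming (hypothetically) an integer primary key "id", a database named "mydb", and that new_value() is safe to run concurrently on disjoint rows:

```shell
#!/bin/sh
# Hedged sketch of manual parallelism: split the id-space into
# disjoint ranges and run one background psql per range.
CHUNK=10000000   # rows of id-space per worker
WORKERS=5
for i in $(seq 0 $((WORKERS - 1))); do
    LO=$((i * CHUNK))
    HI=$(((i + 1) * CHUNK))
    psql mydb -c "UPDATE table_name SET changed_field = new_value() \
                  WHERE id > $LO AND id <= $HI;" &
done
wait   # block until every worker finishes
```

Each worker is its own transaction, so this also gets partial-commit behavior per chunk, but it requires me to pick the ranges by hand, which is what I was hoping to avoid.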
---
Israel Brewster
Software Engineer
Alaska Volcano Observatory 
Geophysical Institute - UAF 
2156 Koyukuk Drive 
Fairbanks AK 99775-7320
Work: 907-474-5172
cell:  907-328-9145
