Re: UPDATE many records - Mailing list pgsql-general

From Michael Lewis
Subject Re: UPDATE many records
Date
Msg-id CAHOFxGp6akB5MXfah7PrLbU+tCYe6g1Rua_y3e3usw-9vyZqrw@mail.gmail.com
In response to Re: UPDATE many records  (Israel Brewster <ijbrewster@alaska.edu>)
Responses Re: UPDATE many records
List pgsql-general
> I’m thinking it might be worth it to do a “quick” test on 1,000 or so records (or whatever number can run in a minute or so), watching the processor utilization as it runs. That should give me a better feel for where the bottlenecks may be, and how long the entire update process would take. I’m assuming, of course, that the total time would scale more or less linearly with the number of records.

I think that depends on how you identify and limit the update to those 1000 records. If you select them by primary key with specific keys in an array, the scaling should be close to linear, because the WHERE clause adds little to the overall execution time. If you identify the rows with a slow sub-query, you would need to exclude that sub-query's cost from the timing. You can always run EXPLAIN ANALYZE on the UPDATE inside a transaction and roll back rather than commit.
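As a minimal sketch of that test, with a hypothetical table and column names:

BEGIN;

-- Time the update against a specific set of primary keys; the planner
-- can use the primary key index, so the WHERE clause stays cheap.
EXPLAIN (ANALYZE, BUFFERS)
UPDATE my_table
SET some_column = some_column * 2
WHERE id = ANY (ARRAY[1, 2, 3, 4, 5]);

-- Discard the changes so the test run leaves the data untouched.
ROLLBACK;

Keep in mind that EXPLAIN ANALYZE actually executes the UPDATE, so the ROLLBACK is what makes the timing run safe to repeat.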
