Re: Massive table (500M rows) update nightmare - Mailing list pgsql-performance

From: Kevin Grittner
Subject: Re: Massive table (500M rows) update nightmare
Date:
Msg-id: 4B46E8A7020000250002E012@gw.wicourts.gov
In response to: Re: Massive table (500M rows) update nightmare ("Carlo Stonebanks" <stonec.register@sympatico.ca>)
Responses: Re: Massive table (500M rows) update nightmare ("Carlo Stonebanks" <stonec.register@sympatico.ca>)
List: pgsql-performance
"Carlo Stonebanks" <stonec.register@sympatico.ca> wrote:

> Already done in an earlier post

Perhaps I misunderstood; I thought that post mentioned that the plan
was one statement in an iteration, and that the cache would have
been primed by a previous query checking whether there were any rows
to update.  If that was the case, it might be worthwhile to look at
the entire flow of an iteration.
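
Just to be sure we're picturing the same shape of loop, here is a
sketch of the pattern I have in mind (the table, columns, and the
:start/:end placeholders are all invented, since the actual code
wasn't posted):

    -- step 1: priming check: anything left to do in this slice?
    SELECT 1
      FROM facts
     WHERE id BETWEEN :start AND :end
       AND status <> 'processed'
     LIMIT 1;

    -- step 2: the UPDATE under discussion, over the same slice
    UPDATE facts
       SET status = 'processed'
     WHERE id BETWEEN :start AND :end
       AND status <> 'processed';

If step 1 runs immediately before step 2, it pulls the affected
pages into cache, so timings taken on the UPDATE alone will look
better than a cold run would.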

Also, if you ever responded with version and configuration
information, I missed it.  The solution to parts of what you
describe would be different in different versions.  In particular,
you might be able to solve checkpoint-related lockup issues and then
improve performance by using bigger batches.  Right now I would be
guessing at what might work for you.
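
For example, if you are on 8.3 or later, these are the checkpoint
settings I would want to see first (the values below are only
placeholders to show the knobs, not recommendations; on versions
before 8.3, checkpoint_completion_target does not exist):

    # postgresql.conf (placeholder values, not recommendations)
    checkpoint_segments = 30            # WAL segments between checkpoints
    checkpoint_timeout = 15min          # maximum time between checkpoints
    checkpoint_completion_target = 0.9  # spread checkpoint I/O out (8.3+)

How big a batch you can afford also interacts with these and with
shared_buffers, which is why the version and configuration matter
before anyone can recommend numbers.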

-Kevin
