Re: exceptionally large UPDATE - Mailing list pgsql-general

From Vick Khera
Subject Re: exceptionally large UPDATE
Date
Msg-id AANLkTi==_bprDq-yhYCuRBUuESuAh0nHAFom4bvpFuhM@mail.gmail.com
In response to exceptionally large UPDATE  (Ivan Sergio Borgonovo <mail@webthatworks.it>)
Responses Re: exceptionally large UPDATE  (Ivan Sergio Borgonovo <mail@webthatworks.it>)
List pgsql-general
On Wed, Oct 27, 2010 at 10:26 PM, Ivan Sergio Borgonovo
<mail@webthatworks.it> wrote:
> I'm increasing maintenance_work_mem to 180MB just before recreating
> the gin index. Should it be more?
>

You can do this on a per-connection basis; no need to alter the config
file.  At the psql prompt (or via your script) just execute the query

SET maintenance_work_mem = '180MB';

If you've got the RAM, just use more of it.  I'd suspect your server
has plenty of it, so use it!  When I reindex, I often give it 1 or 2
GB.  If you can fit the whole table into that much space, it's going
to go really, really fast.
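
For example, a per-session rebuild might look like this (the index
name is made up here; substitute your own):

  SET maintenance_work_mem = '1GB';
  REINDEX INDEX my_gin_index;
  RESET maintenance_work_mem;

The SET lasts only for your connection, so other sessions keep the
server default.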

Also, if you are going to update that many rows, you may want to
increase your checkpoint_segments.  Raising it helps a *lot* when
you're loading big data, so I'd expect it to help big updates too.  I
suppose it depends on how wide your rows are: 1.5 million rows is
really not all that big unless you have lots and lots of text columns.
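
Note that unlike maintenance_work_mem, checkpoint_segments can't be
set per session; it goes in postgresql.conf and takes effect on a
reload.  A sketch (the value here is just an illustration; each
segment is 16MB of WAL):

  # postgresql.conf
  checkpoint_segments = 32    # default is 3

then reload with "pg_ctl reload" or by running SELECT
pg_reload_conf(); from psql.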
