From: Scott Marlowe
Subject: Re: [PERFORM] Re: [PERFORM] Postgresql update op is very very slow
Msg-id: dcc563d10806260902o581ec3afud773867c1d0d2c69@mail.gmail.com
In response to: Re: Re: [PERFORM] Postgresql update op is very very slow (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-performance

2008/6/26 Tom Lane <tgl@sss.pgh.pa.us>:
> "jay" <jackem.mojx@alibaba-inc.com> writes:
>>       I know the problem: there are about 35 million rows, which
>> take about 12 GB of disk space, and checkpoint_segments is 64. The
>> update runs in a single transaction, which fills the checkpoint
>> segments quickly and forces frequent checkpoints; checkpoints cost a
>> lot of resources, so the update gets slower and slower, and the
>> bgwriter won't write the pages because the transaction hasn't
>> committed yet.
>> Creating a new table may be a quick solution, but it's not
>> appropriate in some cases.
>>       If we could commit every 1000 rows per round, it might resolve
>> the problem.
>
> No, that's utterly unrelated.  Transaction boundaries have nothing to do
> with checkpoints.

True.  But if you update 10,000 rows at a time and vacuum between
batches, you can keep the bloat to something reasonable.
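
Something like the following, as a rough sketch — assuming a
hypothetical table bigtable with an integer primary key id and the
column col being updated.  Each batch runs as its own transaction,
since VACUUM can't run inside one:

    -- Batch 1: update one slice of rows, then reclaim the dead tuples.
    BEGIN;
    UPDATE bigtable SET col = 0
     WHERE id BETWEEN 1 AND 10000
       AND col <> 0;          -- skip rows that already match (see below)
    COMMIT;
    VACUUM bigtable;          -- must run outside a transaction block

    -- Batch 2: the next slice, and so on across all 35 million rows.
    BEGIN;
    UPDATE bigtable SET col = 0
     WHERE id BETWEEN 10001 AND 20000
       AND col <> 0;
    COMMIT;
    VACUUM bigtable;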

On another note, I haven't seen anyone suggest adding the appropriate
where clause to keep from updating rows that already match.  Under
MVCC every updated row creates a new row version even if the value
doesn't change, so skipping those rows is cheap compared to updating
the whole table, even if a large chunk of the rows aren't a match.
i.e.

... set col = 0 where col <> 0;

That should be the first thing you reach for in this situation, if it can help.
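
And if the "create a new table" route jay mentions does fit, a rough
sketch under the same hypothetical names (indexes, constraints, and
permissions would have to be recreated on the new table afterwards):

    BEGIN;
    -- Rewrite the table in one sequential pass instead of updating in place.
    CREATE TABLE bigtable_new AS
        SELECT id, 0 AS col    -- plus the other columns, carried over as-is
          FROM bigtable;
    DROP TABLE bigtable;
    ALTER TABLE bigtable_new RENAME TO bigtable;
    COMMIT;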
