Re: Long Running Update - My Solution - Mailing list pgsql-performance

From Robert Klemme
Subject Re: Long Running Update - My Solution
Date
Msg-id BANLkTim++zcMMhQwQ77U4c3_QgXJL4Sapw@mail.gmail.com
In response to Re: Long Running Update - My Solution  (tv@fuzzy.cz)
List pgsql-performance
On Mon, Jun 27, 2011 at 5:37 PM,  <tv@fuzzy.cz> wrote:
>> The mystery remains, for me: why updating 100,000 records could complete
>> in as quickly as 5 seconds, whereas an attempt to update a million
>> records was still running after 25 minutes before we killed it?
>
> Hi, there are a lot of possible causes. Usually this is caused by a plan
> change - imagine, for example, that you need to sort a table and the amount
> of data just fits into work_mem, so that it can be sorted in memory. If
> you need to perform the same query with 10x the data, you'll have to sort
> the data on disk, which is of course much slower.
>
> And there are other such problems ...
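
Whether a sort spilled to disk is easy to check with EXPLAIN ANALYZE,
which reports the sort method used; the table and column names below
are only placeholders, not from the original post:

  EXPLAIN ANALYZE
  SELECT * FROM big_table ORDER BY some_column;
  -- data fits in work_mem:     Sort Method: quicksort  Memory: ...kB
  -- data exceeds work_mem:     Sort Method: external merge  Disk: ...kB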

I would rather assume it is one of the "other problems", typically
related to handling the transaction (e.g. checkpoints, WAL, creating
copies of the modified records, and adjusting indexes...).
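
If it is indeed the amount of work done in one transaction, splitting
the UPDATE into smaller batches usually keeps each commit short.  A
rough sketch (table, column and batch size are placeholders, not from
the original post), run repeatedly from the client until it reports
UPDATE 0:

  UPDATE big_table
     SET status = 'done'
   WHERE id IN (SELECT id
                  FROM big_table
                 WHERE status <> 'done'
                 LIMIT 100000);

With autocommit on, each statement is its own transaction, so WAL and
checkpoint activity is spread out instead of piling up in one huge
transaction.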

Kind regards

robert


--
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/
