Re: Reducing memory usage of insert into select operations? - Mailing list pgsql-general

From: Tom Lane
Subject: Re: Reducing memory usage of insert into select operations?
Msg-id: 9980.1216414512@sss.pgh.pa.us
In response to: Re: Reducing memory usage of insert into select operations?  ("Francisco Reyes" <lists@stringsutils.com>)
Responses: Re: Reducing memory usage of insert into select operations?
List: pgsql-general
"Francisco Reyes" <lists@stringsutils.com> writes:
> On 3:55 pm 07/18/08 Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> AfterTriggerEvents: 10553909248 total in 1268 blocks; 20432 free (6
>>> chunks); 10553888816 used
>>
>> Well, that's definitely your problem ...

> So I need to remove the foreign constraints?

Either that or do the update in sections.  But working through umpteen
gig of pending trigger events would take forever anyway --- dropping
and re-adding the FK constraint is almost certainly a better way.
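
For concreteness, a minimal sketch of that approach, with hypothetical
table/constraint names (target, target_ref_fkey, staging, parent) standing
in for yours:

    BEGIN;

    -- Drop the FK so the bulk insert doesn't queue one AFTER trigger
    -- event per inserted row (that queue is the AfterTriggerEvents
    -- context that blew up above).
    ALTER TABLE target DROP CONSTRAINT target_ref_fkey;

    INSERT INTO target (id, ref, payload)
    SELECT id, ref, payload FROM staging;

    -- Re-adding the constraint checks all the rows in one pass instead
    -- of queueing per-row trigger events.
    ALTER TABLE target
        ADD CONSTRAINT target_ref_fkey
        FOREIGN KEY (ref) REFERENCES parent (id);

    COMMIT;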

>>> HashBatchContext: 415227952 total in 59 blocks; 6589744
>>> free (5 chunks); 408638208 used
>>
>> although these numbers seem way outta line too.  What did you say you
>> had work_mem set to?

> Initially, on the first crash, it was 256MB. I believe at the time of the
> crash I got the dump for, it was down to 64MB or 8MB.

Something fishy about that.  The max size of a HashBatchContext should
be work_mem, more or less (the accounting isn't perfectly accurate
I think, but it's not off by an order of magnitude).
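
For what it's worth, you can check what the session actually had in effect,
and pin it explicitly before re-running the query:

    SHOW work_mem;            -- current effective value for this session
    SET work_mem = '64MB';    -- override for this session only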

The only thing I can think of is that you had a huge number of rows with
all the same hash value, so that there wasn't any way to split the batch
into smaller sections.  What are the join keys exactly in this query,
and what can you tell us about their data distributions?
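
A quick way to look at the distribution, sketched with hypothetical names
(staging.ref as the join key; substitute the real ones):

    -- Most common join-key values; one value with millions of rows
    -- means the hash batch containing it can't be split any further.
    SELECT ref, count(*) AS n
    FROM staging
    GROUP BY ref
    ORDER BY n DESC
    LIMIT 10;

    -- Or see what ANALYZE has already recorded about the column:
    SELECT n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 'staging' AND attname = 'ref';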

            regards, tom lane
