Re: Out of memory error when doing an update with IN clause - Mailing list pgsql-general

From: Sean Shanny
Subject: Re: Out of memory error when doing an update with IN clause
Msg-id: 3FF07DCC.4070905@earthlink.net
In response to: Re: Out of memory error when doing an update with IN clause (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Out of memory error when doing an update with IN clause
List: pgsql-general
Tom,

As you can see, I had to reduce the number of arguments in the IN clause
just to get the EXPLAIN to run.

explain update f_commerce_impressions set servlet_key = 60 where
servlet_key in (68,69,70,71,87,90,94);

                                       
QUERY PLAN

---------------------------------------------------------------------------
 Index Scan using idx_commerce_impressions_servlet,
idx_commerce_impressions_servlet, idx_commerce_impressions_servlet,
idx_commerce_impressions_servlet, idx_commerce_impressions_servlet,
idx_commerce_impressions_servlet, idx_commerce_impressions_servlet on
f_commerce_impressions  (cost=0.00..1996704.34 rows=62287970 width=59)
   Index Cond: ((servlet_key = 68) OR (servlet_key = 69) OR (servlet_key
= 70) OR (servlet_key = 71) OR (servlet_key = 87) OR (servlet_key = 90)
OR (servlet_key = 94))
(2 rows)
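(The plan shows each value in the IN list becoming its own arm of a multiple index scan OR'd together, which is exactly the case where the visited-tuple hash table comes into play. One possible workaround, my own sketch and not something suggested in the thread, is to rewrite the statement as one UPDATE per key value, so each statement is a plain single index scan with no duplicate-detection bookkeeping:

```sql
-- Hypothetical workaround (a sketch, not from the thread): one UPDATE per
-- key value instead of a 7-way OR index scan, wrapped in one transaction
-- so the change is still all-or-nothing.
BEGIN;
UPDATE f_commerce_impressions SET servlet_key = 60 WHERE servlet_key = 68;
UPDATE f_commerce_impressions SET servlet_key = 60 WHERE servlet_key = 69;
-- ... repeat for 70, 71, 87, 90, and 94 ...
COMMIT;
```

Whether this actually sidesteps the hash-table growth depends on the cause Tom is diagnosing below, but it keeps each scan's memory footprint independent.)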


Tom Lane wrote:

>Sean Shanny <shannyconsulting@earthlink.net> writes:
>
>
>>There are no FK's or triggers on this or any of the tables in our
>>warehouse schema.  Also I should have mentioned that this update will
>>produce 0 rows as these values do not exist in this table.
>>
>>
>
>Hm, that makes no sense at all ...
>
>
>
>>Here is output from the /usr/local/pgsql/data/serverlog when this fails:
>>...
>>DynaHashTable: 534773784 total in 65 blocks; 31488 free (255 chunks);
>>534742296 used
>>
>>
>
>Okay, so here's the problem: this hash table has expanded to 500+Mb which
>is enough to overflow your ulimit setting.  Some digging in the source
>code shows only two candidates for such a hash table: a tuple hash table
>used for grouping/aggregating, which doesn't seem likely for this query,
>or a tuple-pointer hash table used for detecting already-visited tuples
>in a multiple index scan.
>
>Could we see the EXPLAIN output (no ANALYZE, since it would fail) for
>the problem query?  That should tell us which of these possibilities
>it is.
>
>            regards, tom lane
>
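(For scale, a rough back-of-envelope of my own, not from the thread: the planner's row estimate and the hash-table size in the server log are the same order of magnitude, which fits Tom's multiple-index-scan theory.

```sql
-- Rough arithmetic only, with a hypothetical per-entry size: 62,287,970
-- estimated rows at a bare 8 bytes per tuple pointer, before any hash-table
-- overhead, is already close to the ~534 MB DynaHashTable in the log.
SELECT 62287970::bigint * 8 AS approx_pointer_bytes;  -- 498303760 bytes, ~475 MB
```

The few tens of megabytes of difference would plausibly be per-chunk hash-table overhead, though nothing in the thread confirms the exact per-entry size.)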

