Thread: Out of memory error when doing an update with IN clause


From: Sean Shanny
To all,

The facts:

PostgreSQL 7.4.0 running on BSD 5.1 on a Dell 2650 with 4GB RAM and 5 SCSI
drives in a hardware RAID 0 configuration.  Database size with indexes is
currently 122GB.  The schema for the table in question is at the end of this
email.  The DB has been vacuumed full and analyzed.  Between 8 and 12
million records are added to the table in question each night, and an
analyze of the entire DB is done after the data has been loaded.

The command below was run from psql and failed.  When I removed the last
3 elements in the IN clause (98,105,106) it worked fine (if I removed
only 1 or 2 it still failed).  I then ran the same update statement
again with just those remaining 3 elements and it completed without any
problems.  I am trying to figure out why this would happen; the system
was not out of memory.  Note that I have also run other queries of the
form:

SELECT x FROM f_commerce_impressions WHERE id IN (SELECT some large
number of elements to match with id)

with up to 120k tuples in the sub-select, without problems.

Note that I have also posted another out of memory failure on this list
with subject line:

An out of memory error when doing a vacuum full


Thanks.

--sean



update f_commerce_impressions set servlet_key = 60 where servlet_key in
(68,69,70,71,87,90,94,91,98,105,106);
ERROR:  out of memory
DETAIL:  Failed on request of size 1024.
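
One workaround consistent with what was observed in this thread (the statement succeeds once the IN list is short enough) would be to split the remap into several smaller updates.  This is only a sketch, assuming the goal is simply to remap the keys and that the batches may run inside one transaction:

```sql
-- Hypothetical workaround: run the remap as several smaller updates
-- instead of one eleven-key IN list, so each statement plans a much
-- smaller multi-arm index scan.
BEGIN;
UPDATE f_commerce_impressions SET servlet_key = 60
  WHERE servlet_key IN (68,69,70,71);
UPDATE f_commerce_impressions SET servlet_key = 60
  WHERE servlet_key IN (87,90,94,91);
UPDATE f_commerce_impressions SET servlet_key = 60
  WHERE servlet_key IN (98,105,106);
COMMIT;
```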




 \d f_commerce_impressions
                 Table "public.f_commerce_impressions"
       Column       |  Type   |               Modifiers
--------------------+---------+----------------------------------------
 id                 | integer | not null
 page_view_key      | integer | not null
 content_key        | integer | not null
 provider_key       | integer | not null
 is_match           | boolean | not null
 date_key           | integer | not null
 time_key           | integer | not null
 area               | text    | not null
 slot               | integer | not null
 cpc                | integer | not null
 servlet_key        | integer | not null
 web_server_name    | text    | not null default 'Not Available'::text
 job_control_number | integer | not null
Indexes:
    "f_commerce_impressions_pkey" primary key, btree (id)
    "idx_commerce_impressions_date_dec_2003" btree (date_key) WHERE
((date_key >= 335) AND (date_key <= 365))
    "idx_commerce_impressions_date_nov_2003" btree (date_key) WHERE
((date_key >= 304) AND (date_key <= 334))
    "idx_commerce_impressions_page_view" btree (page_view_key)
    "idx_commerce_impressions_servlet" btree (servlet_key)



Re: Out of memory error when doing an update with IN clause

From: Tom Lane
Sean Shanny <shannyconsulting@earthlink.net> writes:
> update f_commerce_impressions set servlet_key = 60 where servlet_key in
> (68,69,70,71,87,90,94,91,98,105,106);
> ERROR:  out of memory

How many rows will this try to update?  Do you have any triggers or
foreign keys in this table?  I'm wondering if the list of pending
trigger events could be the problem ...
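
A quick way to answer the row-count question from psql, before running the update itself, is to count the matches first (a sketch against the same predicate):

```sql
-- Count how many rows the UPDATE would touch; a very large count here
-- would point at per-row pending work (e.g. trigger events) as the
-- source of memory growth.
SELECT count(*)
FROM f_commerce_impressions
WHERE servlet_key IN (68,69,70,71,87,90,94,91,98,105,106);
```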

            regards, tom lane

Re: Out of memory error when doing an update with IN clause

From: Sean Shanny
Tom,

There are no FKs or triggers on this or any of the tables in our
warehouse schema.  Also, I should have mentioned that this update will
affect 0 rows, as these values do not exist in this table.  We have a
dimension table named d_servlet that holds servlet names and ids.
This table is shared amongst several fact tables, including the one in
question.  This update was to ensure that the changes in the d_servlet
table would be reflected in f_commerce_impressions; it turns out that
the values did not exist in the table.

Here is the output from /usr/local/pgsql/data/serverlog when this fails:


TopMemoryContext: 40960 total in 4 blocks; 12920 free (25 chunks); 28040
used
TopTransactionContext: 8192 total in 1 blocks; 8136 free (0 chunks); 56 used
DeferredTriggerXact: 0 total in 0 blocks; 0 free (0 chunks); 0 used
MessageContext: 57344 total in 3 blocks; 9000 free (1 chunks); 48344 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 1024 total in 1 blocks; 936 free (0 chunks); 88 used
ExecutorState: 24576 total in 2 blocks; 5008 free (8 chunks); 19568 used
DynaHashTable: 534773784 total in 65 blocks; 31488 free (255 chunks);
534742296 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
CacheMemoryContext: 1040384 total in 7 blocks; 9504 free (1 chunks);
1030880 used
idx_commerce_impressions_servlet: 1024 total in 1 blocks; 640 free (0
chunks); 384 used
idx_commerce_impressions_page_view: 1024 total in 1 blocks; 640 free (0
chunks); 384 used
idx_commerce_impressions_date_dec_2003: 1024 total in 1 blocks; 640 free
(0 chunks); 384 used
idx_commerce_impressions_date_nov_2003: 1024 total in 1 blocks; 640 free
(0 chunks); 384 used
f_commerce_impressions_pkey: 1024 total in 1 blocks; 640 free (0
chunks); 384 used
idx_pageviews_content: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_pageviews_content: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_description_o_c_o_index: 2048 total in 1 blocks; 768 free (0 chunks);
1280 used
pg_depend_depender_index: 2048 total in 1 blocks; 768 free (0 chunks);
1280 used
pg_depend_reference_index: 2048 total in 1 blocks; 768 free (0 chunks);
1280 used
pg_attrdef_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_pageviews_servlet: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_pageviews_session: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_pageviews_referring_servlet: 1024 total in 1 blocks; 640 free (0
chunks); 384 used
idx_pageviews_date_dec_2003: 1024 total in 1 blocks; 640 free (0
chunks); 384 used
idx_pageviews_date_nov_2003: 1024 total in 1 blocks; 640 free (0
chunks); 384 used
f_pageviews_pkey: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_index_indrelid_index: 1024 total in 1 blocks; 640 free (0 chunks);
384 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_amop_opc_strategy_index: 1024 total in 1 blocks; 320 free (0 chunks);
704 used
pg_shadow_usename_index: 1024 total in 1 blocks; 640 free (0 chunks);
384 used
pg_amop_opr_opc_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_conversion_oid_index: 1024 total in 1 blocks; 640 free (0 chunks);
384 used
pg_language_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384
used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_shadow_usesysid_index: 1024 total in 1 blocks; 640 free (0 chunks);
384 used
pg_cast_source_target_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 640 free (0 chunks);
384 used
pg_conversion_default_index: 2048 total in 1 blocks; 704 free (0
chunks); 1344 used
pg_class_relname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks);
704 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 640 free (0 chunks);
384 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_language_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks);
704 used
pg_group_sysid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_namespace_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384
used
pg_proc_proname_args_nsp_index: 2048 total in 1 blocks; 704 free (0
chunks); 1344 used
pg_opclass_am_name_nsp_index: 2048 total in 1 blocks; 768 free (0
chunks); 1280 used
pg_group_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_proc_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_operator_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_amproc_opc_procnum_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 640 free (0 chunks);
384 used
pg_operator_oprname_l_r_n_index: 2048 total in 1 blocks; 704 free (0
chunks); 1344 used
pg_opclass_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_type_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_class_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
MdSmgr: 8192 total in 1 blocks; 5976 free (18 chunks); 2216 used
DynaHash: 8192 total in 1 blocks; 6912 free (0 chunks); 1280 used
DynaHashTable: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used
DynaHashTable: 8192 total in 1 blocks; 5080 free (0 chunks); 3112 used
DynaHashTable: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used
DynaHashTable: 8192 total in 1 blocks; 1984 free (0 chunks); 6208 used
DynaHashTable: 8192 total in 1 blocks; 3520 free (0 chunks); 4672 used
DynaHashTable: 24576 total in 2 blocks; 13240 free (4 chunks); 11336 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
ERROR:  out of memory
DETAIL:  Failed on request of size 1024.

Thanks

--sean







Re: Out of memory error when doing an update with IN clause

From: Tom Lane
Sean Shanny <shannyconsulting@earthlink.net> writes:
> There are no FK's or triggers on this or any of the tables in our
> warehouse schema.  Also I should have mentioned that this update will
> produce 0 rows as these values do not exist in this table.

Hm, that makes no sense at all ...

> Here is output from the /usr/local/pgsql/data/serverlog when this fails:
> ...
> DynaHashTable: 534773784 total in 65 blocks; 31488 free (255 chunks);
> 534742296 used

Okay, so here's the problem: this hash table has expanded to 500+Mb which
is enough to overflow your ulimit setting.  Some digging in the source
code shows only two candidates for such a hash table: a tuple hash table
used for grouping/aggregating, which doesn't seem likely for this query,
or a tuple-pointer hash table used for detecting already-visited tuples
in a multiple index scan.

Could we see the EXPLAIN output (no ANALYZE, since it would fail) for
the problem query?  That should tell us which of these possibilities
it is.

            regards, tom lane

Re: Out of memory error when doing an update with IN clause

From: Sean Shanny
Tom,

As you can see I had to reduce the number of arguments in the IN clause
to even get the explain.

explain update f_commerce_impressions set servlet_key = 60 where
servlet_key in (68,69,70,71,87,90,94);

                                       
                                  QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using idx_commerce_impressions_servlet,
idx_commerce_impressions_servlet, idx_commerce_impressions_servlet,
idx_commerce_impressions_servlet, idx_commerce_impressions_servlet,
idx_commerce_impressions_servlet, idx_commerce_impressions_servlet on
f_commerce_impressions  (cost=0.00..1996704.34 rows=62287970 width=59)
   Index Cond: ((servlet_key = 68) OR (servlet_key = 69) OR (servlet_key
= 70) OR (servlet_key = 71) OR (servlet_key = 87) OR (servlet_key = 90)
OR (servlet_key = 94))
(2 rows)
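
Note that the plan above names idx_commerce_impressions_servlet once per value: the IN list is expanded into OR arms, each scanned separately, which is where the already-visited-tuple hash table Tom described comes into play.  For comparison, a hypothetical single-value form should plan as a plain one-arm index scan with no cross-arm duplicate detection:

```sql
-- Single-key form for comparison (sketch, not run against this
-- database): one index arm, so no tuple-pointer hash table is needed
-- to deduplicate rows visited by multiple arms.
EXPLAIN UPDATE f_commerce_impressions SET servlet_key = 60
  WHERE servlet_key = 68;
```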




Re: Out of memory error when doing an update with IN clause

From: Tom Lane
Sean Shanny <shannyconsulting@earthlink.net> writes:
> As you can see I had to reduce the number of arguments in the IN clause
> to even get the explain.

You mean you get an out-of-memory error just from EXPLAIN (without
ANALYZE)??  That makes even less sense ... the hash table we identified
before should not be created or filled during EXPLAIN.

            regards, tom lane

Re: Out of memory error when doing an update with IN clause

From: Sean Shanny
Tom,

I run this:

explain update f_commerce_impressions set servlet_key = 60 where
servlet_key in (68,69,70,71,87,90,94,91,98,105,106);
ERROR:  out of memory
DETAIL:  Failed on request of size 1024.


I get this in the server log:

TopMemoryContext: 32768 total in 3 blocks; 6376 free (4 chunks); 26392 used
TopTransactionContext: 8192 total in 1 blocks; 8136 free (0 chunks); 56 used
DeferredTriggerXact: 0 total in 0 blocks; 0 free (0 chunks); 0 used
MessageContext: 57344 total in 3 blocks; 28760 free (1 chunks); 28584 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 8192 total in 1 blocks; 3936 free (0 chunks); 4256 used
PortalHeapMemory: 23552 total in 5 blocks; 2888 free (0 chunks); 20664 used
ExecutorState: 24576 total in 2 blocks; 5032 free (8 chunks); 19544 used
DynaHashTable: 534773784 total in 65 blocks; 31488 free (255 chunks);
534742296 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
CacheMemoryContext: 516096 total in 6 blocks; 205344 free (1 chunks);
310752 used
idx_commerce_impressions_servlet: 1024 total in 1 blocks; 640 free (0
chunks); 384 used
idx_commerce_impressions_page_view: 1024 total in 1 blocks; 640 free (0
chunks); 384 used
idx_commerce_impressions_date_dec_2003: 1024 total in 1 blocks; 640 free
(0 chunks); 384 used
idx_commerce_impressions_date_nov_2003: 1024 total in 1 blocks; 640 free
(0 chunks); 384 used
f_commerce_impressions_pkey: 1024 total in 1 blocks; 640 free (0
chunks); 384 used
pg_index_indrelid_index: 1024 total in 1 blocks; 640 free (0 chunks);
384 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_amop_opc_strategy_index: 1024 total in 1 blocks; 320 free (0 chunks);
704 used
pg_shadow_usename_index: 1024 total in 1 blocks; 640 free (0 chunks);
384 used
pg_amop_opr_opc_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_conversion_oid_index: 1024 total in 1 blocks; 640 free (0 chunks);
384 used
pg_language_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384
used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_shadow_usesysid_index: 1024 total in 1 blocks; 640 free (0 chunks);
384 used
pg_cast_source_target_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 640 free (0 chunks);
384 used
pg_conversion_default_index: 2048 total in 1 blocks; 704 free (0
chunks); 1344 used
pg_class_relname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks);
704 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 640 free (0 chunks);
384 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_language_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks);
704 used
pg_group_sysid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_namespace_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384
used
pg_proc_proname_args_nsp_index: 2048 total in 1 blocks; 704 free (0
chunks); 1344 used
pg_opclass_am_name_nsp_index: 2048 total in 1 blocks; 768 free (0
chunks); 1280 used
pg_group_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_proc_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_operator_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_amproc_opc_procnum_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 640 free (0 chunks);
384 used
pg_operator_oprname_l_r_n_index: 2048 total in 1 blocks; 704 free (0
chunks); 1344 used
pg_opclass_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_type_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 320 free (0
chunks); 704 used
pg_class_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
MdSmgr: 8192 total in 1 blocks; 6120 free (0 chunks); 2072 used
DynaHash: 8192 total in 1 blocks; 7064 free (0 chunks); 1128 used
DynaHashTable: 8192 total in 1 blocks; 5080 free (0 chunks); 3112 used
DynaHashTable: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used
DynaHashTable: 8192 total in 1 blocks; 3016 free (0 chunks); 5176 used
DynaHashTable: 8192 total in 1 blocks; 4040 free (0 chunks); 4152 used
DynaHashTable: 24576 total in 2 blocks; 13240 free (4 chunks); 11336 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
ERROR:  out of memory
DETAIL:  Failed on request of size 1024.




Re: Out of memory error when doing an update with IN clause

From: Tom Lane
Sean Shanny <shannyconsulting@earthlink.net> writes:
> I run this:

> explain update f_commerce_impressions set servlet_key = 60 where
> servlet_key in (68,69,70,71,87,90,94,91,98,105,106);
> ERROR:  out of memory
> DETAIL:  Failed on request of size 1024.

Well, I have to confess to total bafflement.  AFAICS the overflowing
hash table must be the duplicate-tuple hash table that nodeIndexscan.c
sets up --- but that table shouldn't get any entries loaded into it
if you just do EXPLAIN with no ANALYZE.  Furthermore, it should only
get loaded with entries for tuples that match the WHERE clause, and
you said earlier that there are no rows with these servlet_key values.
The code involved is all new in 7.4, so finding a bug in it wouldn't
surprise me much, but I can't see how this could be happening.

It would help if you could rebuild with --enable-debug (if that wasn't
on already) and get a stack trace from the errfinish() call.  Or, is
there any chance I could get access to your machine and look at the
problem for myself?

            regards, tom lane

Re: Out of memory error when doing an update with IN clause

From: Sean Shanny
Tom,

We can give you access to the machine, Tom; we just need to know what
sort of access you require.  Since I can't correspond with you directly
via email due to the earthlink filter you have on, could you send me
another email address privately, or a phone number, or a smoke signal,
so we can exchange the info you need?

Thanks.

--sean
