Re: hash_search and out of memory - Mailing list pgsql-hackers

From Hitoshi Harada
Subject Re: hash_search and out of memory
Date
Msg-id CAP7QgmmtEMPDDka=TViHe-Ej5KZUAoJJrJ2W23Zi=3=cAJ_WpA@mail.gmail.com
In response to Re: hash_search and out of memory  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: hash_search and out of memory  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On Thu, Oct 18, 2012 at 8:35 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I wrote:
>> Hitoshi Harada <umi.tanuki@gmail.com> writes:
>>> If OOM happens during expand_table() in hash_search_with_hash_value()
>>> for RelationCacheInsert,
>
> the palloc-based allocator does throw
> errors.  I think that when that was designed, we were thinking that
> palloc-based hash tables would be thrown away anyway after an error,
> but of course that's not true for long-lived tables such as the relcache
> hash table.
>
> I'm not terribly comfortable with trying to use a PG_TRY block to catch
> an OOM error - there are too many ways that could break, and this code
> path is by definition not very testable.  I think moving up the
> expand_table action is probably the best bet.  Will you submit a patch?

Here it is. I factored out the bucket-finding code so that the bucket is
re-calculated after the expansion.

Thanks,
--
Hitoshi Harada

Attachment
