On Tue, Jul 11, 2017 at 11:08 PM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:
> Amit Kapila wrote:
>
>> Yes, I also think the same idea can be used; in fact, I mentioned
>> it [1] as soon as you committed that patch. Do we want to do
>> anything at this stage for PG-10? I don't think we should attempt
>> something this late unless people feel this is a show-stopper issue
>> for the usage of hash indexes. If required, I think a separate
>> function can be provided to allow users to perform the squeeze
>> operation.
>
> Sorry, I have no idea how critical this squeeze thing is for the
> newfangled hash indexes, so I cannot comment on that. Does this make
> the indexes unusable in some way under some circumstances?
>
It seems so. Basically, in the case of a large number of duplicates,
we hit the maximum number of overflow pages. There is always a
theoretical possibility of hitting that limit, but it could also be
that we are not freeing the existing unused overflow pages, due to
which the count keeps growing until it hits the limit. I have
requested upthread that someone verify whether that is happening in
this case, and I am still waiting for confirmation. The squeeze
operation does free such unused overflow pages after cleaning them.
As it is a costly operation and needs a cleanup lock, we currently
perform it only during VACUUM and during the next split from a bucket
that can have redundant overflow pages.
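
For anyone who wants to check whether an index is accumulating
overflow pages in this way, the pgstattuple extension in PG-10
provides pgstathashindex(); a minimal sketch, using a placeholder
index name my_hash_idx:

    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    -- An overflow_pages count that keeps growing across runs,
    -- together with a high free_percent, suggests empty overflow
    -- pages that the squeeze has not yet reclaimed.
    SELECT bucket_pages, overflow_pages, unused_pages, free_percent
      FROM pgstathashindex('my_hash_idx');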
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com