On Thu, Jan 2, 2014 at 4:27 AM, Andres Freund <andres@2ndquadrant.com> wrote:
> On 2014-01-01 21:15:46 -0500, Robert Haas wrote:
>> [ sensible reasoning ] However, I'm not sure it's really worth it.
>> I think what people really care about is knowing whether the bitmap
>> lossified or not, and generally how much got lossified. The counts of
>> exact and lossy pages are sufficient for that, without anything
>> additional.
>
> Showing the amount of memory currently required could tell you how soon
> accurate bitmap scans will turn into lossy scans, though. Which is not a
> bad thing to know; some kinds of scans (e.g. tsearch over expression
> indexes, postgis) can get ridiculously slow once lossy.

Hmm, interesting. I have not encountered that myself. If we want
that, I'm tempted to think that we should display statistics for each
bitmap index scan - but I'd be somewhat inclined to see if we could
get by with the values that are already stored in a TIDBitmap rather
than adding new ones - e.g. show npages (the number of exact entries),
nchunks (the number of lossy entries), and maxentries. From those,
you can work out the percentage of available entries that were
actually used. The only thing that's a bit annoying about that is
that we'd probably have to copy those values out of the TIDBitmap and
into an executor state node, because the TIDBitmap will subsequently
get modified destructively. But I think that's probably OK.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company