Re: Extra cost of "lossy mode" Bitmap Scan plan - Mailing list pgsql-hackers

From higepon
Subject Re: Extra cost of "lossy mode" Bitmap Scan plan
Date
Msg-id f07386410904280132h7a2579f2te260896bc43390dd@mail.gmail.com
In response to Re: Extra cost of "lossy mode" Bitmap Scan plan  (Greg Stark <stark@enterprisedb.com>)
List pgsql-hackers
Hi.

> There is something odd in this concern. Normally people aren't raising
> and lowering their work_mem so the comparison would be between a plan
> where the planner expects to see n records and a plan where the
> planner expects to see n+1 records where n would fit and n+1 wouldn't.

I see.

> It seems like an awfully narrow corner case where n records would be
> faster as a bitmap index scan but n+1 records would be faster as an
> index scan because the bitmap becomes lossy. The whole point of bitmap
> scans is that they're faster for large scans than index scans.

Hmm. Is this really a narrow corner case?
At least ITAGAKI-san and I have both run into it.
I think the default work_mem value (1MB) may make these issues more common than they look.

Do you think there is no need for the planner to know
whether a plan will be lossy or not?
If the planner knew the cost of lossy mode, it might choose
better plans than it does now.


Alternatively, a second option is to give some hints to people who are
doing performance tuning:

(a) Write a trace log message when a bitmap scan falls into "lossy" mode.

(b) Show whether the scan was "lossy" in EXPLAIN output.
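To make the concern concrete, here is a toy model (not PostgreSQL source; the work_mem size, per-page entry overhead, and recheck cost below are illustrative assumptions, not the real tidbitmap numbers). It shows the discontinuous cost jump the planner currently cannot see when a TID bitmap overflows work_mem and degrades from exact per-tuple entries to lossy per-page entries:

```python
# Toy model of TID bitmap lossification -- illustrative constants only.
WORK_MEM_BYTES = 64 * 1024       # assume a deliberately small work_mem
BYTES_PER_EXACT_PAGE = 46        # assumed overhead of one exact page entry

def bitmap_mode(pages_touched: int) -> str:
    """Exact while the bitmap fits in work_mem, lossy once it would not."""
    needed = pages_touched * BYTES_PER_EXACT_PAGE
    return "exact" if needed <= WORK_MEM_BYTES else "lossy"

def scan_cost(pages_touched: int,
              tuples_per_page: int = 100,
              recheck_cost: float = 0.01) -> float:
    """Page fetch cost, plus a recheck of every tuple on lossy pages."""
    cost = float(pages_touched)  # one unit per heap page fetched
    if bitmap_mode(pages_touched) == "lossy":
        # Lossy pages only record "some tuple on this page matches",
        # so every tuple on them must be rechecked against the qual.
        cost += pages_touched * tuples_per_page * recheck_cost
    return cost
```

With these numbers the threshold sits at 1424 pages: `scan_cost(1424)` is 1424.0, while `scan_cost(1425)` jumps to 2850.0 -- the one-extra-page plan is roughly twice as expensive, which is exactly the step a lossy-aware cost model would let the planner account for.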


Best regards,

-----
MINOWA Taro (Higepon)

Cybozu Labs, Inc.

http://www.monaos.org/
http://code.google.com/p/mosh-scheme/

On Tue, Apr 28, 2009 at 4:51 PM, Greg Stark <stark@enterprisedb.com> wrote:
> On Tue, Apr 28, 2009 at 7:45 AM, higepon <higepon@gmail.com> wrote:
>> "How much extra cost should we add for lossy mode?".
>
> There is something odd in this concern. Normally people aren't raising
> and lowering their work_mem so the comparison would be between a plan
> where the planner expects to see n records and a plan where the
> planner expects to see n+1 records where n would fit and n+1 wouldn't.
>
> It seems like an awfully narrow corner case where n records would be
> faster as a bitmap index scan but n+1 records would be faster as an
> index scan because the bitmap becomes lossy. The whole point of bitmap
> scans is that they're faster for large scans than index scans.
>
> If the logic you're suggesting would kick in at all it would be for a
> narrow range of scan sizes, so the net effect would be to use an index
> scan for small scans, then switch to a bitmap scan, then switch back
> to an index scan when the bitmap scan becomes lossy, then switch back
> to a lossy bitmap scan for large scans. I'm thinking that even if it's
> slightly faster when the planner has perfect inputs the downsides of
> switching back and forth might not be worth it. Especially when you
> consider that the planner is often going on approximate estimates and
> it is probably not switching in precisely the right spot.
>
> --
> greg
>

