Re: Improve Seq scan performance - Mailing list pgsql-performance

From Craig Ringer
Subject Re: Improve Seq scan performance
Date
Msg-id 4917F39F.7080007@postnewspapers.com.au
In response to Re: Improve Seq scan performance  ("Vladimir Sitnikov" <sitnikov.vladimir@gmail.com>)
List pgsql-performance
Vladimir Sitnikov wrote:

> Suppose you want to find all the values that contain '%123%'. Currently
> PostgreSQL will do a seq scan, while the better option might be (and it is)
> to loop through all the items in the index (it will cost 30 I/O), find
> records that truly contain %123% (it will find 20 of them) and do 20 I/O to
> check tuple visibility. That is 50 I/O versus 667 for a seq scan.

That does make sense. The 20 visibility checks/tuple reads have a higher
cost than you've accounted for, though, since they require seeks. Assuming
Pg's default random_page_cost of 4 is about right and that every tuple of
interest sits on a different page, those 20 fetches cost the equivalent of
80 sequential page reads, which still brings the total to only 110.
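
For concreteness, here is a rough sketch of that arithmetic (the table and
column names are made up for illustration, and random_page_cost = 4 is just
the stock default):

  -- The sort of query under discussion, on a hypothetical table:
  --   SELECT * FROM items WHERE val LIKE '%123%';
  --
  -- Full index scan:  30 pages, read sequentially               =  30
  -- Heap fetches:     20 matching tuples, each assumed to sit
  --                   on its own page, at random_page_cost = 4  =  80
  --                                                        total = 110
  -- versus the ~667 sequential page reads of the seq scan.
  SHOW random_page_cost;   -- 4.0 out of the box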

Anyway, sorry I've bothered you about this. I misunderstood how far along
you were in investigating it and hadn't realised you were already very
familiar with Pg and its innards, so I brought up points that might help
someone facing the typical issue of "why doesn't it use an index for
%thing%".
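
For anyone who does hit that typical issue, a quick illustrative sketch
(the table and index here are hypothetical): a plain b-tree index can't be
descended for a pattern with a leading wildcard, because there is no prefix
to search on, so the planner has little choice but a seq scan:

  CREATE TABLE items (id serial PRIMARY KEY, val text);
  CREATE INDEX items_val_idx ON items (val text_pattern_ops);

  -- Leading wildcard: no usable prefix, so expect a seq scan.
  EXPLAIN SELECT * FROM items WHERE val LIKE '%thing%';

  -- Anchored pattern: the b-tree can be used (text_pattern_ops, or an
  -- index built under the C locale, is what allows this).
  EXPLAIN SELECT * FROM items WHERE val LIKE 'thing%';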

> Please, follow the case carefully:  the index is only 30 pages long. Why is
> PostgreSQL doing 2529 I/O? It drives me crazy.

I certainly can't help you there, though I'm interested myself...

--
Craig Ringer
