Re: Performance of PostgreSQL B+-tree algorithm - Mailing list pgsql-general

From: Kyle Lanclos
Subject: Re: Performance of PostgreSQL B+-tree algorithm
Msg-id: 20120514200808.GD10214@monkey.ucolick.org
In response to: Re: Performance of PostgreSQL B+-tree algorithm (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Performance of PostgreSQL B+-tree algorithm
List: pgsql-general

Tom Lane wrote:
> Well, that would depend on the data type being indexed, which you did
> not specify; and if it's a variable-length type then it's really hard to
> give a concrete answer.

Thanks for the quick reply; I had not appreciated that the Knuth order
would vary according to the data being indexed.

In my specific case, I have an index on (text, double). There are individual
indexes on (text) and (double) that are of some interest, but the main
interest is the two-column index.

The text column in question typically does not contain values longer than
ten characters.

> Basically it's 4 bytes for line pointer, plus 8 bytes for index tuple
> header, plus maxalign'ed size of the index key, divided into page size
> (less a couple dozen bytes for page header).

So, it is the size of the index key that varies depending on the column
type?
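
To make sure I follow, here is a back-of-the-envelope sketch of that
formula applied to my (text, double) index. The ~10-character text value,
the 1-byte short varlena header, and 8-byte MAXALIGN are my assumptions,
so treat the result as a ballpark figure only:

    # Rough fan-out estimate following the formula above. The varlena
    # header size and alignment details are my guesses, not gospel.
    PAGE_SIZE   = 8192    # default BLCKSZ
    PAGE_HEADER = 24      # "a couple dozen bytes"
    MAXALIGN    = 8

    def maxalign(n):
        return (n + MAXALIGN - 1) // MAXALIGN * MAXALIGN

    # Assumed key layout for (text, double): ~10-character text with a
    # 1-byte short varlena header, padded so the 8-byte float is aligned.
    key_size  = maxalign(1 + 10) + 8            # text part, then float8
    item_size = 4 + maxalign(8 + key_size)      # line pointer + index tuple
    print((PAGE_SIZE - PAGE_HEADER) // item_size)   # roughly 220-230 entries

If that arithmetic is right, a leaf page holds a couple hundred entries,
so the effective Knuth order here is in the low hundreds.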

> You could increase the result by building with a page size of more than
> the default 8K, though I've seen no recent experiments suggesting that
> doing so is likely to be a win.

I'm thinking it would have to be a very large increase in page size for
it to have an impact. I'm guessing you also pay a cost on the order of
log(Knuth order) to search within a leaf page once you get there. One
could probably produce graphs showing how many records a table needs
before a page size increase starts to make sense.

--Kyle
