Re: Constant time insertion into highly non-unique - Mailing list pgsql-hackers

From Simon Riggs
Subject Re: Constant time insertion into highly non-unique
Msg-id 1113500204.16721.1951.camel@localhost.localdomain
In response to Re: Constant time insertion into highly non-unique indexes  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Constant time insertion into highly non-unique  ("Jim C. Nasby" <decibel@decibel.org>)
List pgsql-hackers
On Thu, 2005-04-14 at 12:10 -0400, Tom Lane wrote:
> The first of these should of course force a btree split on the first
> page each time it splits, while the second will involve the
> probabilistic moveright on each split.  But the files will be exactly
> the same size.
> 
> [tgl@rh1 ~]$ time psql -f zdecr10 test
> TRUNCATE TABLE
> 
> real    1m41.681s
> user    0m1.424s
> sys     0m0.957s
> [tgl@rh1 ~]$ time psql -f zsame10 test
> TRUNCATE TABLE
> 
> real    1m40.927s
> user    0m1.409s
> sys     0m0.896s
> [tgl@rh1 ~]$

I think that's conclusive.

> So the theory does work, at least for small index entries.  Currently
> repeating with wider ones ...

I think we should adjust the probability for longer item sizes - many
identifiers can be 32 bytes, and many people have a non-unique URL
column, for example. An average of over 2 blocks/insert at 16 bytes is
still one too many for my liking, though I do understand the need for
the randomness.

I'd suggest a move right probability of 97% (divide by 32) for itemsz >
16 bytes and 94% (divide by 16) when itemsz >= 128.

Though I think functional indexes are the way to go there.

Best Regards, Simon Riggs


