On Tue, 2006-07-11 at 10:46, Josh Berkus wrote:
> Tom,
>
> > Obviously a tree containing many such pages would be awfully inefficient
> > to search, but I think a more common case is that there are a few wide
> > entries in an index of mostly short entries, and so pushing the hard
> > limit up a little would add some flexibility with little performance
> > cost in real-world cases.
> >
> > Have I missed something? Is this worth changing?
>
> Not sure. I don't know that the difference between 2.7K and 3.9K would
> have ever made a difference to me in any real-world case.
One (hopefully) soon-to-be real-world case is index-only queries.
We discussed one approach with Luke, and he expressed interest in
getting it actually done in the not-too-distant future.
> If we're going to tinker with this code, it would be far more valuable
> to automatically truncate b-tree entries at, say, 1K so that they could
> be efficiently indexed.
That would not work if we want to be able to get all the data from the
index. Maybe compressing the keys (as we do for TOAST) would be a
better solution.
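
Roughly the scenario I have in mind (the table and column names below
are made up, purely for illustration):

  -- A table where a few keys are unusually wide.
  CREATE TABLE docs (
      title text PRIMARY KEY,  -- occasionally close to the btree size limit
      body  text
  );

  -- A query we would like to answer from the index alone:
  SELECT title FROM docs WHERE title >= 'foo' AND title < 'fop';

  -- If the index entry were truncated at 1K, the full 'title' could not
  -- be returned without visiting the heap, which defeats the point of an
  -- index-only scan. Lossless compression of the key (as TOAST does for
  -- heap values) keeps the entry small while the complete value can
  -- still be reconstructed from the index.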
> Of course, a quick archives search of -SQL, -Newbie and -General would
> indicate how popular of an issue this is.
It may become popular again once we are able to do index-only scans.
--
----------------
Hannu Krosing
Database Architect
Skype Technologies OÜ
Akadeemia tee 21 F, Tallinn, 12618, Estonia
Skype me: callto:hkrosing
Get Skype for free: http://www.skype.com