> > There *has* to be some performance overhead in the database
> > having to keep track of row-spanning, and reducing that, IMHO,
> > is what changing the blocksize would accomplish...
>
> If both features were present, I would say to increase the blocksize of
> the db to the maximum possible. That would reduce the number of tuples
> that are spanned. Each span would require another tuple fetch, so it
> could get expensive with each successive span, or if every tuple were
> spanned.
>
> But if we stick with 8k blocksizes, people with tuples between 8 and 16k
> would get absolutely killed performance-wise. It would make sense for
> them to go to 16k blocks, where the cost of reading the extra bytes per
> block would be minimal, if anything, compared to fetching and processing
> the next span(s) to assemble the whole tuple.
>
> In summary, spanning would be the last resort, after someone has maxed
> out their blocksize. Each OS would have a different blocksize
> maximum...an AIX driver breaks when going past 16k...don't know about
> others.
>
> I'd say make the blocksize a run-time variable and then do the spanning.
If we could query the file system block size at runtime in a portable
way, that would help us pick the best block size, no?
--
Bruce Momjian | 830 Blythe Avenue
maillist@candle.pha.pa.us | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup. | (610) 853-3000(h)