> There *has* to be some overhead, performance wise, in the database
> having to keep track of row-spanning, and being able to reduce that, IMHO,
> is what I see being able to change the blocksize as doing...
If both features were present, I would say increase the blocksize of
the db to the max possible. That would reduce the number of tuples that
have to span blocks. Each span requires another fetch to pick up the rest
of the tuple, so it could get expensive with each successive span, or if
every tuple ended up spanning.
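To make that cost concrete, here is a toy sketch of assembling a spanned
tuple. None of these structures or names are real PostgreSQL code (they are
made up for illustration); the point is just that every link in the chain
is one more block fetch.

#include <stdio.h>
#include <string.h>

#define NBLOCKS 4
#define SEGLEN  8

/* A spanned tuple stored as a chain of per-block segments (hypothetical). */
struct segment {
    char data[SEGLEN + 1];
    int  next;                 /* index of next segment, -1 if last */
};

/* Fake "disk": three segments making up one spanned tuple. */
static struct segment disk[NBLOCKS] = {
    {"spanned ", 1},
    {"tuple ",   2},
    {"payload", -1},
    {"",        -1},
};

static int fetches;            /* count simulated block fetches */

static struct segment fetch_block(int blkno)
{
    fetches++;                 /* each span costs one more fetch */
    return disk[blkno];
}

int main(void)
{
    char tuple[64] = "";
    int blk = 0;

    while (blk != -1) {        /* follow the span chain */
        struct segment s = fetch_block(blk);
        strcat(tuple, s.data);
        blk = s.next;
    }
    printf("\"%s\" assembled in %d block fetches\n", tuple, fetches);
    return 0;
}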
But if we stick with 8k blocksizes, people with tuples between 8k and 16k
would get absolutely killed performance-wise. It would make sense for them
to go to 16k blocks, where reading the extra bytes per block would be
minimal, if anything, compared to fetching and processing the next span(s)
to assemble the whole tuple.
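Some back-of-the-envelope arithmetic for that 8k-vs-16k case (the per-block
overhead figure is just a guess; only the fetch counts matter):

#include <stdio.h>

/* Fetches needed for one tuple when it can span blocks:
 * ceil(tuple / usable space per block). */
static int fetches_needed(int tuple_bytes, int block_bytes, int overhead)
{
    int usable = block_bytes - overhead;
    return (tuple_bytes + usable - 1) / usable;
}

int main(void)
{
    /* A 12k tuple: two fetches plus reassembly at 8k, one fetch at 16k. */
    printf("8k blocks:  %d fetches\n", fetches_needed(12288, 8192, 100));
    printf("16k blocks: %d fetches\n", fetches_needed(12288, 16384, 100));
    return 0;
}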
In summary, the capability to span would be the fallback once someone
has maxed out their blocksize. Each OS would have a different blocksize
max... an AIX driver breaks when going past 16k... I don't know about others.
I'd say make the blocksize a run-time variable and then do the spanning.
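By run-time I mean something like the toy sketch below. It is purely
hypothetical (the setting name and the 16k cap are made up); the point is
just that block buffers get sized at startup instead of from a compile-time
constant like BLCKSZ.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *env = getenv("DB_BLOCKSIZE");          /* assumed setting */
    size_t blocksize = env ? (size_t)atol(env) : 8192; /* default 8k */

    if (blocksize > 16384)     /* e.g. an OS/driver limit, as on AIX */
        blocksize = 16384;

    char *block = malloc(blocksize);   /* buffers sized at run time */
    if (!block)
        return 1;

    printf("using %zu-byte blocks\n", blocksize);
    free(block);
    return 0;
}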
Darren