On Wed, 2007-03-28 at 10:51 -0400, Tom Lane wrote:
> Kenneth Marshall <ktm@rice.edu> writes:
> > On Wed, Mar 28, 2007 at 09:46:30AM -0400, Tom Lane wrote:
> >> Would it? How wide is the "user and token" information?
>
> > Sorry about the waste of time. I just noticed that the proposal is
> > only for rows over 128 bytes. The token definition is:
>
> > CREATE TABLE dspam_token_data (
> >     uid smallint,
> >     token bigint,
> >     spam_hits int,
> >     innocent_hits int,
> >     last_hit date
> > );
>
> > which is below the cutoff for the proposal.
More to the point, this looks like a table that has already been
optimised to reduce row length, precisely because it is heavily updated.
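Back-of-envelope, assuming a 23-byte heap tuple header padded to 24,
MAXALIGN 8, and no NULL bitmap (estimated, not measured):

    24 (header) + 2 (smallint) + 6 (pad) + 8 (bigint)
       + 4 + 4 (ints) + 4 (date) = ~52 bytes/tuple

so it sits well under the proposed 128-byte cutoff. pg_column_size()
over a whole row gives a similar ballpark figure (it measures the
composite datum rather than the on-disk tuple):

    SELECT pg_column_size(t.*) AS approx_row_bytes
    FROM dspam_token_data AS t LIMIT 1;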
> Yeah, this illustrates my concern that the proposal is too narrowly
> focused on a specific benchmark.
Not really. I specifically labelled that recommendation as a discussion
point, so if you don't like the limit, please say so. My reasoning for
having a limit at all is that block contention rises at least as fast
as the inverse of row length: the shorter the rows, the more of them
fit on each block, so even in the best case, where rows are randomly
distributed and randomly updated, more concurrent updates land on the
same block.
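To put illustrative numbers on that (assuming the default 8KB block
size and ignoring the page header and fill factor):

    SELECT 8192 / 52  AS narrow_rows_per_block,  -- ~157 tuples/block
           8192 / 512 AS wide_rows_per_block;    -- 16 tuples/block

With roughly ten times as many tuples per block, uniformly random
updates are roughly ten times more likely to hit the same block
concurrently.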
What other aspect of the proposal has anything whatsoever to do with
this single benchmark you think I'm over-fitting to?
You and I discussed this in Toronto, so I'm surprised by your comments.
--
Simon Riggs
EnterpriseDB   http://www.enterprisedb.com