Tom Lane wrote:
> The way I think it ought to work is that the number of lexemes stored in
> the final pg_statistic entry is statistics_target times a constant
> (perhaps 100). I don't like having it vary depending on tsvector width
I think the existing code puts at most statistics_target elements in a
pg_statistic tuple. In compute_minimal_stats(), num_mcv starts at
stats->attr->attstattarget and is only ever adjusted downwards.
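For reference, the sizing logic looks roughly like this (paraphrased
from memory, not a verbatim quote of analyze.c, so take the names with
a grain of salt):

/*
 * Paraphrase of how compute_minimal_stats() sizes the MCV list:
 * start at attstattarget and only ever shrink the number.
 */
static int
initial_num_mcv(int attstattarget, int track_cnt)
{
    int num_mcv = attstattarget;   /* at most statistics_target entries */

    if (num_mcv > track_cnt)       /* can't store more than we tracked */
        num_mcv = track_cnt;
    return num_mcv;
}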
My original thought was to keep that property for tsvectors (i.e. store
at most statistics_target lexemes) and advise people to set the target
high for their tsvector columns (e.g. 100x the default).
Also, the existing code decides which elements are worth storing as most
common ones by discarding those that are not frequent enough (that's
where num_mcv can get adjusted downwards). I mimicked that for lexemes,
but maybe it just doesn't make sense here?
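The cutoff I'm talking about looks something like this, again
paraphrasing compute_minimal_stats() from memory (the 1.25 fudge factor
and the floor of 2 are what I recall, not gospel; track[] is sorted by
descending count):

typedef struct
{
    int count;          /* occurrences of this value in the sample */
} TrackItem;

static int
apply_frequency_cutoff(TrackItem *track, int num_mcv,
                       int samplerows, double ndistinct)
{
    double avgcount = (double) samplerows / ndistinct;
    double mincount = avgcount * 1.25;  /* want clearly above average */

    if (mincount < 2)
        mincount = 2;                   /* never keep singletons */

    for (int i = 0; i < num_mcv; i++)
    {
        if (track[i].count < mincount)
        {
            num_mcv = i;                /* the rest are too rare */
            break;
        }
    }
    return num_mcv;
}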
> But in any case, given a target number of lexemes to accumulate,
> I'd suggest pruning with that number as the bucket width (pruning
> distance). Or perhaps use some multiple of the target number, but
> the number itself seems about right.
Fine with me; I'm too tired to do the math right now, so I'll take your
word for it :)
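For the archives, here's how I understand the prune step you're
suggesting, with the bucket width w set to the target number of lexemes
(a sketch in the Lossy Counting style; the names are made up and a flat
array stands in for whatever hash table the patch would actually use):

typedef struct
{
    const char *lexeme;
    int         count;  /* occurrences since the entry was created */
    int         delta;  /* bucket number when the entry was created */
} LexemeEntry;

/*
 * After filling bucket b, discard entries that can no longer be
 * frequent: those with count + delta <= b. Returns the new number
 * of live entries.
 */
static int
prune_entries(LexemeEntry *entries, int n, int b)
{
    int keep = 0;

    for (int i = 0; i < n; i++)
    {
        if (entries[i].count + entries[i].delta > b)
            entries[keep++] = entries[i];
    }
    return keep;
}

/*
 * Caller side, pruning once per bucket of w input lexemes:
 *
 *     if (++lexemes_seen % w == 0)
 *         n_entries = prune_entries(entries, n_entries,
 *                                   lexemes_seen / w);
 */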
Cheers,
Jan
--
Jan Urbanski
GPG key ID: E583D7D2
ouden estin