On Mon, Jul 8, 2019 at 9:23 AM John Lumby <johnlumby@hotmail.com> wrote:
> Overall, pg-12beta2 yielded a 6.7% reduction in sizes (total pages) of indexes, which was most noticeable with the
> non-unique ones.
> In fact the primary-key index was larger with pg-12.
The avg_leaf_density was actually higher for the primary key index, so
it looks like it really came out slightly ahead on v12. Perhaps you
didn't take deleted_pages into account -- deleted pages are free space
that is reusable by the index but that has yet to be reused. It would
probably make sense to subtract them across the board.
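For what it's worth, the pgstattuple extension reports deleted_pages
right alongside avg_leaf_density, so a comparison that accounts for
both is easy to do. A sketch (the index name is just an example):

    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    -- leaf_pages plus deleted_pages gives a fairer picture of the
    -- space that the index actually has available to it:
    SELECT index_size, leaf_pages, deleted_pages, avg_leaf_density
    FROM pgstatindex('accounts_pkey');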
> Would you have expected better than 6.7%?
I don't think that a test case that runs VACUUM when there are only
4300 deletions and 4300 insertions is particularly realistic, in
general. You might see a larger difference if there was more churn
between each VACUUM run.
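For example, a test cycle that churns a much larger fraction of the
table between VACUUM runs might look something like this (the table
name, key column, and row counts are all made up for illustration):

    -- delete and reinsert ~20% of a 100,000-row table, then VACUUM
    DELETE FROM accounts WHERE aid % 5 = 0;
    INSERT INTO accounts (aid, abalance)
    SELECT 5 * g, 0 FROM generate_series(1, 20000) AS g;
    VACUUM accounts;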
> Although a welcome improvement, I think it is not enough to justify stopping use of setting a lower explicit
> FILLFACTOR. Which then brings me back to thinking there is a case for the subject of this thread, an automatic way
> to preserve density.
I don't think that such an option would make much sense. The "waves of
misery" paper is about smoothing out the frequency of page splits
following bulk loading and a CREATE INDEX. It is not about making
splits occur less often. It's well understood that a certain amount of
free space is the overhead of B-Tree indexes, albeit an overhead that
can be avoided in certain specific instances.
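To be clear about what is being compared here: an explicit fillfactor
is applied like this (index and table names are examples), and only
takes full effect when the index is built or rebuilt:

    CREATE INDEX accounts_aid_idx ON accounts (aid)
        WITH (fillfactor = 70);

    -- or, for an existing index:
    ALTER INDEX accounts_aid_idx SET (fillfactor = 70);
    REINDEX INDEX accounts_aid_idx;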
> And one question:
> I notice that in some pg-11 release, a new config parameter appeared:
> vacuum_cleanup_index_scale_factor
> I have not researched this at all, nor did I set it to anything for my pg-12beta2 run, but it sounds as
> though maybe it could be relevant to this kind of workload - Is that so?
You seem to be worried about keeping indexes as small as possible.
vacuum_cleanup_index_scale_factor won't help with that.
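That setting only controls whether VACUUM is allowed to skip the full
index scan during its cleanup phase when few or no tuples have been
inserted since the last VACUUM; it has no effect on how densely the
index is packed. It can be set per table or globally (table name and
value here are just examples):

    -- per-table storage parameter:
    ALTER TABLE accounts
        SET (vacuum_cleanup_index_scale_factor = 0.05);

    -- or as a GUC, e.g. in postgresql.conf or per session:
    SET vacuum_cleanup_index_scale_factor = 0.05;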
--
Peter Geoghegan