Oops -- we seem to have a problem with new community logins at the moment, which will hopefully be straightened out soon. You might want to wait a few days if you don't already have a login.
You will need to get a community login (if you don't already have
one), but that is a quick and painless process. Choose an
appropriate topic (like "Performance") and reference the message ID
of the email to which you attached the patch. Don't worry about
the fields for reviewers, committer, or date closed.
Sorry for the administrative overhead, but without it things can
fall through the cracks. You can find an overview of the review
process with links to more detail here:
This patch contains a performance improvement for the fast gin cache. As you may know, the performance of the fast gin cache degrades as it grows. Currently, the size of the fast gin cache is tied to work_mem, which is often set quite high -- far too high to be an appropriate size for the fast gin cache. We have therefore created a separate setting, gin_fast_limit: a global variable that controls the size of the fast gin cache independently of work_mem. The default gin_fast_limit is currently 128kB, though that value may need tweaking; 64kB might work better, but it's hard to say with only my single machine to test on.
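For illustration, here is how the new setting could be exercised, assuming gin_fast_limit behaves like PostgreSQL's other size-valued settings (the exact accepted units are an assumption on my part):

```sql
-- Sketch, assuming gin_fast_limit is a standard size setting added by this patch.
-- Override it for the current session while experimenting:
SET gin_fast_limit = '64kB';

-- Check the value currently in effect:
SHOW gin_fast_limit;

-- It could also be set cluster-wide in postgresql.conf, e.g.:
--   gin_fast_limit = 128kB
```

A session-level SET like this makes it easy to benchmark several cache sizes against the same index without restarting the server.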
On my machine, this patch results in a nice speedup: our test queries improve from about 0.9 ms to about 0.030 ms. Please feel free to try the test case yourself; it should be attached. I can look into additional test cases (tsvectors) if anyone is interested.
In addition to the global limit, we have provided a per-index limit: fast_cache_size. It defaults to -1, which disables it; if the user does not specify a per-index limit, the index simply uses the global limit.
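A hypothetical usage of the per-index limit might look like the following -- the exact reloption syntax and units are assumptions here, modeled on PostgreSQL's standard WITH (...) storage parameters:

```sql
-- Sketch: set the per-index fast cache limit at index creation time
-- (index and table names are illustrative; value assumed to be in kB).
CREATE INDEX idx_docs_tsv ON docs USING gin (tsv)
  WITH (fast_cache_size = 64);

-- Reset to -1 (the default), so the index falls back to the
-- global gin_fast_limit:
ALTER INDEX idx_docs_tsv SET (fast_cache_size = -1);
```

Keeping the per-index value at -1 by default means existing indexes see no behavior change unless a user opts in.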
I would like to thank Andrew Gierth for all his help with this patch. As this is my first patch, his guidance was invaluable; the idea for this performance improvement was entirely his, and I just did the implementation. Thanks for reading and considering this patch!